[
  {
    "path": ".gitattributes",
    "content": "# Auto detect text files and perform LF normalization\n* text=auto\n\n"
  },
  {
    "path": ".github/CODEOWNERS",
    "content": "* @pentaho/sp-branch-write"
  },
  {
    "path": ".gitignore",
    "content": "bin/\ndist/\nlib/\nlib-provided/\nstage-pmr/\ntest-lib/\neclipse-bin/\noverride.properties\n.settings/\n.classpath\n.project\n/dev-lib\n/pdi-null\n/legacy/pdi-null\n/pdi-bin\n*.iml\n.idea/\n.vscode/\ntarget/\nrebel.xml\n.DS_Store\n"
  },
  {
    "path": "LICENSE.txt",
    "content": "Pentaho Developer Edition 10.3 Copyright 2024 Hitachi Vantara, LLC; licensed under the \nBusiness Source License 1.1 (BSL). This project may include third party components that\nare individually licensed per the terms indicated by their respective copyright owners\nincluded in text file or in the source code. \n\nLicense text copyright (c) 2020 MariaDB Corporation Ab, All Rights Reserved.\n\"Business Source License\" is a trademark of MariaDB Corporation Ab.\n\nParameters\n\nLicensor:             Hitachi Vantara, LLC.\nLicensed Work:        Pentaho Developer Edition 10.3. The Licensed Work is (c) 2024\n                      Hitachi Vantara, LLC.\nAdditional Use Grant: None\nChange Date:          Four years from the date the Licensed Work is published.\nChange License:       Apache 2.0\n\nFor information about alternative licensing arrangements for the Licensed Work,\nplease contact support@pentaho.com.\n\nNotice\n\nBusiness Source License 1.1\n\nTerms\n\nThe Licensor hereby grants you the right to copy, modify, create derivative\nworks, redistribute, and make non-production use of the Licensed Work. The\nLicensor may make an Additional Use Grant, above, permitting limited production use.\n\nEffective on the Change Date, or the fourth anniversary of the first publicly\navailable distribution of a specific version of the Licensed Work under this\nLicense, whichever comes first, the Licensor hereby grants you rights under\nthe terms of the Change License, and the rights granted in the paragraph\nabove terminate.\n\nIf your use of the Licensed Work does not comply with the requirements\ncurrently in effect as described in this License, you must purchase a\ncommercial license from the Licensor, its affiliated entities, or authorized\nresellers, or you must refrain from using the Licensed Work.\n\nAll copies of the original and modified Licensed Work, and derivative works\nof the Licensed Work, are subject to this License. 
This License applies\nseparately for each version of the Licensed Work and the Change Date may vary\nfor each version of the Licensed Work released by Licensor.\n\nYou must conspicuously display this License on each original or modified copy\nof the Licensed Work. If you receive the Licensed Work in original or\nmodified form from a third party, the terms and conditions set forth in this\nLicense apply to your use of that work.\n\nAny use of the Licensed Work in violation of this License will automatically\nterminate your rights under this License for the current and all other\nversions of the Licensed Work.\n\nThis License does not grant you any right in any trademark or logo of\nLicensor or its affiliates (provided that you may use a trademark or logo of\nLicensor as expressly required by this License).\n\nTO THE EXTENT PERMITTED BY APPLICABLE LAW, THE LICENSED WORK IS PROVIDED ON\nAN \"AS IS\" BASIS. LICENSOR HEREBY DISCLAIMS ALL WARRANTIES AND CONDITIONS,\nEXPRESS OR IMPLIED, INCLUDING (WITHOUT LIMITATION) WARRANTIES OF\nMERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, AND TITLE."
  },
  {
    "path": "README.markdown",
    "content": "Pentaho Big Data Plugin\n=======================\n\nThe Pentaho Big Data Plugin Project provides support for an ever-expanding Big Data community within the Pentaho ecosystem. It is a plugin for the Pentaho Kettle engine which can be used within Pentaho Data Integration (Kettle), Pentaho Reporting, and the Pentaho BI Platform.\n\nBuilding\n--------\nIt's a Maven build, so `mvn clean install` is a typical default for a local build.\n\nPre-requisites\n---------------\n* JDK 11 in your path.\n* Maven 3.3.9 in your path.\n* This [settings.xml](https://raw.githubusercontent.com/pentaho/maven-parent-poms/master/maven-support-files/settings.xml)\n\nHow to use the custom settings.xml\n---------------\nOption 1: Copy this file into your `<user-home>/.m2` folder and name it \"settings.xml\".\nWarning: If you do this, it will become your default settings.xml for all Maven builds.\n\nOption 2: Copy this file into some other folder (possibly the project folder of the project you want to build) and use Maven's `-s` option to build with this settings.xml file. Example: `mvn -s public-settings.xml install`.\n\nThe Pentaho profile defaults to pulling all artifacts through the Pentaho public repository.\nIf you want to try resolving Maven plugin dependencies through the Maven Central repository instead of the Pentaho public repository, activate the \"central\" profile like this:\n\n`mvn -s public-settings.xml -P central install`\n\nIf your build fails to resolve jacoco-maven-plugin version 0.7.7-SNAPSHOT\n---------------\nThe 0.7.7-SNAPSHOT property version for the jacoco-maven-plugin is defined in several releases of the Pentaho parent poms, but it is only available in the Pentaho artifact repositories. 
If you are trying to resolve through Maven Central or other public repositories, you should override the property with a released version, like this:\n\n`mvn -s public-settings.xml -P central install -Djacoco-maven-plugin.version=0.7.7.201606060606`\n\nFurther Reading\n---------------\nAdditional documentation is available on the Community wiki: [Big Data Plugin for Java Developers](https://pentaho-community.atlassian.net/wiki/display/BAD/Getting+Started+for+Java+Developers)\n\nLicense\n-------\nLicensed under the Business Source License 1.1. See LICENSE.txt for more information.\n"
  },
  {
    "path": "api/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-parent</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-api</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <modules>\n    <module>runtimeTest</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "api/runtimeTest/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-api</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-api-runtimeTest</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>jar</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n    <mockito-core.version>5.17.0</mockito-core.version>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-api</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito-core.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.httpcomponents</groupId>\n      <artifactId>httpclient</artifactId>\n      <scope>compile</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.httpcomponents</groupId>\n      <artifactId>httpcore</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      
<artifactId>pentaho-big-data-impl-cluster</artifactId>\n      <version>${pdi.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api-core</artifactId>\n      <version>${pdi.version}</version>\n    </dependency>\n      <dependency>\n          <groupId>org.apache.logging.log4j</groupId>\n          <artifactId>log4j-1.2-api</artifactId>\n          <version>${log4j.version}</version>\n      </dependency>\n  </dependencies>\n  <build>\n    <plugins>\n      <plugin>\n        <artifactId>maven-jar-plugin</artifactId>\n        <version>2.5</version>\n        <executions>\n          <execution>\n            <goals>\n              <goal>test-jar</goal>\n            </goals>\n          </execution>\n        </executions>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/RuntimeTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test;\n\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\n\nimport java.util.Set;\n\n/**\n * Created by bryan on 8/11/15.\n */\npublic interface RuntimeTest {\n  boolean accepts( Object objectUnderTest );\n\n  String getModule();\n\n  String getId();\n\n  String getName();\n\n  boolean isConfigInitTest();\n\n  Set<String> getDependencies();\n\n  RuntimeTestResultSummary runTest( Object objectUnderTest );\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/RuntimeTestProgressCallback.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test;\n\n/**\n * Created by bryan on 8/11/15.\n */\npublic interface RuntimeTestProgressCallback {\n  void onProgress( RuntimeTestStatus runtimeTestStatus );\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/RuntimeTestStatus.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test;\n\nimport org.pentaho.runtime.test.module.RuntimeTestModuleResults;\n\nimport java.util.List;\n\n/**\n * Created by bryan on 8/18/15.\n */\npublic interface RuntimeTestStatus {\n  List<RuntimeTestModuleResults> getModuleResults();\n\n  int getTestsDone();\n\n  int getTestsRunning();\n\n  int getTestsOutstanding();\n\n  boolean isDone();\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/RuntimeTester.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test;\n\n/**\n * Created by bryan on 8/11/15.\n */\npublic interface RuntimeTester {\n  void runtimeTest( Object objectUnderTest, RuntimeTestProgressCallback runtimeTestProgressCallback );\n  void addRuntimeTest( RuntimeTest test );\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/action/RuntimeTestAction.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.action;\n\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\n\n/**\n * Created by bryan on 9/8/15.\n */\npublic interface RuntimeTestAction {\n  String getName();\n  String getDescription();\n  RuntimeTestEntrySeverity getSeverity();\n  RuntimeTestActionPayload getPayload();\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/action/RuntimeTestActionHandler.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.action;\n\n/**\n * Created by bryan on 9/8/15.\n */\npublic interface RuntimeTestActionHandler {\n  boolean canHandle( RuntimeTestAction runtimeTestAction );\n\n  void handle( RuntimeTestAction runtimeTestAction );\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/action/RuntimeTestActionPayload.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.action;\n\n/**\n * Created by bryan on 9/9/15.\n */\npublic interface RuntimeTestActionPayload {\n  /**\n   * This will be called and logged when the Action isn't handled by any registered handlers\n   *\n   * @return the message associated with the payload\n   */\n  String getMessage();\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/action/RuntimeTestActionService.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.action;\n\n/**\n * Created by bryan on 9/8/15.\n */\npublic interface RuntimeTestActionService {\n  void handle( RuntimeTestAction runtimeTestAction );\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/action/impl/HelpUrlPayload.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.action.impl;\n\nimport org.pentaho.runtime.test.action.RuntimeTestActionPayload;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\n\n/**\n * Created by bryan on 9/9/15.\n */\npublic class HelpUrlPayload implements RuntimeTestActionPayload {\n  public static final String HELP_URL_PAYLOAD_MESSAGE = \"HelpUrlPayload.Message\";\n  private final MessageGetter messageGetter;\n  private final String title;\n  private final String header;\n  private final String url;\n\n  public HelpUrlPayload( MessageGetterFactory messageGetterFactory, String title, String header, String url ) {\n    this.messageGetter = messageGetterFactory.create( getClass() );\n    this.title = title;\n    this.header = header;\n    this.url = url;\n  }\n\n  @Override public String getMessage() {\n    return messageGetter.getMessage( HELP_URL_PAYLOAD_MESSAGE, url );\n  }\n\n  public String getTitle() {\n    return title;\n  }\n\n  public String getHeader() {\n    return header;\n  }\n\n  public String getUrl() {\n    return url;\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/action/impl/LoggingRuntimeTestActionHandlerImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.action.impl;\n\nimport org.pentaho.runtime.test.action.RuntimeTestAction;\nimport org.pentaho.runtime.test.action.RuntimeTestActionHandler;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.i18n.impl.BaseMessagesMessageGetterFactoryImpl;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\n\n/**\n * Created by bryan on 9/8/15.\n */\npublic class LoggingRuntimeTestActionHandlerImpl implements RuntimeTestActionHandler {\n  public static final String LOGGING_RUNTIME_TEST_ACTION_HANDLER_IMPL = \"LoggingRuntimeTestActionHandlerImpl.Action\";\n  public static final String LOGGING_RUNTIME_TEST_ACTION_HANDLER_IMPL_MISSING_SEVERITY = \"LoggingRuntimeTestActionHandlerImpl.MissingSeverity\";\n  private final Logger logger;\n  private final MessageGetter messageGetter;\n  private static LoggingRuntimeTestActionHandlerImpl instance =\n    new LoggingRuntimeTestActionHandlerImpl( BaseMessagesMessageGetterFactoryImpl.getInstance() );\n\n  public static LoggingRuntimeTestActionHandlerImpl getInstance() {\n    return instance;\n  }\n\n  public LoggingRuntimeTestActionHandlerImpl( MessageGetterFactory messageGetterFactory ) {\n    this( messageGetterFactory, LogManager.getLogger( LoggingRuntimeTestActionHandlerImpl.class ) );\n  }\n\n  public LoggingRuntimeTestActionHandlerImpl( MessageGetterFactory messageGetterFactory, Logger logger ) {\n    
this.messageGetter = messageGetterFactory.create( LoggingRuntimeTestActionHandlerImpl.class );\n    this.logger = logger;\n  }\n\n  @Override public boolean canHandle( RuntimeTestAction runtimeTestAction ) {\n    return true;\n  }\n\n  private String getMessage( RuntimeTestAction runtimeTestAction ) {\n    return messageGetter\n      .getMessage( LOGGING_RUNTIME_TEST_ACTION_HANDLER_IMPL,\n        runtimeTestAction.getName(), runtimeTestAction.getDescription(),\n        String.valueOf( runtimeTestAction.getPayload() ) );\n  }\n\n  @Override public void handle( RuntimeTestAction runtimeTestAction ) {\n    RuntimeTestEntrySeverity severity = runtimeTestAction.getSeverity();\n    if ( severity == null ) {\n      logger.warn( messageGetter\n        .getMessage( LOGGING_RUNTIME_TEST_ACTION_HANDLER_IMPL_MISSING_SEVERITY, runtimeTestAction.getName(),\n          runtimeTestAction.getDescription(), String.valueOf( runtimeTestAction.getPayload() ) ) );\n      return;\n    }\n    switch( severity ) {\n      case DEBUG:\n        logger.debug( getMessage( runtimeTestAction ) );\n        break;\n      case SKIPPED:\n      case WARNING:\n        logger.warn( getMessage( runtimeTestAction ) );\n        break;\n      case ERROR:\n      case FATAL:\n        logger.error( getMessage( runtimeTestAction ) );\n        break;\n      default:\n        logger.info( getMessage( runtimeTestAction ) );\n        break;\n    }\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/action/impl/RuntimeTestActionImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.action.impl;\n\nimport org.pentaho.runtime.test.action.RuntimeTestAction;\nimport org.pentaho.runtime.test.action.RuntimeTestActionPayload;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\n\n/**\n * Created by bryan on 9/9/15.\n */\npublic class RuntimeTestActionImpl implements RuntimeTestAction {\n  private final String name;\n  private final String description;\n  private final RuntimeTestEntrySeverity severity;\n  private final RuntimeTestActionPayload payload;\n\n  public RuntimeTestActionImpl( String name, String description, RuntimeTestEntrySeverity severity,\n                                RuntimeTestActionPayload payload ) {\n    this.name = name;\n    this.description = description;\n    this.severity = severity;\n    this.payload = payload;\n  }\n\n  @Override public String getName() {\n    return name;\n  }\n\n  @Override public String getDescription() {\n    return description;\n  }\n\n  @Override public RuntimeTestEntrySeverity getSeverity() {\n    return severity;\n  }\n\n  @Override public RuntimeTestActionPayload getPayload() {\n    return payload;\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/action/impl/RuntimeTestActionServiceImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.action.impl;\n\nimport org.pentaho.runtime.test.action.RuntimeTestAction;\nimport org.pentaho.runtime.test.action.RuntimeTestActionHandler;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.runtime.test.i18n.impl.BaseMessagesMessageGetterFactoryImpl;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\n/**\n * Created by bryan on 9/8/15.\n */\npublic class RuntimeTestActionServiceImpl implements RuntimeTestActionService {\n  private final List<RuntimeTestActionHandler> runtimeTestActionHandlers;\n  private final RuntimeTestActionHandler defaultHandler;\n  private static RuntimeTestActionServiceImpl instance;\n\n  /**\n   * Creates the RuntimeTestActionService\n   *\n   * @param runtimeTestActionHandlers list of handlers\n   * @param defaultHandler fallback handler (MUST BE ABLE TO HANDLE ANY PAYLOAD)\n   */\n  public RuntimeTestActionServiceImpl( List<RuntimeTestActionHandler> runtimeTestActionHandlers,\n                                       RuntimeTestActionHandler defaultHandler ) {\n    this.runtimeTestActionHandlers = runtimeTestActionHandlers;\n    this.defaultHandler = defaultHandler;\n  }\n\n  public static RuntimeTestActionServiceImpl getInstance() {\n    if ( instance == null ){\n      LoggingRuntimeTestActionHandlerImpl loggingRuntimeTestActionHandler =\n        LoggingRuntimeTestActionHandlerImpl.getInstance();\n      List<RuntimeTestActionHandler> runtimeTestActionHandlers = new ArrayList<>();\n      runtimeTestActionHandlers.add( loggingRuntimeTestActionHandler );\n  
    instance = new RuntimeTestActionServiceImpl( runtimeTestActionHandlers, loggingRuntimeTestActionHandler );\n    }\n    return instance;\n  }\n\n  @Override public void handle( RuntimeTestAction runtimeTestAction ) {\n    for ( RuntimeTestActionHandler runtimeTestActionHandler : runtimeTestActionHandlers ) {\n      if ( runtimeTestActionHandler.canHandle( runtimeTestAction ) ) {\n        runtimeTestActionHandler.handle( runtimeTestAction );\n        return;\n      }\n    }\n    defaultHandler.handle( runtimeTestAction );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/i18n/MessageGetter.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.i18n;\n\n/**\n * Created by bryan on 8/21/15.\n */\npublic interface MessageGetter {\n  String getMessage( String key, String... parameters );\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/i18n/MessageGetterFactory.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.i18n;\n\n/**\n * Created by bryan on 8/21/15.\n */\npublic interface MessageGetterFactory {\n  MessageGetter create( Class<?> PKG );\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/i18n/impl/BaseMessagesMessageGetterFactoryImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.i18n.impl;\n\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\n\n/**\n * Created by bryan on 8/21/15.\n */\npublic class BaseMessagesMessageGetterFactoryImpl implements MessageGetterFactory {\n  private static BaseMessagesMessageGetterFactoryImpl instance = new BaseMessagesMessageGetterFactoryImpl();\n\n  @Override public MessageGetter create( Class<?> PKG ) {\n    return new BaseMessagesMessageGetterImpl( PKG );\n  }\n\n  public static BaseMessagesMessageGetterFactoryImpl getInstance() {\n    return instance;\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/i18n/impl/BaseMessagesMessageGetterImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.i18n.impl;\n\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\n\n/**\n * Created by bryan on 8/21/15.\n */\npublic class BaseMessagesMessageGetterImpl implements MessageGetter {\n  private final Class<?> PKG;\n\n  public BaseMessagesMessageGetterImpl( Class<?> PKG ) {\n    this.PKG = PKG;\n  }\n\n  @Override public String getMessage( String key, String... parameters ) {\n    if ( parameters != null && parameters.length > 0 ) {\n      return BaseMessages.getString( PKG, key, parameters );\n    } else {\n      return BaseMessages.getString( PKG, key );\n    }\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/impl/RuntimeTestComparator.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.impl;\n\nimport org.pentaho.runtime.test.RuntimeTest;\n\nimport java.util.Comparator;\nimport java.util.Map;\n\n/**\n * Created by bryan on 8/18/15.\n */\npublic class RuntimeTestComparator implements Comparator<RuntimeTest> {\n  private final Map<String, Integer> orderedModules;\n\n  public RuntimeTestComparator( Map<String, Integer> orderedModules ) {\n    this.orderedModules = orderedModules;\n  }\n\n  private Integer nullSafeCompare( Object first, Object second ) {\n    if ( first == null ) {\n      if ( second == null ) {\n        return null;\n      } else {\n        return 1;\n      }\n    }\n    if ( second == null ) {\n      return -1;\n    }\n    if ( first.equals( second ) ) {\n      return 0;\n    }\n    return null;\n  }\n\n  private int compareModuleNames( String o1Module, String o2Module ) {\n    Integer result = nullSafeCompare( o1Module, o2Module );\n    if ( result != null ) {\n      return result;\n    }\n    Integer o1OrderNum = orderedModules.get( o1Module );\n    Integer o2OrderNum = orderedModules.get( o2Module );\n    result = nullSafeCompare( o1OrderNum, o2OrderNum );\n    if ( result != null ) {\n      return result;\n    }\n    return o1Module.compareTo( o2Module );\n  }\n\n  @Override public int compare( RuntimeTest o1, RuntimeTest o2 ) {\n    Integer result = compareModuleNames( o1.getModule(), o2.getModule() );\n    if ( result != 0 ) {\n      return result;\n    }\n    String o1Id = o1.getId();\n    String o2Id = o2.getId();\n    result = nullSafeCompare( o1Id, o2Id );\n    if ( result == 
null ) {\n      // nullSafeCompare returns null when both ids are null (equal) or both are non-null and unequal;\n      // treat a null id pair as equal, otherwise fall back to natural ordering\n      result = ( o1Id == null ) ? 0 : o1Id.compareTo( o2Id );\n    }\n    return result;\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/impl/RuntimeTestRunner.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.impl;\n\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.runtime.test.RuntimeTest;\nimport org.pentaho.runtime.test.RuntimeTestProgressCallback;\nimport org.pentaho.runtime.test.module.RuntimeTestModuleResults;\nimport org.pentaho.runtime.test.module.impl.RuntimeTestModuleResultsImpl;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResult;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\nimport org.pentaho.runtime.test.test.impl.RuntimeTestDelegateWithMoreDependencies;\nimport org.pentaho.runtime.test.test.impl.RuntimeTestResultEntryImpl;\nimport org.pentaho.runtime.test.test.impl.RuntimeTestResultImpl;\n\nimport java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.HashSet;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Set;\nimport java.util.concurrent.ExecutorService;\n\n/**\n * Created by bryan on 8/11/15.\n */\npublic class RuntimeTestRunner {\n  private static final Class<?> PKG = RuntimeTestRunner.class;\n  private final Set<RuntimeTest> remainingTests;\n  private final Object objectUnderTest;\n  private final RuntimeTestProgressCallback runtimeTestProgressCallback;\n  private final ExecutorService executorService;\n  private final Set<String> satisfiedDependencies;\n  private final Set<String> failedDependencies;\n  
private final List<String> runtimeModuleList;\n  private final Map<String, List<String>> stringRuntimeTestModuleToTestIdMap;\n  private final Map<String, RuntimeTestResult> runtimeTestResultMap;\n  private final Set<String> outstandingTestIds;\n  private final Set<String> runningTestIds;\n  private final int numberOfTests;\n\n  @SuppressWarnings( \"unchecked\" )\n  public RuntimeTestRunner( Collection<? extends RuntimeTest> runtimeTests, Object objectUnderTest,\n                            RuntimeTestProgressCallback runtimeTestProgressCallback, ExecutorService executorService ) {\n    this.objectUnderTest = objectUnderTest;\n    runtimeModuleList = new ArrayList<>();\n    stringRuntimeTestModuleToTestIdMap = new HashMap<>();\n    runtimeTestResultMap = new HashMap<>();\n    outstandingTestIds = new HashSet<>();\n    runningTestIds = new HashSet<>();\n\n    Set<RuntimeTest> initTests = new HashSet<>();\n    Set<String> initTestIds = new HashSet<>();\n    Set<RuntimeTest> nonInitTests = new HashSet<>();\n    int numberOfTests = 0;\n    for ( RuntimeTest runtimeTest : runtimeTests ) {\n      if ( runtimeTest.accepts( objectUnderTest ) ) {\n        numberOfTests++;\n        String runtimeTestModule = runtimeTest.getModule();\n        List<String> runtimeIdsForModule = stringRuntimeTestModuleToTestIdMap.get( runtimeTestModule );\n        if ( runtimeIdsForModule == null ) {\n          runtimeModuleList.add( runtimeTestModule );\n          runtimeIdsForModule = new ArrayList<>();\n          stringRuntimeTestModuleToTestIdMap.put( runtimeTestModule, runtimeIdsForModule );\n        }\n        String runtimeTestId = runtimeTest.getId();\n        runtimeIdsForModule.add( runtimeTestId );\n        if ( runtimeTest.isConfigInitTest() ) {\n          initTests.add( runtimeTest );\n          initTestIds.add( runtimeTestId );\n        } else {\n          nonInitTests.add( runtimeTest );\n        }\n      }\n    }\n    this.numberOfTests = numberOfTests;\n    this.remainingTests = 
new HashSet<>( initTests );\n    for ( RuntimeTest nonInitTest : nonInitTests ) {\n      remainingTests.add( new RuntimeTestDelegateWithMoreDependencies( nonInitTest, initTestIds ) );\n    }\n    for ( RuntimeTest remainingTest : remainingTests ) {\n      String remainingTestId = remainingTest.getId();\n      runtimeTestResultMap\n        .put( remainingTestId, new RuntimeTestResultImpl( remainingTest, false, new RuntimeTestResultSummaryImpl(),\n          0L ) );\n      outstandingTestIds.add( remainingTestId );\n    }\n    this.satisfiedDependencies = new HashSet<>();\n    this.failedDependencies = new HashSet<>();\n    this.runtimeTestProgressCallback = runtimeTestProgressCallback;\n    this.executorService = executorService;\n  }\n\n  private void markSkipped( RuntimeTest runtimeTest ) {\n    Set<String> relevantFailed = new HashSet<>( failedDependencies );\n    relevantFailed.retainAll( runtimeTest.getDependencies() );\n\n    // Get one of the dependencies' names for display\n    String failedDependencyName = \"a prerequisite\";\n    if ( !relevantFailed.isEmpty() ) {\n      String failedDependencyId = relevantFailed.iterator().next();\n      RuntimeTestResult runtimeTestResult = runtimeTestResultMap.get( failedDependencyId );\n      if ( runtimeTestResult != null ) {\n        failedDependencyName = runtimeTestResult.getRuntimeTest().getName();\n      }\n\n    }\n\n    // We had a dependency fail so we need to skip\n    String runtimeTestId = runtimeTest.getId();\n    failedDependencies.add( runtimeTestId );\n    outstandingTestIds.remove( runtimeTestId );\n    runningTestIds.remove( runtimeTestId );\n    runtimeTestResultMap.put( runtimeTestId, new RuntimeTestResultImpl( runtimeTest, true,\n      new RuntimeTestResultSummaryImpl( new RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity.SKIPPED,\n        BaseMessages.getString( PKG, \"RuntimeTestRunner.Skipped.Desc\", failedDependencyName ),\n        BaseMessages.getString( PKG, 
\"RuntimeTestRunner.Skipped.Message\", runtimeTest.getName(), relevantFailed ), (Throwable) null ) ), 0L ) );\n  }\n\n  private void callbackState() {\n    callbackState( false );\n  }\n\n  private void callbackState( boolean done ) {\n    if ( runtimeTestProgressCallback != null ) {\n      List<RuntimeTestModuleResults> moduleResults = new ArrayList<>( runtimeModuleList.size() );\n      for ( String runtimeModule : runtimeModuleList ) {\n        List<RuntimeTestResult> runtimeTestResults = new ArrayList<>();\n        Set<RuntimeTest> runningTests = new HashSet<>();\n        HashSet<RuntimeTest> outstandingTests = new HashSet<>();\n        for ( String testId : stringRuntimeTestModuleToTestIdMap.get( runtimeModule ) ) {\n          RuntimeTestResult runtimeTestResult = runtimeTestResultMap.get( testId );\n          runtimeTestResults.add( runtimeTestResult );\n          if ( runningTestIds.contains( testId ) ) {\n            runningTests.add( runtimeTestResult.getRuntimeTest() );\n          } else if ( outstandingTestIds.contains( testId ) ) {\n            outstandingTests.add( runtimeTestResult.getRuntimeTest() );\n          }\n        }\n        moduleResults\n          .add( new RuntimeTestModuleResultsImpl( runtimeModule, runtimeTestResults, runningTests, outstandingTests ) );\n      }\n      int testsRunning = runningTestIds.size();\n      int testsOutstanding = outstandingTestIds.size();\n      int testsDone = numberOfTests - testsOutstanding - testsRunning;\n      runtimeTestProgressCallback.onProgress(\n        new RuntimeTestStatusImpl( Collections.unmodifiableList( moduleResults ), testsDone, testsRunning,\n          testsOutstanding, done ) );\n    }\n  }\n\n  private void runTest( RuntimeTest runtimeTest ) {\n    String eligibleTestId = runtimeTest.getId();\n    RuntimeTestResultSummary runtimeTestResultSummary;\n    long before = System.currentTimeMillis();\n    RuntimeTestEntrySeverity overallSeverity;\n    try {\n      runtimeTestResultSummary = 
runtimeTest.runTest( objectUnderTest );\n      overallSeverity = runtimeTestResultSummary.getOverallStatusEntry().getSeverity();\n    } catch ( Throwable e ) {\n      overallSeverity = RuntimeTestEntrySeverity.FATAL;\n      runtimeTestResultSummary = new RuntimeTestResultSummaryImpl(\n        new RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity.FATAL,\n          BaseMessages.getString( PKG, \"RuntimeTestRunner.Error.Desc\", runtimeTest.getName() ), e.getMessage(), e ) );\n    }\n    long after = System.currentTimeMillis();\n    RuntimeTestResult runtimeTestResult =\n      new RuntimeTestResultImpl( runtimeTest, true, runtimeTestResultSummary, after - before );\n    synchronized ( this ) {\n      if ( overallSeverity == RuntimeTestEntrySeverity.ERROR || overallSeverity == RuntimeTestEntrySeverity.FATAL ) {\n        failedDependencies.add( eligibleTestId );\n      } else {\n        satisfiedDependencies.add( eligibleTestId );\n      }\n      runtimeTestResultMap.put( eligibleTestId, runtimeTestResult );\n      runningTestIds.remove( eligibleTestId );\n      callbackState();\n      notifyAll();\n    }\n  }\n\n  public synchronized void runTests() {\n    callbackState();\n    while ( remainingTests.size() > 0 || runningTestIds.size() > 0 ) {\n      Set<RuntimeTest> eligibleTests = new HashSet<>();\n      Set<RuntimeTest> skippingTests = new HashSet<>();\n      Set<String> possibleToSatisfyIds = new HashSet<>( satisfiedDependencies );\n      for ( RuntimeTest remainingTest : remainingTests ) {\n        possibleToSatisfyIds.add( remainingTest.getId() );\n      }\n      possibleToSatisfyIds.addAll( outstandingTestIds );\n      possibleToSatisfyIds.addAll( runningTestIds );\n      for ( RuntimeTest remainingTest : remainingTests ) {\n        Set<String> remainingTestDependencies = remainingTest.getDependencies();\n        if ( satisfiedDependencies.containsAll( remainingTestDependencies ) ) {\n          eligibleTests.add( remainingTest );\n        } else if ( 
!Collections.disjoint( remainingTestDependencies, failedDependencies ) || !possibleToSatisfyIds\n          .containsAll( remainingTestDependencies ) ) {\n          skippingTests.add( remainingTest );\n          markSkipped( remainingTest );\n        }\n      }\n      remainingTests.removeAll( eligibleTests );\n      remainingTests.removeAll( skippingTests );\n      for ( RuntimeTest eligibleTest : eligibleTests ) {\n        String eligibleTestId = eligibleTest.getId();\n        outstandingTestIds.remove( eligibleTestId );\n        runningTestIds.add( eligibleTestId );\n      }\n      final int wasRunning = runningTestIds.size();\n      for ( final RuntimeTest eligibleTest : eligibleTests ) {\n        executorService.submit( new Runnable() {\n          @Override\n          public void run() {\n            runTest( eligibleTest );\n          }\n        } );\n      }\n      // If we skipped test(s) state has changed and we should rerun immediately, otherwise we can wait until one\n      // finishes\n      if ( skippingTests.size() == 0 ) {\n        if ( wasRunning > 0 ) {\n          while ( wasRunning == runningTestIds.size() ) {\n            try {\n              // Wait until a test finishes\n              wait();\n            } catch ( InterruptedException e ) {\n              // Ignore\n            }\n          }\n        }\n      } else {\n        callbackState();\n      }\n    }\n    callbackState( true );\n  }\n\n  public static class Factory {\n    public RuntimeTestRunner create( Collection<? extends RuntimeTest> runtimeTests, Object objectUnderTest,\n                                     RuntimeTestProgressCallback runtimeTestProgressCallback,\n                                     ExecutorService executorService ) {\n      return new RuntimeTestRunner( runtimeTests, objectUnderTest, runtimeTestProgressCallback, executorService );\n    }\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/impl/RuntimeTestStatusImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.impl;\n\nimport org.pentaho.runtime.test.RuntimeTestStatus;\nimport org.pentaho.runtime.test.module.RuntimeTestModuleResults;\n\nimport java.util.List;\n\n/**\n * Created by bryan on 8/18/15.\n */\npublic class RuntimeTestStatusImpl implements RuntimeTestStatus {\n  private final List<RuntimeTestModuleResults> runtimeTestModuleResults;\n  private final int testsDone;\n  private final int testsRunning;\n  private final int testsOutstanding;\n  private final boolean done;\n\n  public RuntimeTestStatusImpl( List<RuntimeTestModuleResults> runtimeTestModuleResults, int testsDone,\n                                int testsRunning, int testsOutstanding, boolean done ) {\n    this.runtimeTestModuleResults = runtimeTestModuleResults;\n    this.testsDone = testsDone;\n    this.testsRunning = testsRunning;\n    this.testsOutstanding = testsOutstanding;\n    this.done = done;\n  }\n\n  @Override public List<RuntimeTestModuleResults> getModuleResults() {\n    return runtimeTestModuleResults;\n  }\n\n  @Override public int getTestsDone() {\n    return testsDone;\n  }\n\n  @Override public int getTestsRunning() {\n    return testsRunning;\n  }\n\n  @Override public int getTestsOutstanding() {\n    return testsOutstanding;\n  }\n\n  @Override public boolean isDone() {\n    return done;\n  }\n\n  //OperatorWrap isn't helpful for autogenerated methods\n  //CHECKSTYLE:OperatorWrap:OFF\n  @Override public String toString() {\n    return \"RuntimeTestStatusImpl{\" +\n      \"runtimeTestModuleResults=\" + runtimeTestModuleResults +\n      \", 
testsDone=\" + testsDone +\n      \", testsRunning=\" + testsRunning +\n      \", testsOutstanding=\" + testsOutstanding +\n      \", done=\" + done +\n      '}';\n  }\n  //CHECKSTYLE:OperatorWrap:ON\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/impl/RuntimeTesterImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.impl;\n\nimport org.pentaho.runtime.test.RuntimeTest;\nimport org.pentaho.runtime.test.RuntimeTestProgressCallback;\nimport org.pentaho.runtime.test.RuntimeTester;\n\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\n\n/**\n * Created by bryan on 8/12/15.\n */\npublic class RuntimeTesterImpl implements RuntimeTester {\n  private final List<RuntimeTest> runtimeTests;\n  private final ExecutorService executorService;\n  private final RuntimeTestRunner.Factory runtimeTestRunnerFactory;\n  private RuntimeTestComparator runtimeTestComparator;\n  private static RuntimeTesterImpl instance;\n\n  public RuntimeTesterImpl( List<RuntimeTest> runtimeTests, ExecutorService executorService,\n                            String orderedModulesString ) {\n    this( runtimeTests, executorService, orderedModulesString, new RuntimeTestRunner.Factory() );\n  }\n\n  // Synchronized so concurrent first calls cannot create two instances\n  public static synchronized RuntimeTester getInstance() {\n    if ( instance == null ) {\n      List<RuntimeTest> runtimeTests = new ArrayList<>();\n      instance = new RuntimeTesterImpl( runtimeTests, Executors.newCachedThreadPool(), \"Hadoop Configuration,Hadoop File System,Map Reduce,Oozie,Zookeeper\" );\n    }\n    return instance;\n  }\n\n  public RuntimeTesterImpl( List<RuntimeTest> runtimeTests, ExecutorService executorService,\n                            String orderedModulesString, RuntimeTestRunner.Factory runtimeTestRunnerFactory ) {\n    
this.runtimeTests = runtimeTests;\n    this.executorService = executorService;\n    this.runtimeTestRunnerFactory = runtimeTestRunnerFactory;\n    HashMap<String, Integer> orderedModules = new HashMap<>();\n    String[] split = orderedModulesString.split( \",\" );\n    for ( int module = 0; module < split.length; module++ ) {\n      orderedModules.put( split[ module ].trim(), module );\n    }\n    runtimeTestComparator = new RuntimeTestComparator( orderedModules );\n  }\n\n  @Override\n  public void runtimeTest( final Object objectUnderTest,\n                           final RuntimeTestProgressCallback runtimeTestProgressCallback ) {\n    final List<RuntimeTest> runtimeTests = new ArrayList<>( this.runtimeTests );\n    Collections.sort( runtimeTests, runtimeTestComparator );\n    executorService.submit( new Runnable() {\n      @Override public void run() {\n        runtimeTestRunnerFactory.create( runtimeTests, objectUnderTest, runtimeTestProgressCallback, executorService )\n          .runTests();\n      }\n    } );\n  }\n\n  public void addRuntimeTest( RuntimeTest test ) {\n    this.runtimeTests.add( test );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/module/RuntimeTestModuleResults.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.module;\n\nimport org.pentaho.runtime.test.RuntimeTest;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResult;\n\nimport java.util.List;\nimport java.util.Set;\n\n/**\n * Created by bryan on 8/11/15.\n */\npublic interface RuntimeTestModuleResults {\n  String getName();\n\n  List<RuntimeTestResult> getRuntimeTestResults();\n\n  Set<RuntimeTest> getRunningTests();\n\n  Set<RuntimeTest> getOutstandingTests();\n\n  RuntimeTestEntrySeverity getMaxSeverity();\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/module/impl/RuntimeTestModuleResultsImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.module.impl;\n\nimport org.pentaho.runtime.test.RuntimeTest;\nimport org.pentaho.runtime.test.module.RuntimeTestModuleResults;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResult;\n\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.HashSet;\nimport java.util.List;\nimport java.util.Set;\n\n/**\n * Created by bryan on 8/11/15.\n */\npublic class RuntimeTestModuleResultsImpl implements RuntimeTestModuleResults {\n  private final String name;\n  private final List<RuntimeTestResult> runtimeTestResults;\n  private final Set<RuntimeTest> runningTests;\n  private final Set<RuntimeTest> outstandingTests;\n  private final RuntimeTestEntrySeverity maxSeverity;\n\n  public RuntimeTestModuleResultsImpl( String name, List<RuntimeTestResult> runtimeTestResults,\n                                       Set<RuntimeTest> runningTests, Set<RuntimeTest> outstandingTests ) {\n    this.name = name;\n    this.runningTests = Collections.unmodifiableSet( new HashSet<>( runningTests ) );\n    this.outstandingTests = Collections.unmodifiableSet( new HashSet<>( outstandingTests ) );\n    this.runtimeTestResults = Collections.unmodifiableList( new ArrayList<>( runtimeTestResults ) );\n    this.maxSeverity = RuntimeTestEntrySeverity.maxSeverityResult( runtimeTestResults );\n  }\n\n  @Override public String getName() {\n    return name;\n  }\n\n  @Override public List<RuntimeTestResult> getRuntimeTestResults() {\n    return runtimeTestResults;\n  }\n\n 
 @Override public RuntimeTestEntrySeverity getMaxSeverity() {\n    return maxSeverity;\n  }\n\n  @Override public Set<RuntimeTest> getRunningTests() {\n    return runningTests;\n  }\n\n  @Override public Set<RuntimeTest> getOutstandingTests() {\n    return outstandingTests;\n  }\n\n  //OperatorWrap isn't helpful for autogenerated methods\n  //CHECKSTYLE:OperatorWrap:OFF\n  @Override public String toString() {\n    return \"RuntimeTestModuleResultsImpl{\" +\n      \"getName='\" + name + '\\'' +\n      \", runtimeTestResults=\" + runtimeTestResults +\n      \", runningTests=\" + runningTests +\n      \", outstandingTests=\" + outstandingTests +\n      \", maxSeverity=\" + maxSeverity +\n      '}';\n  }\n  //CHECKSTYLE:OperatorWrap:ON\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/network/ConnectivityTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.network;\n\n\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\n\n/**\n * Created by bryan on 8/24/15.\n */\npublic interface ConnectivityTest {\n  RuntimeTestResultEntry runTest();\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/network/ConnectivityTestFactory.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.network;\n\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\n\n/**\n * Created by bryan on 8/24/15.\n */\npublic interface ConnectivityTestFactory {\n  ConnectivityTest create( MessageGetterFactory messageGetterFactory, String hostname, String port,\n                           boolean haPossible );\n\n  ConnectivityTest create( MessageGetterFactory messageGetterFactory, String hostname, String port, boolean haPossible,\n                           RuntimeTestEntrySeverity severityOfFailures );\n\n  ConnectivityTest create( MessageGetterFactory messageGetterFactory, String url, String testPath,\n                           String user, String password );\n\n  ConnectivityTest create( MessageGetterFactory messageGetterFactory, String url, String testPath,\n                           String user, String password, RuntimeTestEntrySeverity severityOfFailures );\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/network/impl/ConnectivityTestFactoryImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.network.impl;\n\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTest;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\n\nimport java.net.URI;\n\n/**\n * Created by bryan on 8/24/15.\n */\npublic class ConnectivityTestFactoryImpl implements ConnectivityTestFactory {\n  @Override public ConnectivityTest create( MessageGetterFactory messageGetterFactory, String hostname, String port,\n                                            boolean haPossible ) {\n    return create( messageGetterFactory, hostname, port, haPossible, RuntimeTestEntrySeverity.FATAL );\n  }\n\n  @Override public ConnectivityTest create( MessageGetterFactory messageGetterFactory, String hostname, String port,\n                                            boolean haPossible, RuntimeTestEntrySeverity severityOfFailures ) {\n    return new ConnectivityTestImpl( messageGetterFactory, hostname, port, haPossible, severityOfFailures );\n  }\n\n  @Override\n  public ConnectivityTest create( MessageGetterFactory messageGetterFactory, String url, String testPath,\n                                  String user, String password ) {\n    return new GatewayConnectivityTestImpl( messageGetterFactory, URI.create( url ), testPath, user, password,\n      RuntimeTestEntrySeverity.FATAL );\n  }\n\n  @Override\n  public ConnectivityTest create( MessageGetterFactory messageGetterFactory, String url, String testPath,\n                  
                String user, String password, RuntimeTestEntrySeverity severityOfFailures ) {\n    return new GatewayConnectivityTestImpl( messageGetterFactory, URI.create( url ), testPath, user, password,\n      severityOfFailures );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/network/impl/ConnectivityTestImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.network.impl;\n\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTest;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.pentaho.runtime.test.test.impl.RuntimeTestResultEntryImpl;\n\nimport java.io.IOException;\nimport java.net.InetSocketAddress;\nimport java.net.InetAddress;\nimport java.net.Proxy;\nimport java.net.Socket;\nimport java.net.UnknownHostException;\nimport java.util.ArrayList;\nimport java.util.List;\n\n/**\n * Created by bryan on 8/14/15.\n */\npublic class ConnectivityTestImpl implements ConnectivityTest {\n  public static final String CONNECT_TEST_HOST_BLANK_DESC = \"ConnectTest.HostBlank.Desc\";\n  public static final String CONNECT_TEST_HOST_BLANK_MESSAGE = \"ConnectTest.HostBlank.Message\";\n  public static final String CONNECT_TEST_HA_DESC = \"ConnectTest.HA.Desc\";\n  public static final String CONNECT_TEST_HA_MESSAGE = \"ConnectTest.HA.Message\";\n  public static final String CONNECT_TEST_PORT_BLANK_DESC = \"ConnectTest.PortBlank.Desc\";\n  public static final String CONNECT_TEST_PORT_BLANK_MESSAGE = \"ConnectTest.PortBlank.Message\";\n  public static final String CONNECT_TEST_CONNECT_SUCCESS_DESC = \"ConnectTest.ConnectSuccess.Desc\";\n  public static final String CONNECT_TEST_CONNECT_SUCCESS_MESSAGE = 
\"ConnectTest.ConnectSuccess.Message\";\n  public static final String CONNECT_TEST_CONNECT_FAIL_DESC = \"ConnectTest.ConnectFail.Desc\";\n  public static final String CONNECT_TEST_CONNECT_FAIL_MESSAGE = \"ConnectTest.ConnectFail.Message\";\n  public static final String CONNECT_TEST_UNKNOWN_HOSTNAME_DESC = \"ConnectTest.UnknownHostname.Desc\";\n  public static final String CONNECT_TEST_UNKNOWN_HOSTNAME_MESSAGE = \"ConnectTest.UnknownHostname.Message\";\n  public static final String CONNECT_TEST_NETWORK_ERROR_DESC = \"ConnectTest.NetworkError.Desc\";\n  public static final String CONNECT_TEST_NETWORK_ERROR_MESSAGE = \"ConnectTest.NetworkError.Message\";\n  public static final String CONNECT_TEST_PORT_NUMBER_FORMAT_DESC = \"ConnectTest.PortNumberFormat.Desc\";\n  public static final String CONNECT_TEST_PORT_NUMBER_FORMAT_MESSAGE = \"ConnectTest.PortNumberFormat.Message\";\n  public static final String CONNECT_TEST_UNREACHABLE_DESC = \"ConnectTest.Unreachable.Desc\";\n  public static final String CONNECT_TEST_UNREACHABLE_MESSAGE = \"ConnectTest.Unreachable.Message\";\n  private static final Class<?> PKG = ConnectivityTestImpl.class;\n  protected final MessageGetter messageGetter;\n  protected final String hostname;\n  protected final String port;\n  private final boolean haPossible;\n  protected final RuntimeTestEntrySeverity severityOfFalures;\n  private final SocketFactory socketFactory;\n  protected final InetAddressFactory inetAddressFactory;\n\n  public ConnectivityTestImpl( MessageGetterFactory messageGetterFactory, String hostname, String port,\n                               boolean haPossible ) {\n    this( messageGetterFactory, hostname, port, haPossible, RuntimeTestEntrySeverity.FATAL );\n  }\n\n  public ConnectivityTestImpl( MessageGetterFactory messageGetterFactory, String hostname, String port,\n                               boolean haPossible,\n                               RuntimeTestEntrySeverity severityOfFailures ) {\n    this( 
messageGetterFactory, hostname, port, haPossible, severityOfFailures, new SocketFactory(),\n      new InetAddressFactory() );\n  }\n\n  public ConnectivityTestImpl( MessageGetterFactory messageGetterFactory, String hostname, String port,\n                               boolean haPossible,\n                               RuntimeTestEntrySeverity severityOfFailures, SocketFactory socketFactory,\n                               InetAddressFactory inetAddressFactory ) {\n    this.messageGetter = messageGetterFactory.create( PKG );\n\n    // The connection information might be parameterized. Since we aren't tied to a transformation or job, in order to\n    // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n    // Here we try to resolve the parameters if we can:\n    Variables variables = new Variables();\n    variables.initializeVariablesFrom( null );\n\n    this.hostname = variables.environmentSubstitute( hostname );\n    this.port = variables.environmentSubstitute( port );\n    this.haPossible = haPossible;\n    this.severityOfFalures = severityOfFailures;\n    this.socketFactory = socketFactory;\n    this.inetAddressFactory = inetAddressFactory;\n  }\n\n  @Override public RuntimeTestResultEntry runTest() {\n    if ( Const.isEmpty( hostname ) ) {\n      return new RuntimeTestResultEntryImpl( severityOfFalures,\n        messageGetter.getMessage( CONNECT_TEST_HOST_BLANK_DESC ),\n        messageGetter.getMessage( CONNECT_TEST_HOST_BLANK_MESSAGE ) );\n    } else if ( Const.isEmpty( port ) ) {\n      if ( haPossible ) {\n        return new RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity.INFO,\n          messageGetter.getMessage( CONNECT_TEST_HA_DESC ),\n          messageGetter.getMessage( CONNECT_TEST_HA_MESSAGE, hostname ) );\n      } else {\n        return new RuntimeTestResultEntryImpl( severityOfFalures,\n          
messageGetter.getMessage( CONNECT_TEST_PORT_BLANK_DESC ),\n          messageGetter.getMessage( CONNECT_TEST_PORT_BLANK_MESSAGE ) );\n      }\n    } else {\n      Socket socket = null;\n      try {\n        if ( !isSocks5ProxyServer() ) {\n          if ( inetAddressFactory.create( hostname ).isReachable( 10 * 1000 ) ) {\n            try {\n              socket = socketFactory.create( hostname, Integer.parseInt( port ) );\n              return new RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity.INFO,\n                      messageGetter.getMessage( CONNECT_TEST_CONNECT_SUCCESS_DESC ),\n                      messageGetter.getMessage( CONNECT_TEST_CONNECT_SUCCESS_MESSAGE, hostname, port ) );\n            } catch ( IOException e ) {\n              return new RuntimeTestResultEntryImpl( severityOfFalures,\n                      messageGetter.getMessage( CONNECT_TEST_CONNECT_FAIL_DESC ),\n                      messageGetter.getMessage( CONNECT_TEST_CONNECT_FAIL_MESSAGE, hostname, port ), e );\n            } finally {\n              if ( socket != null ) {\n                try {\n                  socket.close();\n                } catch ( IOException e ) {\n                  // Ignore\n                }\n              }\n            }\n          } else {\n            return new RuntimeTestResultEntryImpl( severityOfFalures,\n                    messageGetter.getMessage( CONNECT_TEST_UNREACHABLE_DESC, hostname ),\n                    messageGetter.getMessage( CONNECT_TEST_UNREACHABLE_MESSAGE, hostname ) );\n          }\n        } else {\n          String proxyHost = System.getProperty( \"socksProxyHost\" );\n          int proxyPort = Integer.parseInt( System.getProperty( \"socksProxyPort\" ) );\n          SocketFactory proxySocketFactory = new SocketFactory( proxyHost, proxyPort );\n          try {\n            socket = proxySocketFactory.create( hostname, Integer.parseInt( port ) );\n            return new RuntimeTestResultEntryImpl( 
RuntimeTestEntrySeverity.INFO,\n                    messageGetter.getMessage( CONNECT_TEST_CONNECT_SUCCESS_DESC ),\n                    messageGetter.getMessage( CONNECT_TEST_CONNECT_SUCCESS_MESSAGE, hostname, port ) );\n          } catch ( IOException e ) {\n            return new RuntimeTestResultEntryImpl( severityOfFalures,\n                    messageGetter.getMessage( CONNECT_TEST_CONNECT_FAIL_DESC ),\n                    messageGetter.getMessage( CONNECT_TEST_CONNECT_FAIL_MESSAGE, hostname, port ), e );\n          } finally {\n            if ( socket != null ) {\n              try {\n                socket.close();\n              } catch ( IOException e ) {\n                // Ignore\n              }\n            }\n          }\n        }\n      } catch ( UnknownHostException e ) {\n        return new RuntimeTestResultEntryImpl( severityOfFalures,\n          messageGetter.getMessage( CONNECT_TEST_UNKNOWN_HOSTNAME_DESC ),\n          messageGetter.getMessage( CONNECT_TEST_UNKNOWN_HOSTNAME_MESSAGE, hostname ), e );\n      } catch ( IOException e ) {\n        return new RuntimeTestResultEntryImpl( severityOfFalures,\n          messageGetter.getMessage( CONNECT_TEST_NETWORK_ERROR_DESC ),\n          messageGetter.getMessage( CONNECT_TEST_NETWORK_ERROR_MESSAGE, hostname, port ), e );\n      } catch ( NumberFormatException e ) {\n        return new RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity.FATAL,\n          messageGetter.getMessage( CONNECT_TEST_PORT_NUMBER_FORMAT_DESC ),\n          messageGetter.getMessage( CONNECT_TEST_PORT_NUMBER_FORMAT_MESSAGE, port ), e );\n      }\n    }\n  }\n\n  private boolean isSocks5ProxyServer() {\n    String proxyHost = System.getProperty( \"socksProxyHost\" );\n    String proxyPort = System.getProperty( \"socksProxyPort\" );\n\n    return proxyHost != null && !proxyHost.isEmpty() && proxyPort != null && !proxyPort.isEmpty();\n  }\n\n  /**\n   * Pulled out class to enable mock injection in tests\n   */\n  public static class 
SocketFactory {\n    private final String proxyHost;\n    private final int proxyPort;\n\n    public SocketFactory() {\n      this.proxyHost = null;\n      this.proxyPort = -1;\n    }\n\n    public SocketFactory( String proxyHost, int proxyPort ) {\n      this.proxyHost = proxyHost;\n      this.proxyPort = proxyPort;\n    }\n\n    public Socket create( String hostname, int port ) throws IOException {\n      if ( proxyHost != null && proxyPort > 0 ) {\n        Proxy proxy = new Proxy( Proxy.Type.SOCKS, new InetSocketAddress( proxyHost, proxyPort ) );\n        Socket socket = new Socket( proxy );\n        socket.connect( new InetSocketAddress( hostname, port ), 10000 );\n        return socket;\n      } else {\n        return new Socket( hostname, port );\n      }\n    }\n  }\n\n  /**\n   * Pulled out class to enable mock injection in tests\n   */\n  public static class InetAddressFactory {\n    public InetAddress create( String hostname ) throws UnknownHostException {\n      return InetAddress.getByName( hostname );\n    }\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/network/impl/GatewayConnectivityTestImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.network.impl;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.apache.commons.lang.StringUtils;\nimport org.apache.http.HttpResponse;\nimport org.apache.http.client.HttpClient;\nimport org.apache.http.client.methods.HttpGet;\nimport org.apache.http.client.protocol.HttpClientContext;\nimport org.pentaho.di.core.util.HttpClientManager;\nimport org.pentaho.di.core.util.HttpClientUtil;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.pentaho.runtime.test.test.impl.RuntimeTestResultEntryImpl;\n\nimport javax.net.ssl.KeyManager;\nimport javax.net.ssl.SSLContext;\nimport javax.net.ssl.SSLException;\nimport javax.net.ssl.TrustManager;\nimport javax.net.ssl.X509TrustManager;\nimport java.io.IOException;\nimport java.net.URI;\nimport java.net.UnknownHostException;\nimport java.security.KeyManagementException;\nimport java.security.NoSuchAlgorithmException;\nimport java.security.SecureRandom;\nimport java.security.cert.CertificateException;\nimport java.security.cert.X509Certificate;\n\n\n/**\n * Created by dstepanov on 26/04/17.\n */\npublic class GatewayConnectivityTestImpl extends ConnectivityTestImpl {\n  public static final String GATEWAY_CONNECT_TEST_CONNECT_SUCCESS_DESC =\n    \"GatewayConnectTest.Success.Desc\";\n  public static final String GATEWAY_CONNECT_TEST_CONNECT_SUCCESS_MESSAGE =\n    
\"GatewayConnectTest.Success.Message\";\n  public static final String GATEWAY_CONNECT_TEST_CONNECT_UNKNOWN_RETURN_CODE_DESC =\n    \"GatewayConnectTest.UnknownReturnCode.Desc\";\n  public static final String GATEWAY_CONNECT_TEST_CONNECT_UNKNOWN_RETURN_CODE_MESSAGE =\n    \"GatewayConnectTest.UnknownReturnCode.Message\";\n  public static final String GATEWAY_CONNECT_TEST_SERVICE_NOT_FOUND_DESC =\n    \"GatewayConnectTest.ServiceNotFound.Desc\";\n  public static final String GATEWAY_CONNECT_TEST_SERVICE_NOT_FOUND_MESSAGE =\n    \"GatewayConnectTest.ServiceNotFound.Message\";\n  public static final String GATEWAY_CONNECT_TEST_FORBIDDEN_DESC =\n    \"GatewayConnectTest.Forbidden.Desc\";\n  public static final String GATEWAY_CONNECT_TEST_FORBIDDEN_MESSAGE =\n    \"GatewayConnectTest.Forbidden.Message\";\n  public static final String GATEWAY_CONNECT_TLSCONTEXT_DESC =\n    \"GatewayConnectTest.TLSContext.Desc\";\n  public static final String GATEWAY_CONNECT_SSLEXCEPTION_MESSAGE =\n    \"GatewayConnectTest.SSLException.Message\";\n  public static final String GATEWAY_CONNECT_SSLEXCEPTION_DESC =\n    \"GatewayConnectTest.SSLException.Desc\";\n  public static final String GATEWAY_CONNECT_TLSCONTEXT_MESSAGE =\n    \"GatewayConnectTest.TLSContext.Message\";\n  public static final String GATEWAY_CONNECT_TEST_UNAUTHORIZED_DESC =\n    \"GatewayConnectTest.Unauthorized.Desc\";\n  public static final String GATEWAY_CONNECT_TEST_UNAUTHORIZED_MESSAGE =\n    \"GatewayConnectTest.Unauthorized.Message\";\n  public static final String GATEWAY_CONNECT_TLSCONTEXTINIT_DESC =\n    \"GatewayConnectTest.TLSContextInit.Desc\";\n  public static final String GATEWAY_CONNECT_TLSCONTEXTINIT_MESSAGE =\n    \"GatewayConnectTest.TLSContextInit.Message\";\n  public static final String GATEWAY_CONNECT_EXECUTION_FAILED_DESC =\n    \"GatewayConnectTest.ExecutionFailed.Desc\";\n  public static final String GATEWAY_CONNECT_EXECUTION_FAILED_MESSAGE =\n    \"GatewayConnectTest.ExecutionFailed.Message\";\n\n  
private static final Class<?> PKG = GatewayConnectivityTestImpl.class;\n  private final URI uri;\n  private final String path;\n  private final String user;\n  private final String password;\n  private final Variables variables;\n  private HttpClientManager httpClientManager = HttpClientManager.getInstance();\n\n  public GatewayConnectivityTestImpl( MessageGetterFactory messageGetterFactory, URI uri, String testPath,\n                                      String user, String password, RuntimeTestEntrySeverity severity ) {\n    super( messageGetterFactory, uri.getHost(), Integer.toString( uri.getPort() ), true,\n      severity );\n\n    // The connection information might be parameterized. Since we aren't tied to a transformation or job, in order to\n    // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n    // Here we try to resolve the parameters if we can:\n    variables = new Variables();\n    variables.initializeVariablesFrom( null );\n    this.path = variables.environmentSubstitute( testPath );\n    this.password = variables.environmentSubstitute( password );\n    this.user = variables.environmentSubstitute( user );\n    this.uri = uri.resolve( uri.getPath() + path );\n  }\n\n  @Override\n  public RuntimeTestResultEntry runTest() {\n\n    if ( StringUtils.isBlank( hostname ) ) {\n      return new RuntimeTestResultEntryImpl( severityOfFalures,\n        messageGetter.getMessage( CONNECT_TEST_HOST_BLANK_DESC ),\n        messageGetter.getMessage( CONNECT_TEST_HOST_BLANK_MESSAGE ) );\n    } else {\n\n      try {\n        Integer portInt = Integer.parseInt( port );\n        // Ignore ssl certificate issues if KETTLE_KNOX_IGNORE_SSL = true\n        if ( variables.getBooleanValueOfVariable( \"${KETTLE_KNOX_IGNORE_SSL}\", false ) ) {\n          SSLContext ctx = getTlsContext();\n          initContextWithTrustAll( ctx );\n          SSLContext.setDefault( ctx );\n        }\n        String userString = \"\";\n        
HttpClientContext context = null;\n        HttpGet method = new HttpGet( uri.toString() );\n        HttpClient httpClient;\n\n        if ( StringUtils.isNotBlank( user ) ) {\n          userString = user;\n          httpClient = getHttpClient( user, password );\n          context = HttpClientUtil.createPreemptiveBasicAuthentication( uri.getHost(), portInt, user, password );\n        } else {\n          httpClient = getHttpClient();\n        }\n\n        HttpResponse httpResponse =\n          context != null ? httpClient.execute( method, context ) : httpClient.execute( method );\n        Integer returnCode = httpResponse.getStatusLine().getStatusCode();\n\n        switch ( returnCode ) {\n          case 200: {\n            return new RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity.INFO,\n              messageGetter.getMessage( GATEWAY_CONNECT_TEST_CONNECT_SUCCESS_DESC ),\n              messageGetter.getMessage( GATEWAY_CONNECT_TEST_CONNECT_SUCCESS_MESSAGE, uri.toString() ) );\n          }\n          case 404: {\n            return new RuntimeTestResultEntryImpl( severityOfFalures,\n              messageGetter.getMessage( GATEWAY_CONNECT_TEST_SERVICE_NOT_FOUND_DESC ),\n              messageGetter.getMessage( GATEWAY_CONNECT_TEST_SERVICE_NOT_FOUND_MESSAGE, uri.toString() ) );\n          }\n          case 403: {\n            return new RuntimeTestResultEntryImpl( severityOfFalures,\n              messageGetter.getMessage( GATEWAY_CONNECT_TEST_FORBIDDEN_DESC ),\n              messageGetter.getMessage( GATEWAY_CONNECT_TEST_FORBIDDEN_MESSAGE, uri.toString(), userString ) );\n          }\n          case 401: {\n            return new RuntimeTestResultEntryImpl( severityOfFalures,\n              messageGetter.getMessage( GATEWAY_CONNECT_TEST_UNAUTHORIZED_DESC ),\n              messageGetter.getMessage( GATEWAY_CONNECT_TEST_UNAUTHORIZED_MESSAGE, uri.toString(), userString ) );\n          } default: {\n            return new RuntimeTestResultEntryImpl( 
RuntimeTestEntrySeverity.WARNING,\n              messageGetter.getMessage( GATEWAY_CONNECT_TEST_CONNECT_UNKNOWN_RETURN_CODE_DESC ),\n              messageGetter.getMessage( GATEWAY_CONNECT_TEST_CONNECT_UNKNOWN_RETURN_CODE_MESSAGE, userString,\n                returnCode.toString(), uri.toString() ) );\n          }\n        }\n      } catch ( NoSuchAlgorithmException e ) {\n        return new RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity.FATAL,\n          messageGetter.getMessage( GATEWAY_CONNECT_TLSCONTEXT_DESC ),\n          messageGetter.getMessage( GATEWAY_CONNECT_TLSCONTEXT_MESSAGE ), e );\n      } catch ( SSLException e ) {\n        return new RuntimeTestResultEntryImpl( severityOfFalures,\n          messageGetter.getMessage( GATEWAY_CONNECT_SSLEXCEPTION_DESC ),\n          messageGetter.getMessage( GATEWAY_CONNECT_SSLEXCEPTION_MESSAGE, uri.toString(), e.getMessage() ), e );\n      } catch ( UnknownHostException e ) {\n        return new RuntimeTestResultEntryImpl( severityOfFalures,\n          messageGetter.getMessage( CONNECT_TEST_UNKNOWN_HOSTNAME_DESC ),\n          messageGetter.getMessage( CONNECT_TEST_UNKNOWN_HOSTNAME_MESSAGE, uri.getHost() ), e );\n      } catch ( KeyManagementException e ) {\n        return new RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity.FATAL,\n          messageGetter.getMessage( GATEWAY_CONNECT_TLSCONTEXTINIT_DESC ),\n          messageGetter.getMessage( GATEWAY_CONNECT_TLSCONTEXTINIT_MESSAGE ), e );\n      } catch ( IOException e ) {\n        return new RuntimeTestResultEntryImpl( severityOfFalures,\n          messageGetter.getMessage( GATEWAY_CONNECT_EXECUTION_FAILED_DESC ),\n          messageGetter.getMessage( GATEWAY_CONNECT_EXECUTION_FAILED_MESSAGE, uri.toString() ), e );\n      } catch ( NumberFormatException e ) {\n        return new RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity.FATAL,\n          messageGetter.getMessage( CONNECT_TEST_PORT_NUMBER_FORMAT_DESC ),\n          messageGetter.getMessage( 
CONNECT_TEST_PORT_NUMBER_FORMAT_MESSAGE, port ), e );\n      }\n    }\n  }\n\n  void initContextWithTrustAll( SSLContext ctx ) throws KeyManagementException {\n    ctx.init( new KeyManager[ 0 ], new TrustManager[] { new X509TrustManager() {\n\n      @Override public void checkClientTrusted( X509Certificate[] x509Certificates, String s )\n        throws CertificateException {\n\n      }\n\n      @Override public void checkServerTrusted( X509Certificate[] x509Certificates, String s )\n        throws CertificateException {\n\n      }\n\n      @Override public X509Certificate[] getAcceptedIssuers() {\n        return null;\n      }\n    } }, new SecureRandom() );\n  }\n\n  SSLContext getTlsContext() throws NoSuchAlgorithmException {\n    return SSLContext.getInstance( \"TLS\" );\n  }\n\n  @VisibleForTesting\n  HttpClient getHttpClient() {\n    return httpClientManager.createDefaultClient();\n  }\n\n  @VisibleForTesting\n  HttpClient getHttpClient( String user, String password ) {\n    HttpClientManager.HttpClientBuilderFacade clientBuilder = httpClientManager.createBuilder();\n    clientBuilder.setCredentials( user, password );\n    return clientBuilder.build();\n  }\n\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/result/RuntimeTestEntrySeverity.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.result;\n\nimport java.util.Collection;\n\n/**\n * Created by bryan on 8/11/15.\n */\npublic enum RuntimeTestEntrySeverity {\n  DEBUG, INFO, WARNING, SKIPPED, ERROR, FATAL;\n\n  public static RuntimeTestEntrySeverity maxSeverityResult( Collection<RuntimeTestResult> runtimeTestResults ) {\n    RuntimeTestEntrySeverity maxSeverity = null;\n    for ( RuntimeTestResult runtimeTestResult : runtimeTestResults ) {\n      if ( runtimeTestResult.isDone() ) {\n        RuntimeTestEntrySeverity severity = runtimeTestResult.getOverallStatusEntry().getSeverity();\n        if ( maxSeverity == null || ( severity != null && severity.ordinal() > maxSeverity.ordinal() ) ) {\n          maxSeverity = severity;\n        }\n      }\n    }\n    return maxSeverity;\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/result/RuntimeTestResult.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.result;\n\nimport org.pentaho.runtime.test.RuntimeTest;\n\n/**\n * Created by bryan on 8/11/15.\n */\npublic interface RuntimeTestResult extends RuntimeTestResultSummary {\n  RuntimeTest getRuntimeTest();\n\n  boolean isDone();\n\n  long getTimeTaken();\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/result/RuntimeTestResultEntry.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.result;\n\nimport org.pentaho.runtime.test.action.RuntimeTestAction;\n\n/**\n * Created by bryan on 8/11/15.\n */\npublic interface RuntimeTestResultEntry {\n  RuntimeTestEntrySeverity getSeverity();\n\n  String getDescription();\n\n  String getMessage();\n\n  Throwable getException();\n\n  RuntimeTestAction getAction();\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/result/RuntimeTestResultSummary.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.result;\n\nimport java.util.List;\n\n/**\n * Created by bryan on 8/26/15.\n */\npublic interface RuntimeTestResultSummary {\n  RuntimeTestResultEntry getOverallStatusEntry();\n\n  List<RuntimeTestResultEntry> getRuntimeTestResultEntries();\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/result/org/pentaho/runtime/test/result/impl/RuntimeTestResultSummaryImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl;\n\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\n\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.List;\n\n/**\n * Created by bryan on 8/26/15.\n */\npublic class RuntimeTestResultSummaryImpl implements RuntimeTestResultSummary {\n  private final RuntimeTestResultEntry rollupTestResultEntry;\n  private final List<RuntimeTestResultEntry> runtimeTestResultEntries;\n\n  public RuntimeTestResultSummaryImpl() {\n    this( null );\n  }\n\n  @SuppressWarnings( \"unchecked\" )\n  public RuntimeTestResultSummaryImpl( RuntimeTestResultEntry rollupTestResultEntry ) {\n    this( rollupTestResultEntry, Collections.EMPTY_LIST );\n  }\n\n  public RuntimeTestResultSummaryImpl( RuntimeTestResultEntry rollupTestResultEntry,\n                                       List<RuntimeTestResultEntry> runtimeTestResultEntries ) {\n    this.rollupTestResultEntry = rollupTestResultEntry;\n    this.runtimeTestResultEntries = Collections.unmodifiableList( new ArrayList<>( runtimeTestResultEntries ) );\n  }\n\n  @Override public RuntimeTestResultEntry getOverallStatusEntry() {\n    return rollupTestResultEntry;\n  }\n\n  @Override public List<RuntimeTestResultEntry> getRuntimeTestResultEntries() {\n    return runtimeTestResultEntries;\n  }\n\n  //OperatorWrap isn't helpful for autogenerated methods\n  //CHECKSTYLE:OperatorWrap:OFF\n  @Override public String toString() {\n    return 
\"RuntimeTestResultSummaryImpl{\" +\n      \"rollupTestResultEntry=\" + rollupTestResultEntry +\n      \", runtimeTestResultEntries=\" + runtimeTestResultEntries +\n      '}';\n  }\n  //CHECKSTYLE:OperatorWrap:OFF\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/test/impl/BaseRuntimeTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.test.impl;\n\nimport org.pentaho.runtime.test.RuntimeTest;\n\nimport java.util.Collections;\nimport java.util.HashSet;\nimport java.util.Set;\n\n/**\n * Created by bryan on 8/11/15.\n */\npublic abstract class BaseRuntimeTest implements RuntimeTest {\n  private final Class<?> classUnderTest;\n  private final String module;\n  private final String id;\n  private final String name;\n  private final boolean configInitTest;\n  private final Set<String> dependencies;\n\n  public BaseRuntimeTest( Class<?> classUnderTest, String module, String id, String name, Set<String> dependencies ) {\n    this( classUnderTest, module, id, name, false, dependencies );\n  }\n\n  public BaseRuntimeTest( Class<?> classUnderTest, String module, String id, String name, boolean configInitTest,\n                          Set<String> dependencies ) {\n    this.classUnderTest = classUnderTest;\n    this.module = module;\n    this.id = id;\n    this.name = name;\n    this.configInitTest = configInitTest;\n    this.dependencies = Collections.unmodifiableSet( new HashSet<>( dependencies ) );\n  }\n\n  @Override public boolean accepts( Object objectUnderTest ) {\n    return classUnderTest.isInstance( objectUnderTest );\n  }\n\n  @Override public String getModule() {\n    return module;\n  }\n\n  @Override public String getId() {\n    return id;\n  }\n\n  @Override public String getName() {\n    return name;\n  }\n\n  @Override public Set<String> getDependencies() {\n    return dependencies;\n  }\n\n  @Override public boolean isConfigInitTest() {\n    
return configInitTest;\n  }\n\n  //OperatorWrap isn't helpful for autogenerated methods\n  //CHECKSTYLE:OperatorWrap:OFF\n  @Override public String toString() {\n    return \"BaseRuntimeTest{\" +\n      \"module='\" + module + '\\'' +\n      \", id='\" + id + '\\'' +\n      \", getName='\" + name + '\\'' +\n      \", configInitTest=\" + configInitTest +\n      \", dependencies=\" + dependencies +\n      '}';\n  }\n  //CHECKSTYLE:OperatorWrap:ON\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/test/impl/RuntimeTestDelegateWithMoreDependencies.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.test.impl;\n\nimport org.pentaho.runtime.test.RuntimeTest;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\n\nimport java.util.Collections;\nimport java.util.HashSet;\nimport java.util.Set;\n\n/**\n * Created by bryan on 8/17/15.\n */\npublic class RuntimeTestDelegateWithMoreDependencies implements RuntimeTest {\n  private final RuntimeTest delegate;\n  private final Set<String> extraDependencies;\n\n  public RuntimeTestDelegateWithMoreDependencies( RuntimeTest delegate, Set<String> extraDependencies ) {\n    this.delegate = delegate;\n    this.extraDependencies = new HashSet<>( extraDependencies );\n  }\n\n  @Override public boolean accepts( Object objectUnderTest ) {\n    return delegate.accepts( objectUnderTest );\n  }\n\n  @Override public String getModule() {\n    return delegate.getModule();\n  }\n\n  @Override public String getId() {\n    return delegate.getId();\n  }\n\n  @Override public String getName() {\n    return delegate.getName();\n  }\n\n  @Override public boolean isConfigInitTest() {\n    return delegate.isConfigInitTest();\n  }\n\n  @Override public Set<String> getDependencies() {\n    HashSet<String> set = new HashSet<String>( extraDependencies );\n    set.addAll( delegate.getDependencies() );\n    return Collections.unmodifiableSet( set );\n  }\n\n  @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n    return delegate.runTest( objectUnderTest );\n  }\n\n  //OperatorWrap isn't helpful for autogenerated methods\n  //CHECKSTYLE:OperatorWrap:OFF\n  @Override 
public String toString() {\n    return \"RuntimeTestDelegateWithMoreDependencies{\" +\n      \"delegate=\" + delegate +\n      \", extraDependencies=\" + extraDependencies +\n      '}';\n  }\n  //CHECKSTYLE:OperatorWrap:ON\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/test/impl/RuntimeTestResultEntryImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.test.impl;\n\nimport org.pentaho.runtime.test.action.RuntimeTestAction;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\n\n/**\n * Created by bryan on 8/12/15.\n */\npublic class RuntimeTestResultEntryImpl implements RuntimeTestResultEntry {\n  private final RuntimeTestEntrySeverity severity;\n  private final String description;\n  private final String message;\n  private final Throwable exception;\n  private final RuntimeTestAction runtimeTestAction;\n\n  public RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity severity, String description, String message ) {\n    this( severity, description, message, (Throwable) null );\n  }\n\n  public RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity severity, String description, String message,\n                                     RuntimeTestAction runtimeTestAction ) {\n    this( severity, description, message, null, runtimeTestAction );\n  }\n\n  public RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity severity, String description, String message,\n                                     Throwable exception ) {\n    this( severity, description, message, exception, null );\n  }\n\n  public RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity severity, String description, String message,\n                                     Throwable exception, RuntimeTestAction runtimeTestAction ) {\n    this.severity = severity;\n    this.description = description;\n    this.message = message;\n    
this.exception = exception;\n    this.runtimeTestAction = runtimeTestAction;\n  }\n\n  @Override public RuntimeTestEntrySeverity getSeverity() {\n    return severity;\n  }\n\n  @Override public String getDescription() {\n    return description;\n  }\n\n  @Override public String getMessage() {\n    return message;\n  }\n\n  @Override public Throwable getException() {\n    return exception;\n  }\n\n  @Override public RuntimeTestAction getAction() {\n    return runtimeTestAction;\n  }\n\n  //OperatorWrap isn't helpful for autogenerated methods\n  //CHECKSTYLE:OperatorWrap:OFF\n  @Override public String toString() {\n    return \"RuntimeTestResultEntryImpl{\" +\n      \"severity=\" + severity +\n      \", description='\" + description + '\\'' +\n      \", message='\" + message + '\\'' +\n      \", exception=\" + exception +\n      '}';\n  }\n  //CHECKSTYLE:OperatorWrap:ON\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/java/org/pentaho/runtime/test/test/impl/RuntimeTestResultImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.test.impl;\n\nimport org.pentaho.runtime.test.RuntimeTest;\nimport org.pentaho.runtime.test.result.RuntimeTestResult;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\n\nimport java.util.List;\n\n/**\n * Created by bryan on 8/12/15.\n */\npublic class RuntimeTestResultImpl implements RuntimeTestResult {\n  private final RuntimeTest runtimeTest;\n  private final boolean isDone;\n  private final RuntimeTestResultSummary runtimeTestResultSummary;\n  private final long timeTaken;\n\n  public RuntimeTestResultImpl( RuntimeTest runtimeTest, boolean isDone,\n                                RuntimeTestResultSummary runtimeTestResultSummary,\n                                long timeTaken ) {\n    this.runtimeTest = runtimeTest;\n    this.isDone = isDone;\n    this.runtimeTestResultSummary =\n      runtimeTestResultSummary == null ? 
new RuntimeTestResultSummaryImpl( null ) : runtimeTestResultSummary;\n    this.timeTaken = timeTaken;\n  }\n\n  @Override public RuntimeTest getRuntimeTest() {\n    return runtimeTest;\n  }\n\n  @Override public boolean isDone() {\n    return isDone;\n  }\n\n  @Override public long getTimeTaken() {\n    return timeTaken;\n  }\n\n  @Override public RuntimeTestResultEntry getOverallStatusEntry() {\n    return runtimeTestResultSummary.getOverallStatusEntry();\n  }\n\n  @Override public List<RuntimeTestResultEntry> getRuntimeTestResultEntries() {\n    return runtimeTestResultSummary.getRuntimeTestResultEntries();\n  }\n\n  //OperatorWrap isn't helpful for autogenerated methods\n  //CHECKSTYLE:OperatorWrap:OFF\n\n  @Override public String toString() {\n    return \"RuntimeTestResultImpl{\" +\n      \"runtimeTest=\" + runtimeTest +\n      \", isDone=\" + isDone +\n      \", runtimeTestResultSummary=\" + runtimeTestResultSummary +\n      \", timeTaken=\" + timeTaken +\n      '}';\n  }\n\n  //CHECKSTYLE:OperatorWrap:ON\n}\n"
  },
  {
    "path": "api/runtimeTest/src/main/resources/OSGI-INF/blueprint/blueprint.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\n           xmlns:cm=\"http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0\"\n           xsi:schemaLocation=\"http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\n           http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0 http://aries.apache.org/schemas/blueprint-cm/blueprint-cm-1.1.0.xsd\">\n  <cm:property-placeholder persistent-id=\"org.pentaho.runtime.test\"\n                           update-strategy=\"reload\">\n    <cm:default-properties>\n      <cm:property name=\"orderedModules\" value=\"Hadoop Configuration,Hadoop File System,Map Reduce,Oozie,Zookeeper\"/>\n    </cm:default-properties>\n  </cm:property-placeholder>\n\n  <bean id=\"loggingRuntimeTestActionHandlerImpl\"\n        class=\"org.pentaho.runtime.test.action.impl.LoggingRuntimeTestActionHandlerImpl\" scope=\"singleton\">\n    <argument ref=\"baseMessagesMessageGetterFactoryImpl\"/>\n  </bean>\n\n  <bean id=\"runtimeTestActionServiceImpl\" class=\"org.pentaho.runtime.test.action.impl.RuntimeTestActionServiceImpl\"\n        scope=\"singleton\">\n    <argument ref=\"loggingRuntimeTestActionHandlerImpl\"/>\n    <argument ref=\"runtimeTestActionHandlers\"/>\n  </bean>\n\n  <bean id=\"runtimeTesterImpl\" class=\"org.pentaho.runtime.test.impl.RuntimeTesterImpl\" scope=\"singleton\">\n    <argument ref=\"runtimeTests\"/>\n    <argument ref=\"executorService\"/>\n    <argument value=\"${orderedModules}\"/>\n  </bean>\n\n  <bean id=\"connectivityTestFactoryImpl\"\n        class=\"org.pentaho.runtime.test.network.impl.ConnectivityTestFactoryImpl\" scope=\"singleton\"/>\n\n  <bean id=\"baseMessagesMessageGetterFactoryImpl\"\n        class=\"org.pentaho.runtime.test.i18n.impl.BaseMessagesMessageGetterFactoryImpl\" scope=\"singleton\"/>\n\n  <reference-list 
id=\"runtimeTests\" interface=\"org.pentaho.runtime.test.RuntimeTest\"\n                  availability=\"optional\"/>\n  <reference-list id=\"runtimeTestActionHandlers\" interface=\"org.pentaho.runtime.test.action.RuntimeTestActionHandler\"\n                  availability=\"optional\"/>\n  <reference id=\"executorService\" interface=\"java.util.concurrent.ExecutorService\"/>\n\n  <service ref=\"runtimeTesterImpl\" interface=\"org.pentaho.runtime.test.RuntimeTester\"/>\n  <service ref=\"runtimeTestActionServiceImpl\" interface=\"org.pentaho.runtime.test.action.RuntimeTestActionService\"/>\n  <service ref=\"connectivityTestFactoryImpl\"\n           interface=\"org.pentaho.runtime.test.network.ConnectivityTestFactory\"/>\n  <service ref=\"baseMessagesMessageGetterFactoryImpl\"\n           interface=\"org.pentaho.runtime.test.i18n.MessageGetterFactory\"/>\n</blueprint>"
  },
  {
    "path": "api/runtimeTest/src/main/resources/org/pentaho/runtime/test/action/impl/messages/messages_en_US.properties",
    "content": "HelpUrlPayload.Message=Please see help at {0}\nLoggingRuntimeTestActionHandlerImpl.Action=Recommended action: {0}, {1}\\nResource: {2}\nLoggingRuntimeTestActionHandlerImpl.MissingSeverity=Recommended action: {0}, {1}\\nResource: {2}"
  },
  {
    "path": "api/runtimeTest/src/main/resources/org/pentaho/runtime/test/impl/messages/messages_en_US.properties",
    "content": "RuntimeTestRunner.Skipped.Desc=This test was skipped because {0} was not successful.\nRuntimeTestRunner.Skipped.Message=The {0} test was skipped because test {1} was not successful.\nRuntimeTestRunner.Error.Desc=We couldn''t run test {0}.\n"
  },
  {
    "path": "api/runtimeTest/src/main/resources/org/pentaho/runtime/test/network/impl/messages/messages_en_US.properties",
    "content": "ConnectTest.HostBlank.Desc=Hostname is required.\nConnectTest.HostBlank.Message=A hostname was not found for this service.\nConnectTest.HA.Desc=This service supports High Availability.\nConnectTest.HA.Message=Since no port is set, we assume that High Availability has been enabled for {0}.\nConnectTest.PortBlank.Desc=Port number is required.\nConnectTest.PortBlank.Message=Port number is required.\nConnectTest.ConnectSuccess.Desc=Successfully connected to host.\nConnectTest.ConnectSuccess.Message=Successfully connected to {0} at port {1}.\nConnectTest.ConnectFail.Desc=Unable to connect to the host.\nConnectTest.ConnectFail.Message=Unable to connect to {0} at port {1}.\nConnectTest.UnknownHostname.Desc=Hostname is unknown.\nConnectTest.UnknownHostname.Message=Hostname {0} is unknown. Verify that the hostname is valid.\nConnectTest.Unreachable.Desc=Unable to connect to hostname {0}.\nConnectTest.Unreachable.Message=Unable to connect to hostname {0}. Contact the network or Hadoop administrator for help.\nConnectTest.NetworkError.Desc=Unable to connect because of network problems.\nConnectTest.NetworkError.Message=There was a network problem when we tried to connect to hostname {0} and port {1}.\nConnectTest.PortNumberFormat.Desc=The port must be a number.\nConnectTest.PortNumberFormat.Message=Port {0} must be a number.\n\nGatewayConnectTest.Success.Desc=Successfully connected to gateway.\nGatewayConnectTest.Success.Message=Successfully connected to gateway {0}.\nGatewayConnectTest.UnknownReturnCode.Desc=Unknown return code returned from gateway.\nGatewayConnectTest.UnknownReturnCode.Message=Unknown return code {1} returned for user {0} from gateway for uri {2}.\nGatewayConnectTest.ServiceNotFound.Desc=Desired service not found.\nGatewayConnectTest.ServiceNotFound.Message=Service for uri {0} not found.\nGatewayConnectTest.Forbidden.Desc=Forbidden response from gateway.\nGatewayConnectTest.Forbidden.Message=Desired resource {0} is not available for user {1}.\nGatewayConnectTest.TLSContext.Desc=Failed to create TLSContext.\nGatewayConnectTest.TLSContext.Message=Failed to create TLSContext.\nGatewayConnectTest.SSLException.Desc=SSLException.\nGatewayConnectTest.SSLException.Message=SSLException {1} occurred while accessing {0}.\nGatewayConnectTest.Unauthorized.Desc=Authorization is required.\nGatewayConnectTest.Unauthorized.Message=Authorization is required for uri {0}. User {1}.\nGatewayConnectTest.TLSContextInit.Desc=TLSContext initialization failed.\nGatewayConnectTest.TLSContextInit.Message=TLSContext initialization failed.\nGatewayConnectTest.ExecutionFailed.Desc=Unable to check service at gateway.\nGatewayConnectTest.ExecutionFailed.Message=Unable to check service {0} at gateway."
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/RuntimeTestEntryUtil.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test;\n\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\n\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertTrue;\n\n/**\n * Created by bryan on 8/21/15.\n */\npublic class RuntimeTestEntryUtil {\n  public static RuntimeTestResultEntry expectOneEntry( List<RuntimeTestResultEntry> runtimeTestResultEntries ) {\n    assertNotNull( runtimeTestResultEntries );\n    assertEquals( 1, runtimeTestResultEntries.size() );\n    return runtimeTestResultEntries.get( 0 );\n  }\n\n  public static void verifyRuntimeTestResultEntry( RuntimeTestResultEntry runtimeTestResultEntry,\n                                                   RuntimeTestEntrySeverity severity, String desc, String message ) {\n    verifyRuntimeTestResultEntry( runtimeTestResultEntry, severity, desc, message, null );\n  }\n\n  public static Throwable verifyRuntimeTestResultEntry( RuntimeTestResultEntry runtimeTestResultEntry,\n                                                        RuntimeTestEntrySeverity severity, String desc, String message,\n                                                        Class<?> exceptionClass ) {\n    assertNotNull( runtimeTestResultEntry );\n    assertEquals( severity, runtimeTestResultEntry.getSeverity() );\n    assertEquals( desc, runtimeTestResultEntry.getDescription() );\n    assertEquals( message, 
runtimeTestResultEntry.getMessage() );\n    Throwable runtimeTestResultEntryException = runtimeTestResultEntry.getException();\n    if ( exceptionClass == null ) {\n      assertNull( runtimeTestResultEntryException );\n    } else {\n      assertTrue( \"expected exception of type \" + exceptionClass,\n        exceptionClass.isInstance( runtimeTestResultEntryException ) );\n    }\n    return runtimeTestResultEntryException;\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/TestMessageGetter.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test;\n\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\n\nimport static org.junit.Assert.assertNotEquals;\nimport static org.junit.Assert.assertTrue;\n\n/**\n * Created by bryan on 8/21/15.\n */\npublic class TestMessageGetter implements MessageGetter {\n  private final Class<?> PKG;\n\n  public TestMessageGetter( Class<?> PKG ) {\n    this.PKG = PKG;\n  }\n\n  @Override public String getMessage( String key, String... parameters ) {\n    StringBuilder stringBuilder = new StringBuilder( \"BaseMessages equivalent: BaseMessage.getMessage( \" );\n    stringBuilder.append( PKG );\n    stringBuilder.append( \", \\\"\" );\n    stringBuilder.append( key );\n    stringBuilder.append( \"\\\"\" );\n    String realValue;\n    boolean hasParameters = parameters != null && parameters.length > 0;\n    if ( hasParameters ) {\n      realValue = BaseMessages.getString( PKG, key, parameters );\n      stringBuilder.append( \", \\\"\" );\n      for ( String parameter : parameters ) {\n        stringBuilder.append( parameter );\n        stringBuilder.append( \"\\\", \\\"\" );\n      }\n      stringBuilder.setLength( stringBuilder.length() - 3 );\n    } else {\n      realValue = BaseMessages.getString( PKG, key );\n    }\n    assertNotEquals( \"!\" + key + \"!\", realValue );\n    stringBuilder.append( \" )\" );\n    if ( hasParameters ) {\n      for ( String parameter : parameters ) {\n        assertTrue( \"Expected \" + realValue + \" to contain \\\"\" + parameter + \"\\\"\",\n          
realValue.contains( parameter ) );\n      }\n    }\n    return stringBuilder.toString();\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/TestMessageGetterFactory.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test;\n\n\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\n\n/**\n * Created by bryan on 8/21/15.\n */\npublic class TestMessageGetterFactory implements MessageGetterFactory {\n  @Override public MessageGetter create( Class<?> PKG ) {\n    return new TestMessageGetter( PKG );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/action/impl/HelpUrlPayloadTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.action.impl;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.runtime.test.TestMessageGetterFactory;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\n\nimport static org.junit.Assert.assertEquals;\n\n/**\n * Created by bryan on 9/10/15.\n */\npublic class HelpUrlPayloadTest {\n  private MessageGetter messageGetter;\n  private String title;\n  private String header;\n  private String url;\n  private HelpUrlPayload helpUrlPayload;\n\n  @Before\n  public void setup() {\n    TestMessageGetterFactory messageGetterFactory = new TestMessageGetterFactory();\n    messageGetter = messageGetterFactory.create( HelpUrlPayload.class );\n    title = \"title\";\n    header = \"header\";\n    url = \"url\";\n    helpUrlPayload = new HelpUrlPayload( messageGetterFactory, title, header, url );\n  }\n\n  @Test\n  public void testGetTitle() {\n    assertEquals( title, helpUrlPayload.getTitle() );\n  }\n\n  @Test\n  public void testGetHeader() {\n    assertEquals( header, helpUrlPayload.getHeader() );\n  }\n\n  @Test\n  public void testGetUrl() {\n    assertEquals( url, helpUrlPayload.getUrl() );\n  }\n\n  @Test\n  public void testGetMessage() {\n    assertEquals( messageGetter.getMessage( HelpUrlPayload.HELP_URL_PAYLOAD_MESSAGE, url ),\n      helpUrlPayload.getMessage() );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/action/impl/LoggingRuntimeTestActionHandlerImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.action.impl;\n\nimport org.apache.logging.log4j.Logger;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.runtime.test.TestMessageGetterFactory;\nimport org.pentaho.runtime.test.action.RuntimeTestAction;\nimport org.pentaho.runtime.test.action.RuntimeTestActionPayload;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\n\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 9/10/15.\n */\npublic class LoggingRuntimeTestActionHandlerImplTest {\n  private MessageGetter messageGetter;\n  private Logger logger;\n  private LoggingRuntimeTestActionHandlerImpl loggingRuntimeTestActionHandler;\n  private RuntimeTestAction runtimeTestAction;\n  private String actionDescription;\n  private String actionName;\n  private RuntimeTestActionPayload runtimeTestActionPayload;\n\n  @Before\n  public void setup() {\n    TestMessageGetterFactory messageGetterFactory = new TestMessageGetterFactory();\n    messageGetter = messageGetterFactory.create( LoggingRuntimeTestActionHandlerImpl.class );\n    logger = mock( Logger.class );\n    loggingRuntimeTestActionHandler = new LoggingRuntimeTestActionHandlerImpl( messageGetterFactory, logger );\n    runtimeTestAction = mock( RuntimeTestAction.class );\n    actionName = \"actionName\";\n    actionDescription = \"actionDescription\";\n    runtimeTestActionPayload = 
mock( RuntimeTestActionPayload.class );\n  }\n\n  @Test\n  public void testCanHandle() {\n    // Should work with least specific payload as it always returns true\n    when( runtimeTestAction.getPayload() ).thenReturn( mock( RuntimeTestActionPayload.class ) );\n    assertTrue( loggingRuntimeTestActionHandler.canHandle( runtimeTestAction ) );\n  }\n\n  private void handleSetup( RuntimeTestEntrySeverity severity ) {\n    when( runtimeTestAction.getSeverity() ).thenReturn( severity );\n    when( runtimeTestAction.getName() ).thenReturn( actionName );\n    when( runtimeTestAction.getDescription() ).thenReturn( actionDescription );\n    when( runtimeTestAction.getPayload() ).thenReturn( runtimeTestActionPayload );\n    loggingRuntimeTestActionHandler.handle( runtimeTestAction );\n  }\n\n  @Test\n  public void testHandleNullSeverity() {\n    handleSetup( null );\n    verify( logger ).warn( messageGetter\n      .getMessage( LoggingRuntimeTestActionHandlerImpl.LOGGING_RUNTIME_TEST_ACTION_HANDLER_IMPL_MISSING_SEVERITY,\n        actionName, actionDescription, runtimeTestActionPayload.toString() ) );\n  }\n\n  @Test\n  public void testHandleDebugSeverity() {\n    handleSetup( RuntimeTestEntrySeverity.DEBUG );\n    verify( logger ).debug( messageGetter\n      .getMessage( LoggingRuntimeTestActionHandlerImpl.LOGGING_RUNTIME_TEST_ACTION_HANDLER_IMPL,\n        actionName, actionDescription, runtimeTestActionPayload.toString() ) );\n  }\n\n  @Test\n  public void testHandleInfoSeverity() {\n    handleSetup( RuntimeTestEntrySeverity.INFO );\n    verify( logger ).info( messageGetter\n      .getMessage( LoggingRuntimeTestActionHandlerImpl.LOGGING_RUNTIME_TEST_ACTION_HANDLER_IMPL,\n        actionName, actionDescription, runtimeTestActionPayload.toString() ) );\n  }\n\n  @Test\n  public void testHandleWarningSeverity() {\n    handleSetup( RuntimeTestEntrySeverity.WARNING );\n    verify( logger ).warn( messageGetter\n      .getMessage( 
LoggingRuntimeTestActionHandlerImpl.LOGGING_RUNTIME_TEST_ACTION_HANDLER_IMPL,\n        actionName, actionDescription, runtimeTestActionPayload.toString() ) );\n  }\n\n  @Test\n  public void testHandleSkippedSeverity() {\n    handleSetup( RuntimeTestEntrySeverity.SKIPPED );\n    verify( logger ).warn( messageGetter\n      .getMessage( LoggingRuntimeTestActionHandlerImpl.LOGGING_RUNTIME_TEST_ACTION_HANDLER_IMPL,\n        actionName, actionDescription, runtimeTestActionPayload.toString() ) );\n  }\n\n  @Test\n  public void testHandleErrorSeverity() {\n    handleSetup( RuntimeTestEntrySeverity.ERROR );\n    verify( logger ).error( messageGetter\n      .getMessage( LoggingRuntimeTestActionHandlerImpl.LOGGING_RUNTIME_TEST_ACTION_HANDLER_IMPL,\n        actionName, actionDescription, runtimeTestActionPayload.toString() ) );\n  }\n\n  @Test\n  public void testHandleFatalSeverity() {\n    handleSetup( RuntimeTestEntrySeverity.FATAL );\n    verify( logger ).error( messageGetter\n      .getMessage( LoggingRuntimeTestActionHandlerImpl.LOGGING_RUNTIME_TEST_ACTION_HANDLER_IMPL,\n        actionName, actionDescription, runtimeTestActionPayload.toString() ) );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/action/impl/RuntimeTestActionImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.action.impl;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.runtime.test.action.RuntimeTestActionPayload;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\n\n/**\n * Created by bryan on 9/10/15.\n */\npublic class RuntimeTestActionImplTest {\n  private String name;\n  private String description;\n  private RuntimeTestEntrySeverity severity;\n  private RuntimeTestActionPayload payload;\n  private RuntimeTestActionImpl runtimeTestAction;\n\n  @Before\n  public void setup() {\n    name = \"name\";\n    description = \"description\";\n    severity = RuntimeTestEntrySeverity.DEBUG;\n    payload = mock( RuntimeTestActionPayload.class );\n    runtimeTestAction = new RuntimeTestActionImpl( name, description, severity, payload );\n  }\n\n  @Test\n  public void testGetName() {\n    assertEquals( name, runtimeTestAction.getName() );\n  }\n\n  @Test\n  public void testGetDescription() {\n    assertEquals( description, runtimeTestAction.getDescription() );\n  }\n\n  @Test\n  public void testGetSeverity() {\n    assertEquals( severity, runtimeTestAction.getSeverity() );\n  }\n\n  @Test\n  public void testGetPayload() {\n    assertEquals( payload, runtimeTestAction.getPayload() );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/action/impl/RuntimeTestActionServiceImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.action.impl;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.runtime.test.action.RuntimeTestAction;\nimport org.pentaho.runtime.test.action.RuntimeTestActionHandler;\n\nimport java.util.Arrays;\n\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.never;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.verifyNoMoreInteractions;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 9/10/15.\n */\npublic class RuntimeTestActionServiceImplTest {\n  private RuntimeTestActionHandler runtimeTestActionHandler;\n  private RuntimeTestActionHandler defaultHandler;\n  private RuntimeTestActionServiceImpl runtimeTestActionService;\n  private RuntimeTestAction runtimeTestAction;\n\n  @Before\n  public void setup() {\n    runtimeTestActionHandler = mock( RuntimeTestActionHandler.class );\n    defaultHandler = mock( RuntimeTestActionHandler.class );\n    runtimeTestActionService =\n      new RuntimeTestActionServiceImpl( Arrays.asList( runtimeTestActionHandler ), defaultHandler );\n    runtimeTestAction = mock( RuntimeTestAction.class );\n  }\n\n  @Test\n  public void testHandleDefault() {\n    when( runtimeTestActionHandler.canHandle( runtimeTestAction ) ).thenReturn( false );\n    runtimeTestActionService.handle( runtimeTestAction );\n    verify( runtimeTestActionHandler, never() ).handle( runtimeTestAction );\n    verify( defaultHandler ).handle( runtimeTestAction );\n    verifyNoMoreInteractions( defaultHandler );\n  }\n\n  @Test\n  
public void testHandleNormal() {\n    when( runtimeTestActionHandler.canHandle( runtimeTestAction ) ).thenReturn( true );\n    runtimeTestActionService.handle( runtimeTestAction );\n    verify( runtimeTestActionHandler ).handle( runtimeTestAction );\n    verifyNoMoreInteractions( defaultHandler );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/i18n/impl/BaseMessagesMessageGetterFactoryImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.i18n.impl;\n\nimport org.junit.Before;\nimport org.junit.Test;\n\nimport static org.junit.Assert.assertTrue;\n\n/**\n * Created by bryan on 8/27/15.\n */\npublic class BaseMessagesMessageGetterFactoryImplTest {\n  private BaseMessagesMessageGetterFactoryImpl baseMessagesMessageGetterFactory;\n\n  @Before\n  public void setup() {\n    baseMessagesMessageGetterFactory = new BaseMessagesMessageGetterFactoryImpl();\n  }\n\n  @Test\n  public void testCreate() {\n    assertTrue( baseMessagesMessageGetterFactory\n      .create( BaseMessagesMessageGetterFactoryImplTest.class ) instanceof BaseMessagesMessageGetterImpl );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/i18n/impl/BaseMessagesMessageGetterImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.i18n.impl;\n\nimport org.junit.Before;\nimport org.junit.Test;\n\nimport static org.junit.Assert.assertEquals;\n\n/**\n * Created by bryan on 8/27/15.\n */\npublic class BaseMessagesMessageGetterImplTest {\n  private BaseMessagesMessageGetterImpl baseMessagesMessageGetter;\n\n  @Before\n  public void setup() {\n    baseMessagesMessageGetter = new BaseMessagesMessageGetterImpl( BaseMessagesMessageGetterFactoryImplTest.class );\n  }\n\n  @Test\n  public void testGetMessage() {\n    String message = \"message\";\n    String expected = \"!\" + message + \"!\";\n    assertEquals( expected, baseMessagesMessageGetter.getMessage( message ) );\n    assertEquals( expected, baseMessagesMessageGetter.getMessage( message, \"testParam\" ) );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/impl/RuntimeTestComparatorTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.impl;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.runtime.test.RuntimeTest;\n\nimport java.util.HashMap;\nimport java.util.Map;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 8/20/15.\n */\npublic class RuntimeTestComparatorTest {\n  private RuntimeTestComparator runtimeTestComparator;\n  private Map<String, Integer> orderedModules;\n  private RuntimeTest runtimeTest1;\n  private RuntimeTest runtimeTest2;\n  private String d = \"d\";\n  private String c = \"c\";\n  private String a = \"a\";\n  private String b = \"b\";\n\n  @Before\n  public void setup() {\n    orderedModules = new HashMap<>();\n    orderedModules.put( d, 0 );\n    orderedModules.put( c, 1 );\n    orderedModules.put( a, 2 );\n    orderedModules.put( b, 3 );\n    runtimeTestComparator = new RuntimeTestComparator( orderedModules );\n    runtimeTest1 = mock( RuntimeTest.class );\n    runtimeTest2 = mock( RuntimeTest.class );\n  }\n\n  @Test\n  public void testModuleSameOrderedIdsSame() {\n    when( runtimeTest1.getModule() ).thenReturn( a );\n    when( runtimeTest2.getModule() ).thenReturn( a );\n    when( runtimeTest1.getId() ).thenReturn( b );\n    when( runtimeTest2.getId() ).thenReturn( b );\n    assertEquals( 0, runtimeTestComparator.compare( runtimeTest1, runtimeTest2 ) );\n  }\n\n  @Test\n  public void testModuleSameOrderedIdsDifferent1() {\n    when( 
runtimeTest1.getModule() ).thenReturn( a );\n    when( runtimeTest2.getModule() ).thenReturn( a );\n    when( runtimeTest1.getId() ).thenReturn( a );\n    when( runtimeTest2.getId() ).thenReturn( b );\n    assertTrue( runtimeTestComparator.compare( runtimeTest1, runtimeTest2 ) < 0 );\n  }\n\n  @Test\n  public void testModuleSameOrderedIdsDifferent2() {\n    when( runtimeTest1.getModule() ).thenReturn( a );\n    when( runtimeTest2.getModule() ).thenReturn( a );\n    when( runtimeTest1.getId() ).thenReturn( b );\n    when( runtimeTest2.getId() ).thenReturn( a );\n    assertTrue( runtimeTestComparator.compare( runtimeTest1, runtimeTest2 ) > 0 );\n  }\n\n  @Test\n  public void testModuleSameUnorderedIdsSame() {\n    when( runtimeTest1.getModule() ).thenReturn( \"e\" );\n    when( runtimeTest2.getModule() ).thenReturn( \"e\" );\n    when( runtimeTest1.getId() ).thenReturn( b );\n    when( runtimeTest2.getId() ).thenReturn( b );\n    assertEquals( 0, runtimeTestComparator.compare( runtimeTest1, runtimeTest2 ) );\n  }\n\n  @Test\n  public void testModuleDifferentOrdered() {\n    when( runtimeTest1.getModule() ).thenReturn( a );\n    when( runtimeTest2.getModule() ).thenReturn( b );\n    when( runtimeTest1.getId() ).thenReturn( d );\n    when( runtimeTest2.getId() ).thenReturn( c );\n    assertTrue( runtimeTestComparator.compare( runtimeTest1, runtimeTest2 ) < 0 );\n  }\n\n  @Test\n  public void testModuleDifferentFirstOrdered() {\n    orderedModules.remove( a );\n    when( runtimeTest1.getModule() ).thenReturn( b );\n    when( runtimeTest2.getModule() ).thenReturn( a );\n    when( runtimeTest1.getId() ).thenReturn( d );\n    when( runtimeTest2.getId() ).thenReturn( c );\n    assertTrue( runtimeTestComparator.compare( runtimeTest1, runtimeTest2 ) < 0 );\n  }\n\n  @Test\n  public void testModuleDifferentSecondOrdered() {\n    orderedModules.remove( b );\n    when( runtimeTest1.getModule() ).thenReturn( b );\n    when( runtimeTest2.getModule() ).thenReturn( a );\n    when( runtimeTest1.getId() ).thenReturn( d );\n    when( runtimeTest2.getId() ).thenReturn( c );\n    assertTrue( runtimeTestComparator.compare( runtimeTest1, runtimeTest2 ) > 0 );\n  }\n\n  @Test\n  public void testModuleDifferentNotOrdered() {\n    orderedModules.remove( a );\n    orderedModules.remove( b );\n    when( runtimeTest1.getModule() ).thenReturn( a );\n    when( runtimeTest2.getModule() ).thenReturn( b );\n    when( runtimeTest1.getId() ).thenReturn( d );\n    when( runtimeTest2.getId() ).thenReturn( c );\n    assertTrue( runtimeTestComparator.compare( runtimeTest1, runtimeTest2 ) < 0 );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/impl/RuntimeTestRunnerTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.impl;\n\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.runtime.test.RuntimeTest;\nimport org.pentaho.runtime.test.RuntimeTestProgressCallback;\nimport org.pentaho.runtime.test.RuntimeTestStatus;\nimport org.pentaho.runtime.test.module.RuntimeTestModuleResults;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResult;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\nimport org.pentaho.runtime.test.test.impl.BaseRuntimeTest;\nimport org.pentaho.runtime.test.test.impl.RuntimeTestResultEntryImpl;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collections;\nimport java.util.HashSet;\nimport java.util.List;\nimport java.util.Set;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.atomic.AtomicBoolean;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\n\n/**\n * Created by bryan on 8/12/15.\n */\npublic class RuntimeTestRunnerTest {\n  private ExecutorService executorService;\n  private TestRuntimeTest moduleATestA;\n  private TestRuntimeTest moduleATestB;\n  private TestRuntimeTest moduleATestC;\n  private TestRuntimeTest moduleBTestA;\n  private 
TestRuntimeTest moduleBTestB;\n  private TestRuntimeTest moduleBTestC;\n  private Object objectUnderTest;\n  private TestRuntimeTest unsatisfiableDependencyA;\n  private TestRuntimeTest moduleCTestA;\n  private TestRuntimeTest moduleATestD;\n\n  private static Set<String> dependenciesToIds( Set<TestRuntimeTest> testRuntimeTests ) {\n    Set<String> result = new HashSet<>();\n    for ( TestRuntimeTest testRuntimeTest : testRuntimeTests ) {\n      result.add( testRuntimeTest.getId() );\n    }\n    return result;\n  }\n\n  @Before\n  public void setup() {\n    executorService = Executors.newCachedThreadPool();\n    RuntimeTestResultEntryImpl overallEntry =\n      new RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity.INFO, \"testDesc\", \"testMessage\" );\n    unsatisfiableDependencyA = new TestRuntimeTest( \"unsatisfiableDependency\", \"unsatisfiableDependencyTestA\", \"Test A\",\n      new HashSet<>( Arrays.asList(\n        new TestRuntimeTest( \"fake-module\", \"fake-test-id\", \"fake-getName\", new HashSet<TestRuntimeTest>(), 5,\n          overallEntry, new ArrayList<RuntimeTestResultEntry>(), false ) ) ), 5, overallEntry,\n      new ArrayList<RuntimeTestResultEntry>(),\n      false );\n    moduleATestA =\n      new TestRuntimeTest( \"moduleA\", \"moduleATestA\", \"Test A\", new HashSet<>( Arrays.<TestRuntimeTest>asList() ), 5,\n        overallEntry,\n        new ArrayList<RuntimeTestResultEntry>(), true );\n    moduleATestB =\n      new TestRuntimeTest( \"moduleA\", \"moduleATestB\", \"Test B\", new HashSet<>( Arrays.asList( moduleATestA ) ), 5,\n        overallEntry,\n        new ArrayList<RuntimeTestResultEntry>(), true );\n    moduleATestC =\n      new TestRuntimeTest( \"moduleA\", \"moduleATestC\", \"Test C\", new HashSet<>( Arrays.asList( moduleATestB ) ), 5,\n        overallEntry,\n        new ArrayList<RuntimeTestResultEntry>(), true );\n    moduleATestD =\n      new TestRuntimeTest( \"moduleA\", \"moduleATestD\", \"Test D\", new HashSet<>( 
Arrays.asList( moduleATestB ) ), 5,\n        overallEntry,\n        new ArrayList<RuntimeTestResultEntry>(), true );\n    moduleBTestA =\n      new TestRuntimeTest( \"moduleB\", \"moduleBTestA\", \"Test A\", new HashSet<>( Arrays.asList( moduleATestA ) ), 5,\n        overallEntry,\n        new ArrayList<RuntimeTestResultEntry>(), true );\n    moduleBTestB =\n      new TestRuntimeTest( \"moduleB\", \"moduleBTestB\", \"Test B\", new HashSet<>( Arrays.asList( moduleATestC ) ), 5,\n        overallEntry,\n        new ArrayList<RuntimeTestResultEntry>(), true );\n    moduleBTestC =\n      new TestRuntimeTest( \"moduleB\", \"moduleBTestC\", \"Test C\",\n        new HashSet<>( Arrays.asList( moduleBTestB, moduleATestC ) ),\n        5, overallEntry, new ArrayList<RuntimeTestResultEntry>(), true );\n    moduleCTestA = new TestRuntimeTest( \"moduleC\", \"moduleCTestA\", \"Test A\",\n      new HashSet<>( Arrays.asList( moduleBTestC, moduleATestC ) ),\n      5, overallEntry, new ArrayList<RuntimeTestResultEntry>(), true );\n    objectUnderTest = new Object();\n  }\n\n  @After\n  public void tearDown() {\n    executorService.shutdown();\n  }\n\n  @Test\n  public void testSingleTestNoDependencies() {\n    testScenario( Arrays.asList( moduleATestA ) );\n  }\n\n  @Test\n  public void testSingleTestWithDependencies() {\n    testScenario( Arrays.asList( unsatisfiableDependencyA ) );\n  }\n\n  @Test\n  public void testModuleA() {\n    testScenario( Arrays.asList( moduleATestA, moduleATestB, moduleATestC, moduleATestD ) );\n  }\n\n  @Test\n  public void testModuleAAndB() {\n    testScenario( Arrays\n      .asList( moduleATestA, moduleATestB, moduleATestC, moduleATestD, moduleBTestA, moduleBTestB, moduleBTestC ) );\n  }\n\n  @Test\n  public void testModuleAthruC() {\n    testScenario( Arrays\n      .asList( moduleATestA, moduleATestB, moduleATestC, moduleATestD, moduleBTestA, moduleBTestB, moduleBTestC,\n        moduleCTestA ) );\n  }\n\n  @Test\n  public void testModuleAthruCUnsat() 
{\n    testScenario( Arrays\n      .asList( moduleATestA, moduleATestB, moduleATestC, moduleATestD, moduleBTestA, moduleBTestB, moduleBTestC,\n        moduleCTestA,\n        unsatisfiableDependencyA ) );\n  }\n\n  private void testScenario( List<TestRuntimeTest> runtimeTests ) {\n    final List<RuntimeTestStatus> runtimeTestStatuses = Collections.synchronizedList( new ArrayList\n      <RuntimeTestStatus>() );\n    final RuntimeTestProgressCallback runtimeTestProgressCallback = new RuntimeTestProgressCallback() {\n      @Override public void onProgress( RuntimeTestStatus runtimeTestStatus ) {\n        runtimeTestStatuses.add( runtimeTestStatus );\n        if ( runtimeTestStatus.isDone() ) {\n          synchronized ( this ) {\n            notifyAll();\n          }\n        }\n      }\n    };\n    long before = System.currentTimeMillis();\n    new RuntimeTestRunner( runtimeTests, objectUnderTest, runtimeTestProgressCallback, executorService ).runTests();\n    synchronized ( runtimeTestProgressCallback ) {\n      while ( runtimeTestStatuses.size() == 0 || !runtimeTestStatuses.get( runtimeTestStatuses.size() - 1 ).isDone() ) {\n        try {\n          runtimeTestProgressCallback.wait();\n        } catch ( InterruptedException e ) {\n          // Ignore\n        }\n      }\n    }\n    long after = System.currentTimeMillis();\n    Set<String> doneIds = new HashSet<>();\n    for ( int i = 0; i < runtimeTestStatuses.size(); i++ ) {\n      RuntimeTestStatus runtimeTestStatus = runtimeTestStatuses.get( i );\n      if ( i < runtimeTestStatuses.size() - 1 ) {\n        assertFalse( runtimeTestStatus.isDone() );\n      } else {\n        assertTrue( runtimeTestStatus.isDone() );\n      }\n      Set<String> justDoneIds = new HashSet<>();\n      for ( RuntimeTestModuleResults runtimeTestModuleResults : runtimeTestStatus.getModuleResults() ) {\n        Set<String> outstandingIds = new HashSet<>();\n        Set<String> runningIds = new HashSet<>();\n        for ( RuntimeTest 
runtimeTest : runtimeTestModuleResults.getOutstandingTests() ) {\n          outstandingIds.add( runtimeTest.getId() );\n        }\n        for ( RuntimeTest runtimeTest : runtimeTestModuleResults.getRunningTests() ) {\n          runningIds.add( runtimeTest.getId() );\n        }\n\n        Set<String> resultIds = new HashSet<>();\n        for ( RuntimeTestResult runtimeTestResult : runtimeTestModuleResults.getRuntimeTestResults() ) {\n          resultIds.add( runtimeTestResult.getRuntimeTest().getId() );\n        }\n        // We should have results for all ids in module\n        assertTrue( resultIds.containsAll( outstandingIds ) );\n        assertTrue( resultIds.containsAll( runningIds ) );\n\n        // No done ids should be in outstanding or running\n        assertTrue( Collections.disjoint( doneIds, outstandingIds ) );\n        assertTrue( Collections.disjoint( doneIds, runningIds ) );\n\n        resultIds.removeAll( outstandingIds );\n        resultIds.removeAll( runningIds );\n        justDoneIds.addAll( resultIds );\n      }\n      // All previously done ids should still be done\n      assertTrue( justDoneIds.containsAll( doneIds ) );\n      // We should get called back for each one that finishes\n      assertTrue( justDoneIds.size() == doneIds.size() || justDoneIds.size() == doneIds.size() + 1 );\n\n      doneIds.addAll( justDoneIds );\n    }\n    for ( TestRuntimeTest runtimeTest : runtimeTests ) {\n      assertTrue( doneIds.contains( runtimeTest.getId() ) );\n      runtimeTest.validateRunState();\n    }\n    System.out.println( \"Ran in \" + ( after - before ) + \" ms\" );\n    System.out.flush();\n  }\n\n  public class TestRuntimeTest extends BaseRuntimeTest {\n    private final long delay;\n    private final Set<TestRuntimeTest> dependencies;\n    private final AtomicBoolean hasRun;\n    private final RuntimeTestResultEntry overallEntry;\n    private final List<RuntimeTestResultEntry> runtimeTestResultEntries;\n    private final boolean shouldRun;\n\n  
  public TestRuntimeTest( String module, String id, String name, Set<TestRuntimeTest> dependencies,\n                            long delay, RuntimeTestResultEntry overallEntry,\n                            List<RuntimeTestResultEntry> runtimeTestResultEntries, boolean shouldRun ) {\n      super( Object.class, module, id, name, dependenciesToIds( dependencies ) );\n      this.delay = delay;\n      this.dependencies = dependencies;\n      this.overallEntry = overallEntry;\n      this.runtimeTestResultEntries = runtimeTestResultEntries;\n      this.shouldRun = shouldRun;\n      hasRun = new AtomicBoolean( false );\n    }\n\n    public String getLogName() {\n      return getModule() + \":\" + getId();\n    }\n\n    @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n      assertTrue( shouldRun );\n      assertEquals( RuntimeTestRunnerTest.this.objectUnderTest, objectUnderTest );\n      String logName = getLogName();\n      System.out.println( \"Running: \" + logName );\n      for ( TestRuntimeTest dependency : dependencies ) {\n        assertTrue( logName + \" expected dependency \" + dependency.getLogName() + \" to have already run\",\n          dependency.hasRun.get() );\n      }\n      try {\n        Thread.sleep( delay );\n      } catch ( InterruptedException e ) {\n        // Ignore\n      }\n      hasRun.set( true );\n      System.out.println( \"Done running: \" + logName );\n      return new RuntimeTestResultSummaryImpl( overallEntry, runtimeTestResultEntries );\n    }\n\n    public void validateRunState() {\n      String moduleString = getLogName();\n      assertEquals( \"Expected \" + moduleString + \" hasRun value of \" + shouldRun + \" but was \" + hasRun.get(),\n        shouldRun, hasRun.get() );\n      System.out.println( \"Got correct shouldRun value of \" + shouldRun + \" from \" + moduleString );\n    }\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/impl/RuntimeTestStatusImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.impl;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.runtime.test.module.RuntimeTestModuleResults;\n\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.mock;\n\n/**\n * Created by bryan on 8/20/15.\n */\npublic class RuntimeTestStatusImplTest {\n  private List<RuntimeTestModuleResults> runtimeTestModuleResults;\n  private RuntimeTestStatusImpl runtimeTestStatus;\n  private int testsDone;\n  private int testsRunning;\n  private int testsOutstanding;\n  private boolean done;\n\n  @Before\n  public void setup() {\n    runtimeTestModuleResults = mock( List.class );\n    testsDone = 1011;\n    testsRunning = 11213;\n    testsOutstanding = 12213;\n    done = true;\n    initStatus();\n  }\n\n  private void initStatus() {\n    runtimeTestStatus =\n      new RuntimeTestStatusImpl( runtimeTestModuleResults, testsDone, testsRunning, testsOutstanding, done );\n  }\n\n  @Test\n  public void testConstructor() {\n    assertEquals( runtimeTestModuleResults, runtimeTestStatus.getModuleResults() );\n    assertTrue( runtimeTestStatus.isDone() );\n    assertEquals( testsDone, runtimeTestStatus.getTestsDone() );\n    assertEquals( testsRunning, runtimeTestStatus.getTestsRunning() );\n    assertEquals( testsOutstanding, runtimeTestStatus.getTestsOutstanding() );\n    done = false;\n    initStatus();\n    assertEquals( runtimeTestModuleResults, 
runtimeTestStatus.getModuleResults() );\n    assertFalse( runtimeTestStatus.isDone() );\n    assertEquals( testsDone, runtimeTestStatus.getTestsDone() );\n    assertEquals( testsRunning, runtimeTestStatus.getTestsRunning() );\n    assertEquals( testsOutstanding, runtimeTestStatus.getTestsOutstanding() );\n  }\n\n  @Test\n  public void testToString() {\n    assertTrue( runtimeTestStatus.toString().contains( runtimeTestModuleResults.toString() ) );\n    assertTrue( runtimeTestStatus.toString().contains( Integer.toString( testsDone ) ) );\n    assertTrue( runtimeTestStatus.toString().contains( Integer.toString( testsRunning ) ) );\n    assertTrue( runtimeTestStatus.toString().contains( Integer.toString( testsOutstanding ) ) );\n    assertTrue( runtimeTestStatus.toString().contains( \"done=\" + done ) );\n    done = false;\n    initStatus();\n    assertTrue( runtimeTestStatus.toString().contains( runtimeTestModuleResults.toString() ) );\n    assertTrue( runtimeTestStatus.toString().contains( Integer.toString( testsDone ) ) );\n    assertTrue( runtimeTestStatus.toString().contains( Integer.toString( testsRunning ) ) );\n    assertTrue( runtimeTestStatus.toString().contains( Integer.toString( testsOutstanding ) ) );\n    assertTrue( runtimeTestStatus.toString().contains( \"done=\" + done ) );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/impl/RuntimeTesterImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.impl;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.ArgumentCaptor;\nimport org.pentaho.runtime.test.RuntimeTest;\nimport org.pentaho.runtime.test.RuntimeTestProgressCallback;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\nimport java.util.concurrent.ExecutorService;\n\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 8/20/15.\n */\npublic class RuntimeTesterImplTest {\n\n  private RuntimeTesterImpl runtimeTester;\n  private List<RuntimeTest> runtimeTests;\n  private ExecutorService executorService;\n  private String orderedModulesString;\n  private RuntimeTestRunner.Factory runtimeTestRunnerFactory;\n\n  @Before\n  public void setup() {\n    runtimeTests = new ArrayList<>( Arrays.asList( mock( RuntimeTest.class ) ) );\n    executorService = mock( ExecutorService.class );\n    orderedModulesString = \"test-modules\";\n    runtimeTestRunnerFactory = mock( RuntimeTestRunner.Factory.class );\n    runtimeTester =\n      new RuntimeTesterImpl( runtimeTests, executorService, orderedModulesString, runtimeTestRunnerFactory );\n  }\n\n  @Test\n  public void testRunTests() {\n    Object objectUnderTest = new Object();\n    RuntimeTestProgressCallback runtimeTestProgressCallback = mock( RuntimeTestProgressCallback.class );\n    runtimeTester.runtimeTest( objectUnderTest, runtimeTestProgressCallback );\n    ArgumentCaptor<Runnable> runnableArgumentCaptor = 
ArgumentCaptor.forClass( Runnable.class );\n    verify( executorService ).submit( runnableArgumentCaptor.capture() );\n    RuntimeTestRunner runtimeTestRunner = mock( RuntimeTestRunner.class );\n    when(\n      runtimeTestRunnerFactory.create( runtimeTests, objectUnderTest, runtimeTestProgressCallback, executorService ) )\n      .thenReturn(\n        runtimeTestRunner );\n    runnableArgumentCaptor.getValue().run();\n    verify( runtimeTestRunner ).runTests();\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/module/impl/RuntimeTestModuleResultsImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.module.impl;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.runtime.test.RuntimeTest;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResult;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.HashSet;\nimport java.util.List;\nimport java.util.Set;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 8/20/15.\n */\npublic class RuntimeTestModuleResultsImplTest {\n  private String name;\n  private List<RuntimeTestResult> runtimeTestResults;\n  private Set<RuntimeTest> runningTests;\n  private Set<RuntimeTest> outstandingTests;\n  private RuntimeTestModuleResultsImpl runtimeTestModuleResults;\n  private RuntimeTestResult runtimeTestResult;\n  private RuntimeTest runningTest;\n  private RuntimeTest outstandingTest;\n  private RuntimeTestEntrySeverity maxSeverity;\n\n  @Before\n  public void setup() {\n    name = \"testName\";\n    runtimeTestResult = mock( RuntimeTestResult.class );\n    when( runtimeTestResult.isDone() ).thenReturn( true );\n    maxSeverity = RuntimeTestEntrySeverity.INFO;\n    RuntimeTestResultEntry runtimeTestResultEntry = mock( RuntimeTestResultEntry.class );\n    when( runtimeTestResult.getOverallStatusEntry() ).thenReturn( runtimeTestResultEntry );\n    when( 
runtimeTestResultEntry.getSeverity() ).thenReturn( maxSeverity );\n    runtimeTestResults = new ArrayList<>( Arrays.asList( runtimeTestResult ) );\n    runningTest = mock( RuntimeTest.class );\n    runningTests = new HashSet<>( Arrays.asList( runningTest ) );\n    outstandingTest = mock( RuntimeTest.class );\n    outstandingTests = new HashSet<>( Arrays.asList( outstandingTest ) );\n    runtimeTestModuleResults =\n      new RuntimeTestModuleResultsImpl( name, runtimeTestResults, runningTests, outstandingTests );\n  }\n\n  @Test\n  public void testName() {\n    assertEquals( name, runtimeTestModuleResults.getName() );\n  }\n\n  @Test\n  public void testGetRuntimeTestResults() {\n    assertEquals( runtimeTestResults, runtimeTestModuleResults.getRuntimeTestResults() );\n  }\n\n  @Test\n  public void testGetRunningTests() {\n    assertEquals( runningTests, runtimeTestModuleResults.getRunningTests() );\n  }\n\n  @Test\n  public void testGetOutstandingTests() {\n    assertEquals( outstandingTests, runtimeTestModuleResults.getOutstandingTests() );\n  }\n\n  @Test\n  public void testGetMaxSeverity() {\n    assertEquals( maxSeverity, runtimeTestModuleResults.getMaxSeverity() );\n  }\n\n  @Test\n  public void testToString() {\n    String string = runtimeTestModuleResults.toString();\n    assertTrue( string.contains( name ) );\n    assertTrue( string.contains( runtimeTestResult.toString() ) );\n    assertTrue( string.contains( runningTest.toString() ) );\n    assertTrue( string.contains( outstandingTest.toString() ) );\n    assertTrue( string.contains( maxSeverity.toString() ) );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/network/impl/ConnectivityTestImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.network.impl;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.runtime.test.TestMessageGetterFactory;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\n\nimport java.io.IOException;\nimport java.net.InetAddress;\nimport java.net.Socket;\nimport java.net.UnknownHostException;\n\nimport static org.mockito.ArgumentMatchers.anyInt;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\nimport static org.pentaho.runtime.test.RuntimeTestEntryUtil.verifyRuntimeTestResultEntry;\n\n/**\n * Created by bryan on 8/21/15.\n */\npublic class ConnectivityTestImplTest {\n  private String hostname;\n  private String port;\n  private boolean haPossible;\n  private RuntimeTestEntrySeverity severityOfFailures;\n  private ConnectivityTestImpl.SocketFactory socketFactory;\n  private ConnectivityTestImpl.InetAddressFactory inetAddressFactory;\n  private ConnectivityTestImpl connectTest;\n  private MessageGetterFactory messageGetterFactory;\n  private MessageGetter messageGetter;\n  private InetAddress inetAddress;\n  private Socket socket;\n\n  @Before\n  public void setup() throws IOException {\n    messageGetterFactory = new TestMessageGetterFactory();\n    messageGetter = messageGetterFactory.create( ConnectivityTestImpl.class );\n    hostname = \"hostname\";\n    port = \"89\";\n    haPossible = false;\n    
severityOfFailures = RuntimeTestEntrySeverity.WARNING;\n    socketFactory = mock( ConnectivityTestImpl.SocketFactory.class );\n    socket = mock( Socket.class );\n    when( socketFactory.create( hostname, Integer.valueOf( port ) ) ).thenReturn( socket );\n    inetAddressFactory = mock( ConnectivityTestImpl.InetAddressFactory.class );\n    inetAddress = mock( InetAddress.class );\n    when( inetAddressFactory.create( hostname ) ).thenReturn( inetAddress );\n    when( inetAddress.isReachable( anyInt() ) ).thenReturn( true );\n    init();\n  }\n\n  private void init() {\n    connectTest =\n      new ConnectivityTestImpl( messageGetterFactory, hostname, port, haPossible, severityOfFailures, socketFactory,\n        inetAddressFactory );\n  }\n\n  @Test\n  public void testBlankHostname() {\n    hostname = \"\";\n    init();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), severityOfFailures,\n      messageGetter.getMessage( ConnectivityTestImpl.CONNECT_TEST_HOST_BLANK_DESC ), messageGetter.getMessage(\n        ConnectivityTestImpl.CONNECT_TEST_HOST_BLANK_MESSAGE ) );\n  }\n\n  @Test\n  public void testBlankPortNoHa() {\n    port = \"\";\n    init();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), severityOfFailures,\n      messageGetter.getMessage( ConnectivityTestImpl.CONNECT_TEST_PORT_BLANK_DESC ), messageGetter.getMessage(\n        ConnectivityTestImpl.CONNECT_TEST_PORT_BLANK_MESSAGE ) );\n  }\n\n  @Test\n  public void testBlankPortHa() {\n    port = \"\";\n    haPossible = true;\n    init();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), RuntimeTestEntrySeverity.INFO,\n      messageGetter.getMessage( ConnectivityTestImpl.CONNECT_TEST_HA_DESC ), messageGetter.getMessage(\n        ConnectivityTestImpl.CONNECT_TEST_HA_MESSAGE, hostname ) );\n  }\n\n  @Test\n  public void testNonNumericPort() {\n    port = \"abc\";\n    haPossible = true;\n    init();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), RuntimeTestEntrySeverity.FATAL,\n 
     messageGetter.getMessage( ConnectivityTestImpl.CONNECT_TEST_PORT_NUMBER_FORMAT_DESC ), messageGetter.getMessage(\n        ConnectivityTestImpl.CONNECT_TEST_PORT_NUMBER_FORMAT_MESSAGE, port ), NumberFormatException.class );\n  }\n\n  @Test\n  public void testUnreachableHostname() throws IOException {\n    inetAddressFactory = mock( ConnectivityTestImpl.InetAddressFactory.class );\n    inetAddress = mock( InetAddress.class );\n    when( inetAddressFactory.create( hostname ) ).thenReturn( inetAddress );\n    when( inetAddress.isReachable( anyInt() ) ).thenReturn( false );\n    init();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), severityOfFailures,\n      messageGetter.getMessage( ConnectivityTestImpl.CONNECT_TEST_UNREACHABLE_DESC, hostname ), messageGetter.getMessage(\n        ConnectivityTestImpl.CONNECT_TEST_UNREACHABLE_MESSAGE, hostname ) );\n  }\n\n  @Test\n  public void testUnknownHostException() throws IOException {\n    inetAddressFactory = mock( ConnectivityTestImpl.InetAddressFactory.class );\n    inetAddress = mock( InetAddress.class );\n    when( inetAddressFactory.create( hostname ) ).thenReturn( inetAddress );\n    when( inetAddress.isReachable( anyInt() ) ).thenThrow( new UnknownHostException() );\n    init();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), severityOfFailures,\n      messageGetter.getMessage( ConnectivityTestImpl.CONNECT_TEST_UNKNOWN_HOSTNAME_DESC ), messageGetter.getMessage(\n        ConnectivityTestImpl.CONNECT_TEST_UNKNOWN_HOSTNAME_MESSAGE, hostname ), UnknownHostException.class );\n  }\n\n  @Test\n  public void testReachableIOException() throws IOException {\n    inetAddressFactory = mock( ConnectivityTestImpl.InetAddressFactory.class );\n    inetAddress = mock( InetAddress.class );\n    when( inetAddressFactory.create( hostname ) ).thenReturn( inetAddress );\n    when( inetAddress.isReachable( anyInt() ) ).thenThrow( new IOException() );\n    init();\n    verifyRuntimeTestResultEntry( 
connectTest.runTest(), severityOfFailures,\n      messageGetter.getMessage( ConnectivityTestImpl.CONNECT_TEST_NETWORK_ERROR_DESC ), messageGetter.getMessage(\n        ConnectivityTestImpl.CONNECT_TEST_NETWORK_ERROR_MESSAGE, hostname, port ), IOException.class );\n  }\n\n  @Test\n  public void testSocketIOException() throws IOException {\n    when( socketFactory.create( hostname, Integer.valueOf( port ) ) ).thenThrow( new IOException() );\n    init();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), severityOfFailures,\n      messageGetter.getMessage( ConnectivityTestImpl.CONNECT_TEST_CONNECT_FAIL_DESC ), messageGetter.getMessage(\n        ConnectivityTestImpl.CONNECT_TEST_CONNECT_FAIL_MESSAGE, hostname, port ), IOException.class );\n  }\n\n  @Test\n  public void testSuccess() throws IOException {\n    verifyRuntimeTestResultEntry( connectTest.runTest(), RuntimeTestEntrySeverity.INFO,\n      messageGetter.getMessage( ConnectivityTestImpl.CONNECT_TEST_CONNECT_SUCCESS_DESC ), messageGetter.getMessage(\n        ConnectivityTestImpl.CONNECT_TEST_CONNECT_SUCCESS_MESSAGE, hostname, port ) );\n    verify( socket ).close();\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/network/impl/GatewayConnectivityTestImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.network.impl;\n\nimport org.apache.http.HttpResponse;\nimport org.apache.http.StatusLine;\nimport org.apache.http.client.CredentialsProvider;\nimport org.apache.http.client.HttpClient;\nimport org.apache.http.client.methods.HttpUriRequest;\nimport org.apache.http.impl.client.HttpClients;\nimport org.apache.http.protocol.HttpContext;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.Mockito;\nimport org.pentaho.runtime.test.TestMessageGetterFactory;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\n\nimport javax.net.ssl.SSLContext;\nimport javax.net.ssl.SSLException;\nimport java.io.IOException;\nimport java.net.URI;\nimport java.net.UnknownHostException;\nimport java.security.KeyManagementException;\nimport java.security.NoSuchAlgorithmException;\n\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.Mockito.*;\nimport static org.pentaho.runtime.test.RuntimeTestEntryUtil.verifyRuntimeTestResultEntry;\n\n/**\n * Created by dstepanov on 29/04/17.\n */\npublic class GatewayConnectivityTestImplTest {\n\n  public static final String HTTPS = \"https://\";\n  public static final String HTTP = \"http://\";\n  public static final String KETTLE_KNOX_IGNORE_SSL = \"KETTLE_KNOX_IGNORE_SSL\";\n  private String hostname;\n  private String port;\n  private RuntimeTestEntrySeverity severityOfFailures;\n  private ConnectivityTestImpl connectTest;\n  private 
MessageGetterFactory messageGetterFactory;\n  private MessageGetter messageGetter;\n  private URI uri;\n  private String path;\n  private String topology;\n  private String user;\n  private String password;\n  private HttpClient httpClient;\n\n  @Before\n  public void setup() throws IOException {\n    messageGetterFactory = new TestMessageGetterFactory();\n    messageGetter = messageGetterFactory.create( ConnectivityTestImpl.class );\n    hostname = \"hostname\";\n    port = \"8443\";\n    user = \"user\";\n    password = \"password\";\n    topology = \"/gateway/default\";\n    path = \"/testPath\";\n    uri = URI.create( HTTPS + hostname + \":\" + port + topology );\n    severityOfFailures = RuntimeTestEntrySeverity.WARNING;\n    httpClient = mock( HttpClient.class, Mockito.CALLS_REAL_METHODS );\n    HttpResponse httpResponseMock = mock(HttpResponse.class);\n    StatusLine statusLineMock = mock(StatusLine.class);\n    doReturn( httpResponseMock ).when( httpClient ).execute( any() );\n    doReturn( httpResponseMock ).when( httpClient ).execute( any( HttpUriRequest.class ), any( HttpContext.class) );\n    doReturn( statusLineMock ).when( httpResponseMock ).getStatusLine();\n    doReturn( 200 ).when( statusLineMock ).getStatusCode();\n    init();\n    System.setProperty( KETTLE_KNOX_IGNORE_SSL, \"false\" );\n  }\n\n  private void init() {\n    connectTest =\n      new GatewayConnectivityTestImpl( messageGetterFactory, uri, path, user, password, severityOfFailures ) {\n        @Override\n        HttpClient getHttpClient() {\n          return HttpClients.createDefault();\n        }\n      };\n  }\n\n  private void initMock() {\n    connectTest =\n      new GatewayConnectivityTestImpl( messageGetterFactory, uri, path, user, password, severityOfFailures ) {\n        @Override\n        HttpClient getHttpClient() {\n          return httpClient;\n        }\n        @Override\n        HttpClient getHttpClient( String user, String password ) {\n          return httpClient;\n  
      }\n      };\n  }\n\n  @Test\n  public void testHttp() throws IOException {\n    uri = URI.create( HTTP + hostname + \":\" + port + topology );\n    initMock();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), RuntimeTestEntrySeverity.INFO,\n      messageGetter.getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_TEST_CONNECT_SUCCESS_DESC ),\n      messageGetter.getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_TEST_CONNECT_SUCCESS_MESSAGE,\n        uri.toString() + path ) );\n  }\n\n  @Test\n  public void testBlankHostname() {\n    uri = URI.create( HTTPS + \"\" + \":\" + port + topology );\n    initMock();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), severityOfFailures,\n      messageGetter.getMessage( ConnectivityTestImpl.CONNECT_TEST_HOST_BLANK_DESC ), messageGetter.getMessage(\n        ConnectivityTestImpl.CONNECT_TEST_HOST_BLANK_MESSAGE ) );\n  }\n\n  @Test\n  public void testUnknownHostException() throws IOException {\n    verifyRuntimeTestResultEntry( connectTest.runTest(), severityOfFailures,\n      messageGetter.getMessage( ConnectivityTestImpl.CONNECT_TEST_UNKNOWN_HOSTNAME_DESC ), messageGetter.getMessage(\n        ConnectivityTestImpl.CONNECT_TEST_UNKNOWN_HOSTNAME_MESSAGE, hostname ), UnknownHostException.class );\n  }\n\n  @Test\n  public void testIOException() throws IOException {\n    doThrow( new IOException() ).when( httpClient )\n      .execute( any( HttpUriRequest.class ), any( HttpContext.class ) );\n    initMock();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), severityOfFailures,\n      messageGetter.getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_EXECUTION_FAILED_DESC ),\n      messageGetter.getMessage(\n        GatewayConnectivityTestImpl.GATEWAY_CONNECT_EXECUTION_FAILED_MESSAGE, uri.toString() + path ),\n      IOException.class );\n  }\n\n  @Test\n  public void testSSLException() throws IOException {\n    doThrow( new SSLException( \"errorMessage\" ) ).when( httpClient )\n      
.execute( any( HttpUriRequest.class ), any( HttpContext.class ) );\n    initMock();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), severityOfFailures,\n      messageGetter.getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_SSLEXCEPTION_DESC ),\n      messageGetter.getMessage(\n        GatewayConnectivityTestImpl.GATEWAY_CONNECT_SSLEXCEPTION_MESSAGE, uri.toString() + path, \"errorMessage\" ),\n      SSLException.class );\n  }\n\n  @Test\n  public void testNoSuchAlgorithmException() {\n    System.setProperty( KETTLE_KNOX_IGNORE_SSL, \"true\" );\n    connectTest =\n      new GatewayConnectivityTestImpl( messageGetterFactory, uri, path, user, password, severityOfFailures ) {\n\n        @Override SSLContext getTlsContext() throws NoSuchAlgorithmException {\n          throw new NoSuchAlgorithmException();\n        }\n      };\n    verifyRuntimeTestResultEntry( connectTest.runTest(), RuntimeTestEntrySeverity.FATAL,\n      messageGetter.getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_TLSCONTEXT_DESC ),\n      messageGetter.getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_TLSCONTEXT_MESSAGE ),\n      NoSuchAlgorithmException.class );\n  }\n\n  @Test\n  public void testKeyManagementException() {\n    System.setProperty( KETTLE_KNOX_IGNORE_SSL, \"true\" );\n    connectTest =\n      new GatewayConnectivityTestImpl( messageGetterFactory, uri, path, user, password, severityOfFailures ) {\n\n        @Override void initContextWithTrustAll( SSLContext ctx ) throws KeyManagementException {\n          throw new KeyManagementException();\n        }\n\n      };\n    verifyRuntimeTestResultEntry( connectTest.runTest(), RuntimeTestEntrySeverity.FATAL,\n      messageGetter.getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_TLSCONTEXTINIT_DESC ),\n      messageGetter.getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_TLSCONTEXTINIT_MESSAGE ),\n      KeyManagementException.class );\n  }\n\n  @Test\n  public void testSuccess() throws IOException {\n   
 initMock();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), RuntimeTestEntrySeverity.INFO,\n      messageGetter.getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_TEST_CONNECT_SUCCESS_DESC ),\n      messageGetter.getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_TEST_CONNECT_SUCCESS_MESSAGE,\n        uri.toString() + path ) );\n  }\n\n  @Test\n  public void test401() throws IOException {\n    HttpResponse httpResponseMock = mock( HttpResponse.class );\n    StatusLine statusLineMock = mock( StatusLine.class );\n    doReturn( httpResponseMock ).when( httpClient ).execute( any( HttpUriRequest.class ), any( HttpContext.class ) );\n    doReturn( statusLineMock ).when( httpResponseMock ).getStatusLine();\n    doReturn( 401 ).when( statusLineMock ).getStatusCode();\n    initMock();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), severityOfFailures,\n      messageGetter.getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_TEST_UNAUTHORIZED_DESC ),\n      messageGetter\n        .getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_TEST_UNAUTHORIZED_MESSAGE, uri.toString() + path,\n          user ) );\n  }\n\n  @Test\n  public void test403() throws IOException {\n    HttpResponse httpResponseMock = mock( HttpResponse.class );\n    StatusLine statusLineMock = mock( StatusLine.class );\n    doReturn( httpResponseMock ).when( httpClient ).execute( any( HttpUriRequest.class ), any( HttpContext.class ) );\n    doReturn( statusLineMock ).when( httpResponseMock ).getStatusLine();\n    doReturn( 403 ).when( statusLineMock ).getStatusCode();\n    initMock();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), severityOfFailures,\n      messageGetter.getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_TEST_FORBIDDEN_DESC ),\n      messageGetter.getMessage(\n        GatewayConnectivityTestImpl.GATEWAY_CONNECT_TEST_FORBIDDEN_MESSAGE, uri.toString() + path, user ) );\n  }\n\n  @Test\n  public void test404() throws IOException {\n    HttpResponse 
httpResponseMock = mock( HttpResponse.class );\n    StatusLine statusLineMock = mock( StatusLine.class );\n    doReturn( httpResponseMock ).when( httpClient ).execute( any( HttpUriRequest.class ), any( HttpContext.class ) );\n    doReturn( statusLineMock ).when( httpResponseMock ).getStatusLine();\n    doReturn( 404 ).when( statusLineMock ).getStatusCode();\n    initMock();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), severityOfFailures,\n      messageGetter.getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_TEST_SERVICE_NOT_FOUND_DESC ),\n      messageGetter.getMessage(\n        GatewayConnectivityTestImpl.GATEWAY_CONNECT_TEST_SERVICE_NOT_FOUND_MESSAGE, uri.toString() + path ) );\n  }\n\n  @Test\n  public void testUnknownCode() throws IOException {\n    Integer returnCode = 0;\n    HttpResponse httpResponseMock = mock( HttpResponse.class );\n    StatusLine statusLineMock = mock( StatusLine.class );\n    doReturn( httpResponseMock ).when( httpClient ).execute( any( HttpUriRequest.class ), any( HttpContext.class ) );\n    doReturn( statusLineMock ).when( httpResponseMock ).getStatusLine();\n    doReturn( returnCode ).when( statusLineMock ).getStatusCode();\n\n    initMock();\n    verifyRuntimeTestResultEntry( connectTest.runTest(), RuntimeTestEntrySeverity.WARNING,\n      messageGetter.getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_TEST_CONNECT_UNKNOWN_RETURN_CODE_DESC ),\n      messageGetter\n        .getMessage( GatewayConnectivityTestImpl.GATEWAY_CONNECT_TEST_CONNECT_UNKNOWN_RETURN_CODE_MESSAGE, user,\n          returnCode.toString(), uri.toString() + path ) );\n  }\n\n}"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/result/RuntimeTestEntrySeverityTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.result;\n\nimport org.junit.Test;\n\nimport java.util.Arrays;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 8/27/15.\n */\npublic class RuntimeTestEntrySeverityTest {\n  @Test\n  public void testMaxSeverityResult() {\n    RuntimeTestResult runtimeTestResult1 = mock( RuntimeTestResult.class );\n    RuntimeTestResult runtimeTestResult2 = mock( RuntimeTestResult.class );\n    RuntimeTestResult runtimeTestResult3 = mock( RuntimeTestResult.class );\n    RuntimeTestResultEntry runtimeTestResultEntry1 = mock( RuntimeTestResultEntry.class );\n    RuntimeTestResultEntry runtimeTestResultEntry2 = mock( RuntimeTestResultEntry.class );\n    RuntimeTestResultEntry runtimeTestResultEntry3 = mock( RuntimeTestResultEntry.class );\n\n    when( runtimeTestResult1.getOverallStatusEntry() ).thenReturn( runtimeTestResultEntry1 );\n    when( runtimeTestResult2.getOverallStatusEntry() ).thenReturn( runtimeTestResultEntry2 );\n    when( runtimeTestResult3.getOverallStatusEntry() ).thenReturn( runtimeTestResultEntry3 );\n\n    when( runtimeTestResultEntry1.getSeverity() ).thenReturn( RuntimeTestEntrySeverity.INFO );\n    when( runtimeTestResultEntry2.getSeverity() ).thenReturn( RuntimeTestEntrySeverity.FATAL );\n    when( runtimeTestResultEntry3.getSeverity() ).thenReturn( null );\n\n    when( runtimeTestResult1.isDone() ).thenReturn( true );\n    when( runtimeTestResult2.isDone() ).thenReturn( false ).thenReturn( true );\n    when( 
runtimeTestResult3.isDone() ).thenReturn( true );\n\n    assertEquals( RuntimeTestEntrySeverity.INFO, RuntimeTestEntrySeverity\n      .maxSeverityResult( Arrays.asList( runtimeTestResult1, runtimeTestResult2, runtimeTestResult3 ) ) );\n    assertEquals( RuntimeTestEntrySeverity.FATAL, RuntimeTestEntrySeverity\n      .maxSeverityResult( Arrays.asList( runtimeTestResult1, runtimeTestResult2, runtimeTestResult3 ) ) );\n  }\n\n  @Test\n  public void testValuesAndValueOf() {\n    for ( RuntimeTestEntrySeverity runtimeTestEntrySeverity : RuntimeTestEntrySeverity.values() ) {\n      assertEquals( runtimeTestEntrySeverity, RuntimeTestEntrySeverity.valueOf( runtimeTestEntrySeverity.name() ) );\n    }\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/test/impl/BaseRuntimeTestTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.test.impl;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\n\nimport java.util.Arrays;\nimport java.util.HashSet;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\n\n/**\n * Created by bryan on 8/20/15.\n */\npublic class BaseRuntimeTestTest {\n  private String module;\n  private String id;\n  private String name;\n  private boolean configInitTest;\n  private HashSet<String> dependencies;\n  private BaseRuntimeTest baseRuntimeTest;\n\n  @Before\n  public void setup() {\n    module = \"module\";\n    id = \"id\";\n    name = \"name\";\n    configInitTest = true;\n    dependencies = new HashSet<>( Arrays.asList( \"dependency\" ) );\n    baseRuntimeTest = new BaseRuntimeTest( Object.class, module, id, name, configInitTest, dependencies ) {\n      @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n        throw new UnsupportedOperationException( \"This is a test object, don't run it... 
ever...\" );\n      }\n    };\n  }\n\n  @Test\n  public void testGetModule() {\n    assertEquals( module, baseRuntimeTest.getModule() );\n  }\n\n  @Test\n  public void testGetId() {\n    assertEquals( id, baseRuntimeTest.getId() );\n  }\n\n  @Test\n  public void testGetName() {\n    assertEquals( name, baseRuntimeTest.getName() );\n  }\n\n  @Test\n  public void testIsConfigInitTest() {\n    assertTrue( baseRuntimeTest.isConfigInitTest() );\n    assertFalse( new BaseRuntimeTest( Object.class, module, id, name, dependencies ) {\n      @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n        throw new UnsupportedOperationException( \"This is a test object, don't run it... ever...\" );\n      }\n    }.isConfigInitTest() );\n  }\n\n  @Test\n  public void testToString() {\n    String string = baseRuntimeTest.toString();\n    assertTrue( string.contains( module ) );\n    assertTrue( string.contains( id ) );\n    assertTrue( string.contains( name ) );\n    assertTrue( string.contains( \"true\" ) );\n    assertTrue( string.contains( dependencies.toString() ) );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/test/impl/RuntimeTestDelegateWithMoreDependenciesTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.test.impl;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.runtime.test.RuntimeTest;\n\nimport java.util.Arrays;\nimport java.util.HashSet;\nimport java.util.Set;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 8/20/15.\n */\npublic class RuntimeTestDelegateWithMoreDependenciesTest {\n  private RuntimeTest delegate;\n  private HashSet<String> extraDependencies;\n  private RuntimeTestDelegateWithMoreDependencies\n  runtimeTestDelegateWithMoreDependencies;\n  private String module;\n  private String id;\n  private String name;\n  private String inheritedDep;\n  private String newDep;\n\n  @Before\n  public void setup() {\n    delegate = mock( RuntimeTest.class );\n    module = \"module\";\n    id = \"id\";\n    name = \"name\";\n    inheritedDep = \"inheritedDep\";\n    newDep = \"newDep\";\n    when( delegate.getModule() ).thenReturn( module );\n    when( delegate.getId() ).thenReturn( id );\n    when( delegate.getName() ).thenReturn( name );\n    when( delegate.getDependencies() ).thenReturn( new HashSet<>( Arrays.asList( inheritedDep ) ) );\n    extraDependencies = new HashSet<>( Arrays.asList( newDep ) );\n    runtimeTestDelegateWithMoreDependencies =\n      new RuntimeTestDelegateWithMoreDependencies( delegate, extraDependencies );\n  }\n\n  @Test\n  public void testGetModule() {\n    
assertEquals( module, runtimeTestDelegateWithMoreDependencies.getModule() );\n  }\n\n  @Test\n  public void testGetId() {\n    assertEquals( id, runtimeTestDelegateWithMoreDependencies.getId() );\n  }\n\n  @Test\n  public void testGetName() {\n    assertEquals( name, runtimeTestDelegateWithMoreDependencies.getName() );\n  }\n\n  @Test\n  public void testIsConfigInitTest() {\n    when( delegate.isConfigInitTest() ).thenReturn( false ).thenReturn( true );\n    assertFalse( runtimeTestDelegateWithMoreDependencies.isConfigInitTest() );\n    assertTrue( runtimeTestDelegateWithMoreDependencies.isConfigInitTest() );\n  }\n\n  @Test\n  public void testGetDependencies() {\n    Set<String> dependencies = runtimeTestDelegateWithMoreDependencies.getDependencies();\n    assertTrue( dependencies.contains( inheritedDep ) );\n    assertTrue( dependencies.contains( newDep ) );\n  }\n\n  @Test\n  public void testToString() {\n    String string = runtimeTestDelegateWithMoreDependencies.toString();\n    assertTrue( string.contains( delegate.toString() ) );\n    assertTrue( string.contains( newDep ) );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/test/impl/RuntimeTestResultEntryImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.test.impl;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\n\n/**\n * Created by bryan on 8/21/15.\n */\npublic class RuntimeTestResultEntryImplTest {\n\n  private RuntimeTestEntrySeverity severity;\n  private String description;\n  private String message;\n  private Exception exception;\n  private RuntimeTestResultEntryImpl runtimeTestResultEntry;\n\n  @Before\n  public void setup() {\n    severity = RuntimeTestEntrySeverity.ERROR;\n    description = \"desc\";\n    message = \"msg\";\n    exception = new Exception();\n    runtimeTestResultEntry = new RuntimeTestResultEntryImpl( severity, description, message, exception );\n  }\n\n  @Test\n  public void test3ArgConstructor() {\n    exception = null;\n    runtimeTestResultEntry = new RuntimeTestResultEntryImpl( severity, description, message );\n    testGetSeverity();\n    testGetDescription();\n    testGetMessage();\n    testToString();\n  }\n\n  @Test\n  public void testGetSeverity() {\n    assertEquals( severity, runtimeTestResultEntry.getSeverity() );\n  }\n\n  @Test\n  public void testGetDescription() {\n    assertEquals( description, runtimeTestResultEntry.getDescription() );\n  }\n\n  @Test\n  public void testGetMessage() {\n    assertEquals( message, runtimeTestResultEntry.getMessage() );\n  }\n\n  @Test\n  public void testGetException() {\n    assertEquals( exception, 
runtimeTestResultEntry.getException() );\n  }\n\n  @Test\n  public void testToString() {\n    String string = runtimeTestResultEntry.toString();\n    assertTrue( string.contains( severity.toString() ) );\n    assertTrue( string.contains( description ) );\n    assertTrue( string.contains( message ) );\n    assertTrue( string.contains( String.valueOf( exception ) ) );\n  }\n}\n"
  },
  {
    "path": "api/runtimeTest/src/test/java/org/pentaho/runtime/test/test/impl/RuntimeTestResultImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.runtime.test.test.impl;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.runtime.test.RuntimeTest;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.mock;\n\n/**\n * Created by bryan on 8/20/15.\n */\npublic class RuntimeTestResultImplTest {\n  private RuntimeTest runtimeTest;\n  private List<RuntimeTestResultEntry> runtimeTestResultEntries;\n  private long timeTaken;\n  private RuntimeTestResultImpl runtimeTestResult;\n  private RuntimeTestResultEntry runtimeTestResultEntry;\n  private RuntimeTestEntrySeverity info;\n\n  @Before\n  public void setup() {\n    runtimeTest = mock( RuntimeTest.class );\n    info = RuntimeTestEntrySeverity.INFO;\n    runtimeTestResultEntry = new RuntimeTestResultEntryImpl( info, \"testDesc\", \"testMessage\" );\n    runtimeTestResultEntries = new ArrayList<>( Arrays.asList( runtimeTestResultEntry ) );\n    timeTaken = 10L;\n    runtimeTestResult = new RuntimeTestResultImpl( runtimeTest, true,\n      new RuntimeTestResultSummaryImpl( runtimeTestResultEntry, runtimeTestResultEntries ), timeTaken );\n  }\n\n  @Test\n  public void testGetMaxSeverity() {\n    assertEquals( info, 
runtimeTestResult.getOverallStatusEntry().getSeverity() );\n  }\n\n  @Test\n  public void testGetRuntimeTestResultEntries() {\n    assertEquals( runtimeTestResultEntries, runtimeTestResult.getRuntimeTestResultEntries() );\n  }\n\n  @Test\n  public void testGetRuntimeTest() {\n    assertEquals( runtimeTest, runtimeTestResult.getRuntimeTest() );\n  }\n\n  @Test\n  public void testGetTimeTaken() {\n    assertEquals( timeTaken, runtimeTestResult.getTimeTaken() );\n  }\n\n  @Test\n  public void testToString() {\n    String string = runtimeTestResult.toString();\n    assertTrue( string.contains( info.toString() ) );\n    assertTrue( string.contains( runtimeTestResultEntry.toString() ) );\n    assertTrue( string.contains( runtimeTest.toString() ) );\n    assertTrue( string.contains( String.valueOf( timeTaken ) ) );\n  }\n}\n"
  },
  {
    "path": "assemblies/pentaho-big-data-plugin/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"\n         xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-assemblies</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-plugin</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n  <scm>\n    <connection>scm:git:git@github.com:${github.user}/${project.artifactId}.git</connection>\n    <developerConnection>scm:git:git@github.com:${github.user}/${project.artifactId}.git</developerConnection>\n    <url>scm:git:git@github.com:${github.user}/${project.artifactId}.git</url>\n  </scm>\n\n  <!-- Default properties (used when no specific variant profile is active) -->\n  <properties>\n    <hadoop.shim.artifactId>pentaho-hadoop-shims-cdpdc71</hadoop.shim.artifactId>\n    <hadoop.shim.destFileName>cdpdc71</hadoop.shim.destFileName>\n    <plugin.classifier>-cdp</plugin.classifier>\n    <active.hadoop.configuration>cdpdc71</active.hadoop.configuration>\n    <assembly.id>cdp</assembly.id>\n    <big-data-plugin.version>${project.version}</big-data-plugin.version>\n  </properties>\n\n  <profiles>\n    <!-- Profile for HDI variant -->\n    <profile>\n      <id>hdi</id>\n      <properties>\n        <hadoop.shim.artifactId>pentaho-hadoop-shims-hdi40</hadoop.shim.artifactId>\n        <hadoop.shim.destFileName>hdi40</hadoop.shim.destFileName>\n        <plugin.classifier>-hdi</plugin.classifier>\n        
<active.hadoop.configuration>hdi40</active.hadoop.configuration>\n        <assembly.id>hdi</assembly.id>\n      </properties>\n      <dependencies>\n        <dependency>\n          <groupId>org.pentaho.hadoop.shims</groupId>\n          <artifactId>pentaho-hadoop-shims-hdi40</artifactId>\n          <version>${pentaho-hadoop-shims.version}</version>\n          <type>zip</type>\n          <scope>provided</scope>\n          <exclusions>\n            <exclusion>\n              <groupId>*</groupId>\n              <artifactId>*</artifactId>\n            </exclusion>\n          </exclusions>\n        </dependency>\n      </dependencies>\n      <build>\n        <plugins>\n          <plugin>\n            <artifactId>maven-assembly-plugin</artifactId>\n            <executions>\n              <execution>\n                <id>pkg-hdi</id>\n                <phase>package</phase>\n                <goals>\n                  <goal>single</goal>\n                </goals>\n              </execution>\n            </executions>\n          </plugin>\n        </plugins>\n      </build>\n    </profile>\n\n    <!-- Profile for EMR variant -->\n    <profile>\n      <id>emr</id>\n      <properties>\n        <hadoop.shim.artifactId>pentaho-hadoop-shims-emr770</hadoop.shim.artifactId>\n        <hadoop.shim.destFileName>emr770</hadoop.shim.destFileName>\n        <plugin.classifier>-emr</plugin.classifier>\n        <active.hadoop.configuration>emr770</active.hadoop.configuration>\n        <assembly.id>emr</assembly.id>\n      </properties>\n      <dependencies>\n        <dependency>\n          <groupId>org.pentaho.hadoop.shims</groupId>\n          <artifactId>pentaho-hadoop-shims-emr770</artifactId>\n          <version>${pentaho-hadoop-shims.version}</version>\n          <type>zip</type>\n          <scope>provided</scope>\n          <exclusions>\n            <exclusion>\n              <groupId>*</groupId>\n              <artifactId>*</artifactId>\n            </exclusion>\n          
</exclusions>\n        </dependency>\n      </dependencies>\n      <build>\n        <plugins>\n          <plugin>\n            <artifactId>maven-assembly-plugin</artifactId>\n            <executions>\n              <execution>\n                <id>pkg-emr</id>\n                <phase>package</phase>\n                <goals>\n                  <goal>single</goal>\n                </goals>\n              </execution>\n            </executions>\n          </plugin>\n        </plugins>\n      </build>\n    </profile>\n\n    <!-- Profile for Dataproc variant -->\n    <profile>\n      <id>dataproc</id>\n      <properties>\n        <hadoop.shim.artifactId>pentaho-hadoop-shims-dataproc23</hadoop.shim.artifactId>\n        <hadoop.shim.destFileName>dataproc23</hadoop.shim.destFileName>\n        <plugin.classifier>-dataproc</plugin.classifier>\n        <active.hadoop.configuration>dataproc23</active.hadoop.configuration>\n        <assembly.id>dataproc</assembly.id>\n      </properties>\n      <dependencies>\n        <dependency>\n          <groupId>org.pentaho.hadoop.shims</groupId>\n          <artifactId>pentaho-hadoop-shims-dataproc23</artifactId>\n          <version>${pentaho-hadoop-shims.version}</version>\n          <type>zip</type>\n          <scope>provided</scope>\n          <exclusions>\n            <exclusion>\n              <groupId>*</groupId>\n              <artifactId>*</artifactId>\n            </exclusion>\n          </exclusions>\n        </dependency>\n      </dependencies>\n      <build>\n        <plugins>\n          <plugin>\n            <artifactId>maven-assembly-plugin</artifactId>\n            <executions>\n              <execution>\n                <id>pkg-dataproc</id>\n                <phase>package</phase>\n                <goals>\n                  <goal>single</goal>\n                </goals>\n              </execution>\n            </executions>\n          </plugin>\n        </plugins>\n      </build>\n    </profile>\n\n    <!-- Profile for CDP 
variant -->\n    <profile>\n      <id>cdp</id>\n      <activation>\n        <activeByDefault>true</activeByDefault>\n      </activation>\n      <properties>\n        <hadoop.shim.artifactId>pentaho-hadoop-shims-cdpdc71</hadoop.shim.artifactId>\n        <hadoop.shim.destFileName>cdpdc71</hadoop.shim.destFileName>\n        <plugin.classifier>-cdp</plugin.classifier>\n        <active.hadoop.configuration>cdpdc71</active.hadoop.configuration>\n        <assembly.id>cdp</assembly.id>\n      </properties>\n      <dependencies>\n        <dependency>\n          <groupId>org.pentaho.hadoop.shims</groupId>\n          <artifactId>pentaho-hadoop-shims-cdpdc71</artifactId>\n          <version>${pentaho-hadoop-shims.version}</version>\n          <type>zip</type>\n          <scope>provided</scope>\n          <exclusions>\n            <exclusion>\n              <groupId>*</groupId>\n              <artifactId>*</artifactId>\n            </exclusion>\n          </exclusions>\n        </dependency>\n      </dependencies>\n      <build>\n        <plugins>\n          <plugin>\n            <artifactId>maven-assembly-plugin</artifactId>\n            <executions>\n              <execution>\n                <id>pkg-cdp</id>\n                <phase>package</phase>\n                <goals>\n                  <goal>single</goal>\n                </goals>\n              </execution>\n            </executions>\n          </plugin>\n        </plugins>\n      </build>\n    </profile>\n\n    <!-- Profile for ApacheVanilla variant -->\n    <profile>\n      <id>apachevanilla</id>\n      <properties>\n        <hadoop.shim.artifactId>pentaho-hadoop-shims-apachevanilla</hadoop.shim.artifactId>\n        <hadoop.shim.destFileName>apachevanilla</hadoop.shim.destFileName>\n        <plugin.classifier>-apachevanilla</plugin.classifier>\n        <active.hadoop.configuration>apachevanilla</active.hadoop.configuration>\n        <assembly.id>apachevanilla</assembly.id>\n      </properties>\n      <dependencies>\n 
       <dependency>\n          <groupId>org.pentaho.hadoop.shims</groupId>\n          <artifactId>pentaho-hadoop-shims-apachevanilla</artifactId>\n          <version>${pentaho-hadoop-shims.version}</version>\n          <type>zip</type>\n          <scope>provided</scope>\n          <exclusions>\n            <exclusion>\n              <groupId>*</groupId>\n              <artifactId>*</artifactId>\n            </exclusion>\n          </exclusions>\n        </dependency>\n      </dependencies>\n      <build>\n        <plugins>\n          <plugin>\n            <artifactId>maven-assembly-plugin</artifactId>\n            <executions>\n              <execution>\n                <id>pkg-apachevanilla</id>\n                <phase>package</phase>\n                <goals>\n                  <goal>single</goal>\n                </goals>\n              </execution>\n            </executions>\n          </plugin>\n        </plugins>\n      </build>\n    </profile>\n\n\n  </profiles>\n\n  <dependencyManagement>\n    <dependencies>\n      <!-- SLF4J logging for PDI non-OSGi plugins should rely on the jars being provided in data-integration/lib -->\n      <dependency>\n        <groupId>org.slf4j</groupId>\n        <artifactId>slf4j-api</artifactId>\n        <version>${slf4j.version}</version>\n        <scope>provided</scope>\n      </dependency>\n      <dependency>\n        <groupId>org.apache.logging.log4j</groupId>\n        <artifactId>log4j-slf4j2-impl</artifactId>\n        <scope>provided</scope>\n      </dependency>\n    </dependencies>\n  </dependencyManagement>\n\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-browse</artifactId>\n      <version>${big-data-plugin.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-api-runtimeTest</artifactId>\n      <version>${big-data-plugin.version}</version>\n    </dependency>\n    <dependency>\n      
<groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-cluster</artifactId>\n      <version>${big-data-plugin.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-common-ui</artifactId>\n      <version>${big-data-plugin.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-common-job</artifactId>\n      <version>${big-data-plugin.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api-core</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.hadoop.shims</groupId>\n      <artifactId>pentaho-hadoop-shims-common-base</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.commons</groupId>\n      <artifactId>commons-vfs2-hdfs</artifactId>\n      <version>${commons-vfs2.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-authentication-mapper-impl</artifactId>\n      <version>${big-data-plugin.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.hadoop.shims</groupId>\n      <artifactId>pentaho-hadoop-shims-common-dependencies</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-clusterTests</artifactId>\n      <version>${big-data-plugin.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-authentication-mapper-api</artifactId>\n      <version>${big-data-plugin.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      
<artifactId>pentaho-hadoop-shims-common-services-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>pentaho-hadoop-shims-common-mapreduce</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>hadoop-cluster-ui</artifactId>\n      <version>${big-data-plugin.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy-core</artifactId>\n      <version>${big-data-plugin.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy</artifactId>\n      <version>${big-data-plugin.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-legacy-amazon-core</artifactId>\n      <version>${big-data-plugin.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-vfs-hdfs-core</artifactId>\n      <version>${big-data-plugin.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>services-bootstrap</artifactId>\n      <version>${big-data-plugin.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-iam</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>jmespath-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-core</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.fasterxml.jackson.dataformat</groupId>\n          
<artifactId>jackson-dataformat-cbor</artifactId>\n        </exclusion>\n        <exclusion>\n          <groupId>com.fasterxml.jackson.core</groupId>\n          <artifactId>jackson-databind</artifactId>\n        </exclusion>\n        <exclusion>\n          <groupId>software.amazon.ion</groupId>\n          <artifactId>ion-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-emr</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>jmespath-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-pricing</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>jmespath-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-s3</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>aws-java-sdk-kms</artifactId>\n        </exclusion>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>jmespath-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>joda-time</groupId>\n      <artifactId>joda-time</artifactId>\n      <version>${dependency.joda-time.revision}</version>\n      <exclusions>\n        <exclusion>\n          <artifactId>*</artifactId>\n          <groupId>*</groupId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.avro</groupId>\n      <artifactId>avro</artifactId>\n      <version>${org.apache.avro.version}</version>\n    </dependency>\n    <dependency>\n      
<groupId>org.eclipse.core</groupId>\n      <artifactId>commands</artifactId>\n      <version>${dependency.commands.revision}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.equinox</groupId>\n      <artifactId>common</artifactId>\n      <version>${dependency.common.revision}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.github.stephenc.high-scale-lib</groupId>\n      <artifactId>high-scale-lib</artifactId>\n      <version>${dependency.high-scale-lib.revision}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.httpcomponents</groupId>\n      <artifactId>httpclient</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.httpcomponents</groupId>\n      <artifactId>httpcore</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>net.java.dev.jets3t</groupId>\n      <artifactId>jets3t</artifactId>\n      <version>${dependency.jets3t.revision}</version>\n    </dependency>\n    <dependency>\n      <groupId>jline</groupId>\n      <artifactId>jline</artifactId>\n      <version>${dependency.jline.revision}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.thrift</groupId>\n      <artifactId>libthrift</artifactId>\n      <version>${org.apache.thrift.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>xmlpull</groupId>\n      <artifactId>xmlpull</artifactId>\n      <version>${dependency.xmlpull.revision}</version>\n    </dependency>\n    <dependency>\n      <groupId>xpp3</groupId>\n      <artifactId>xpp3_min</artifactId>\n      <version>${dependency.xpp3.revision}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.thoughtworks.xstream</groupId>\n      <artifactId>xstream</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-pig-plugin</artifactId>\n      <version>${big-data-plugin.version}</version>\n      <type>zip</type>\n      <scope>provided</scope>\n      
<exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-sqoop-plugin</artifactId>\n      <version>${big-data-plugin.version}</version>\n      <type>zip</type>\n      <scope>provided</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-hbase-plugin</artifactId>\n      <version>${big-data-plugin.version}</version>\n      <type>zip</type>\n      <scope>provided</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-mapreduce-plugin</artifactId>\n      <version>${big-data-plugin.version}</version>\n      <type>zip</type>\n      <scope>provided</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-hive-plugin</artifactId>\n      <version>${big-data-plugin.version}</version>\n      <type>zip</type>\n      <scope>provided</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-formats-plugin</artifactId>\n      <version>${big-data-plugin.version}</version>\n      <type>zip</type>\n      <scope>provided</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          
<artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-hdfs-plugin</artifactId>\n      <version>${big-data-plugin.version}</version>\n      <type>zip</type>\n      <scope>provided</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-oozie-plugin</artifactId>\n      <version>${big-data-plugin.version}</version>\n      <type>zip</type>\n      <scope>provided</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-spark-plugin</artifactId>\n      <version>${big-data-plugin.version}</version>\n      <type>zip</type>\n      <scope>provided</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-legacy-amazon-plugin</artifactId>\n      <version>${big-data-plugin.version}</version>\n      <type>zip</type>\n      <scope>provided</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    
</dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-assemblies-pmr-libraries</artifactId>\n      <version>${big-data-plugin.version}</version>\n      <type>zip</type>\n      <classifier>plugin</classifier>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-plugin-samples</artifactId>\n      <version>${big-data-plugin.version}</version>\n      <type>zip</type>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n  </dependencies>\n  <build>\n    <resources>\n      <resource>\n        <filtering>true</filtering>\n        <directory>src/main/resources</directory>\n        <includes>\n          <include>**/*.properties</include>\n        </includes>\n      </resource>\n      <resource>\n        <filtering>false</filtering>\n        <directory>src/main/resources</directory>\n        <excludes>\n          <exclude>**/*.properties</exclude>\n        </excludes>\n      </resource>\n    </resources>\n    <pluginManagement>\n      <plugins>\n        <plugin>\n          <artifactId>maven-assembly-plugin</artifactId>\n          <configuration>\n            <descriptors>\n              <descriptor>${basedir}/src/main/assembly/descriptors/plugin.xml</descriptor>\n            </descriptors>\n          </configuration>\n          <dependencies>\n            <dependency>\n              <groupId>org.codehaus.plexus</groupId>\n              <artifactId>plexus-interpolation</artifactId>\n              <version>1.26</version>\n            </dependency>\n          </dependencies>\n        </plugin>\n      </plugins>\n    </pluginManagement>\n    <plugins>\n      <plugin>\n        <artifactId>maven-dependency-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>unpack</id>\n            <phase>generate-resources</phase>\n            
<goals>\n              <goal>unpack</goal>\n            </goals>\n            <configuration>\n              <artifactItems>\n                <artifactItem>\n                  <groupId>org.pentaho.hadoop.shims</groupId>\n                  <artifactId>${hadoop.shim.artifactId}</artifactId>\n                  <type>zip</type>\n                  <outputDirectory>${basedir}/target/plugins/pentaho-big-data-plugin/hadoop-configurations</outputDirectory>\n                  <destFileName>${hadoop.shim.destFileName}</destFileName>\n                </artifactItem>\n                <!-- Plugins unpack -->\n                <artifactItem>\n                  <groupId>pentaho</groupId>\n                  <artifactId>pdi-pig-plugin</artifactId>\n                  <type>zip</type>\n                  <outputDirectory>${basedir}/target/plugins/pentaho-big-data-plugin/plugins</outputDirectory>\n                </artifactItem>\n                <artifactItem>\n                  <groupId>pentaho</groupId>\n                  <artifactId>pdi-hbase-plugin</artifactId>\n                  <type>zip</type>\n                  <outputDirectory>${basedir}/target/plugins/pentaho-big-data-plugin/plugins</outputDirectory>\n                </artifactItem>\n                <artifactItem>\n                  <groupId>pentaho</groupId>\n                  <artifactId>pdi-hdfs-plugin</artifactId>\n                  <type>zip</type>\n                  <outputDirectory>${basedir}/target/plugins/pentaho-big-data-plugin/plugins</outputDirectory>\n                </artifactItem>\n                <artifactItem>\n                  <groupId>pentaho</groupId>\n                  <artifactId>pdi-sqoop-plugin</artifactId>\n                  <type>zip</type>\n                  <outputDirectory>${basedir}/target/plugins/pentaho-big-data-plugin/plugins</outputDirectory>\n                </artifactItem>\n                <artifactItem>\n                  <groupId>pentaho</groupId>\n                  
<artifactId>pdi-oozie-plugin</artifactId>\n                  <type>zip</type>\n                  <outputDirectory>${basedir}/target/plugins/pentaho-big-data-plugin/plugins</outputDirectory>\n                </artifactItem>\n                <artifactItem>\n                  <groupId>pentaho</groupId>\n                  <artifactId>pdi-spark-plugin</artifactId>\n                  <type>zip</type>\n                  <outputDirectory>${basedir}/target/plugins/pentaho-big-data-plugin/plugins</outputDirectory>\n                </artifactItem>\n                <artifactItem>\n                  <groupId>pentaho</groupId>\n                  <artifactId>pdi-mapreduce-plugin</artifactId>\n                  <type>zip</type>\n                  <outputDirectory>${basedir}/target/plugins/pentaho-big-data-plugin/plugins</outputDirectory>\n                </artifactItem>\n                <artifactItem>\n                  <groupId>pentaho</groupId>\n                  <artifactId>pdi-hive-plugin</artifactId>\n                  <type>zip</type>\n                  <outputDirectory>${basedir}/target/plugins/pentaho-big-data-plugin/plugins</outputDirectory>\n                </artifactItem>\n                <artifactItem>\n                  <groupId>pentaho</groupId>\n                  <artifactId>pdi-formats-plugin</artifactId>\n                  <type>zip</type>\n                  <outputDirectory>${basedir}/target/plugins/pentaho-big-data-plugin/plugins</outputDirectory>\n                </artifactItem>\n                <artifactItem>\n                  <groupId>pentaho</groupId>\n                  <artifactId>pdi-legacy-amazon-plugin</artifactId>\n                  <type>zip</type>\n                  <outputDirectory>${basedir}/target/plugins/pentaho-big-data-plugin/plugins</outputDirectory>\n                </artifactItem>\n              </artifactItems>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n      <!-- Copy and filter assembly 
resources -->\n      <plugin>\n        <artifactId>maven-resources-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>filter-assembly-resources</id>\n            <phase>process-resources</phase>\n            <goals>\n              <goal>copy-resources</goal>\n            </goals>\n            <configuration>\n              <outputDirectory>${basedir}/target/filtered-assembly-resources</outputDirectory>\n              <resources>\n                <resource>\n                  <directory>src/main/assembly/resources</directory>\n                  <filtering>true</filtering>\n                  <includes>\n                    <include>**/*.properties</include>\n                  </includes>\n                </resource>\n                <resource>\n                  <directory>src/main/assembly/resources</directory>\n                  <filtering>false</filtering>\n                  <excludes>\n                    <exclude>**/*.properties</exclude>\n                  </excludes>\n                </resource>\n              </resources>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n      <plugin>\n        <artifactId>maven-assembly-plugin</artifactId>\n      </plugin>\n    </plugins>\n  </build>\n</project>"
  },
  {
    "path": "assemblies/pentaho-big-data-plugin/src/main/assembly/descriptors/plugin.xml",
    "content": "<assembly>\n    <id>${assembly.id}</id>\n    <baseDirectory>pentaho-big-data-plugin</baseDirectory>\n    <formats>\n        <format>zip</format>\n    </formats>\n    <fileSets>\n        <!-- Use filtered assembly resources (plugin.properties with variant-specific values) -->\n        <fileSet>\n            <directory>target/filtered-assembly-resources</directory>\n            <outputDirectory>/</outputDirectory>\n        </fileSet>\n        <fileSet>\n            <directory>target/plugins/pentaho-big-data-plugin/hadoop-configurations</directory>\n            <outputDirectory>/hadoop-configurations</outputDirectory>\n        </fileSet>\n        <fileSet>\n            <directory>target/plugins/pentaho-big-data-plugin/plugins</directory>\n            <outputDirectory>/</outputDirectory>\n        </fileSet>\n    </fileSets>\n    <dependencySets>\n        <dependencySet>\n            <includes>\n                <include>pentaho:pentaho-big-data-plugin-samples:zip</include>\n            </includes>\n            <unpack>true</unpack>\n            <outputDirectory>.</outputDirectory>\n            <useTransitiveDependencies>false</useTransitiveDependencies>\n            <useProjectArtifact>false</useProjectArtifact>\n        </dependencySet>\n        <dependencySet>\n            <outputDirectory>/</outputDirectory>\n            <unpack>false</unpack>\n            <scope>runtime</scope>\n            <useProjectArtifact>false</useProjectArtifact>\n            <outputFileNameMapping>${artifact.artifactId}-${artifact.baseVersion}.${artifact.extension}\n            </outputFileNameMapping>\n            <includes>\n              <include>pentaho:pentaho-big-data-legacy</include>\n              <include>pentaho:pentaho-big-data-legacy-amazon</include>\n              <include>pentaho:hadoop-cluster-ui</include>\n              <include>pentaho:services-bootstrap</include>\n                <include>pentaho:pentaho-big-data-kettle-plugins-kafka</include>\n            
</includes>\n        </dependencySet>\n        <dependencySet>\n            <outputDirectory>/</outputDirectory>\n            <unpack>false</unpack>\n            <scope>runtime</scope>\n            <useProjectArtifact>false</useProjectArtifact>\n            <outputFileNameMapping>pentaho-mapreduce-libraries.zip</outputFileNameMapping>\n            <includes>\n              <include>pentaho:pentaho-big-data-assemblies-pmr-libraries</include>\n            </includes>\n        </dependencySet>\n        <dependencySet>\n            <outputDirectory>/lib</outputDirectory>\n            <unpack>false</unpack>\n            <scope>runtime</scope>\n            <useProjectArtifact>false</useProjectArtifact>\n            <outputFileNameMapping>${artifact.artifactId}-${artifact.baseVersion}.${artifact.extension}\n            </outputFileNameMapping>\n            <includes>\n                <include>pentaho:pentaho-big-data-legacy-core</include>\n                <include>pentaho:pentaho-big-data-api-runtimeTest</include>\n                <include>pentaho:pentaho-big-data-impl-clusterTests</include>\n                <include>pentaho:pentaho-big-data-impl-cluster</include>\n                <include>pentaho:pentaho-big-data-kettle-plugins-common-ui</include>\n                <include>pentaho:pentaho-big-data-kettle-plugins-common-job</include>\n                <include>pentaho:pentaho-big-data-impl-vfs-hdfs-core</include>\n                <include>pentaho:pentaho-authentication-mapper-impl</include>\n                <include>pentaho:pentaho-big-data-kettle-plugins-browse</include>\n                <include>org.pentaho:shim-api-core</include>\n                <include>org.pentaho.hadoop.shims:pentaho-hadoop-shims-common-base</include>\n                <include>org.pentaho.hadoop.shims:pentaho-hadoop-shims-common-dependencies</include>\n                <include>org.pentaho:pentaho-hadoop-shims-common-services-api</include>\n                
<include>org.pentaho:pentaho-hadoop-shims-common-mapreduce</include>\n                <include>com.pentaho:pentaho-yarn-api</include>\n                <include>pentaho:pentaho-authentication-mapper-api</include>\n                <include>org.apache.avro:avro</include>\n                <include>joda-time:joda-time</include>\n                <include>com.amazonaws:aws-java-sdk-core</include>\n                <include>com.amazonaws:aws-java-sdk-iam</include>\n                <include>com.amazonaws:aws-java-sdk-emr</include>\n                <include>com.amazonaws:aws-java-sdk-s3</include>\n                <include>com.amazonaws:aws-java-sdk-pricing</include>\n                <include>org.eclipse.core:commands</include>\n                <include>org.eclipse.equinox:common</include>\n                <include>commons-cli:commons-cli</include>\n                <include>com.github.stephenc.high-scale-lib:high-scale-lib</include>\n                <include>org.codehaus.jackson:jackson-core-asl</include>\n                <include>net.java.dev.jets3t:jets3t</include>\n                <include>jline:jline</include>\n                <include>com.googlecode.json-simple:json-simple</include>\n                <include>org.apache.commons:commons-vfs2-hdfs</include>\n                <include>xmlpull:xmlpull</include>\n                <include>xpp3:xpp3_min</include>\n                <include>com.thoughtworks.xstream:xstream</include>\n                <include>org.slf4j:slf4j-reload4j</include>\n                <include>org.apache.httpcomponents:httpmime</include>\n            </includes>\n        </dependencySet>\n        <dependencySet>\n          <outputDirectory>/</outputDirectory>\n          <useTransitiveDependencies>false</useTransitiveDependencies>\n          
<useProjectArtifact>false</useProjectArtifact>\n          <scope>provided</scope>\n          <unpack>true</unpack>\n          <unpackOptions>\n            <includes>\n              <include>PentahoBigDataPlugin_OSS_Licenses.html</include>\n            </includes>\n          </unpackOptions>\n        </dependencySet>\n    </dependencySets>\n</assembly>\n"
  },
  {
    "path": "assemblies/pentaho-big-data-plugin/src/main/assembly/resources/bigdata-logging.properties",
    "content": "# Big Data Plugin Logging Configuration\n# \n# This file defines the loggers that should be configured for Big Data plugin components.\n# Each logger entry should follow the format: logger.<name>=<level>\n# \n# Available log levels: TRACE, DEBUG, INFO, WARN, ERROR, FATAL\n\n# Pentaho Big Data Plugin Loggers (org.* packages only)\nlogger.org.pentaho.big.data=INFO\n\n# Hadoop Core Loggers\nlogger.org.apache.hadoop=INFO\nlogger.org.apache.hadoop.io.retry=WARN\n\n# HBase Loggers\nlogger.org.apache.hbase=INFO\n\n# Hive Loggers\nlogger.org.apache.hive=INFO\n\n# Sqoop Loggers\nlogger.org.apache.sqoop=INFO\n\n# Kafka Loggers\nlogger.org.apache.kafka=WARN\n\n# Spark Loggers\nlogger.org.apache.spark=WARN\n\n# Add custom loggers below as needed\n# logger.my.custom.package=DEBUG\n"
  },
  {
    "path": "assemblies/pentaho-big-data-plugin/src/main/assembly/resources/classpath.properties",
    "content": "classpath=./${hadoop.configurations.path}/${active.hadoop.configuration}"
  },
  {
    "path": "assemblies/pentaho-big-data-plugin/src/main/assembly/resources/hadoop-configurations/.kettle-ignore",
    "content": ""
  },
  {
    "path": "assemblies/pentaho-big-data-plugin/src/main/assembly/resources/plugin.properties",
    "content": "# The Hadoop Configuration to use when communicating with a Hadoop cluster. This is used for all Hadoop client tools\n# including HDFS, Hive, HBase, and Sqoop.\n# For more configuration options specific to the Hadoop configuration choosen\n# here see the config.properties file in that configuration's directory.\n# Note: should no longer be used and will be ignored after named cluster configurations have been updated for 9.0+\n# Will only be referenced if site configuration files are not found in the expected locations in the metastore folders\nactive.hadoop.configuration=${active.hadoop.configuration}\n\n# If using shim configurations from 8.3 and prior and have not migrated to 9.0 configurations:\n#     Path to the directory that contains the available Hadoop configurations\n# If using shim configurations from 9.0+:\n#     Path to metastore to use when running on a Pentaho slave server (e.g. Pan or Kitchen)\n#     To use an existing PDI metastore, for example, the directory is /home/user/.pentaho\nhadoop.configurations.path=hadoop-configurations\n\n# Version of Kettle to use from the Kettle HDFS installation directory. This can be set globally here or overridden per job\n# as a User Defined property. If not set we will use the version of Kettle that is used to submit the Pentaho MapReduce job.\npmr.kettle.installation.id=\n\n# Installation path in HDFS for the Pentaho MapReduce Hadoop Distribution\n# The directory structure should follow this structure where {version} can be configured through the Pentaho MapReduce\n# User Defined properties as kettle.runtime.version\n#\n# /opt/pentaho/mapreduce/\n#  +- {version}/\n#  |   +- lib/\n#  |       ..\n#  |       +- kettle-core-{version}.jar\n#  |       +- kettle-engine-{version}.jar\n#  |       ..\n#  |   +- plugins/\n#  |       pentaho-big-data-plugin/\n#  |        .. 
(additional optional plugins)\n#  +- {another-version}/\n#  |   +- lib/\n#  |       ..\n#  |       +- kettle-core-{version}.jar\n#  |       +- kettle-engine-{version}.jar\n#  |       ..\n#  |   +- plugins/\n#  |       +- pentaho-big-data-plugin/\n#  |           ..\n#  |       +- my-custom-plugin/\n#  |           ..\npmr.kettle.dfs.install.dir=/opt/pentaho/mapreduce\n\n# Enables the use of Hadoop's Distributed Cache to store the Kettle environment required to execute Pentaho MapReduce\n# If this is disabled you must configure all TaskTracker nodes with the Pentaho for Hadoop Distribution\n# @deprecated This is deprecated and is provided as a migration path for existing installations.\npmr.use.distributed.cache=true\n\n# Pentaho MapReduce runtime archive to be preloaded into kettle.hdfs.install.dir/pmr.kettle.installation.id\npmr.libraries.archive.file=pentaho-mapreduce-libraries.zip\n\n# Additional plugins to be copied when Pentaho MapReduce's Kettle Environment does not exist on DFS. This should be a comma-separated\n# list of plugin folder names to copy.\n# e.g. pmr.kettle.additional.plugins=my-test-plugin,steps/DummyPlugin\npmr.kettle.additional.plugins=pdi-core-plugins,pentaho-metastore-locator-plugin\n\n# Individual file name prefixes to not include when adding plugins to the Pentaho MapReduce Kettle Environment. This should be a comma-separated\n# list of file prefixes to exclude.  The pdi-core-plugins-ui value should not be removed, only added to as demonstrated below.\n# e.g. 
to exclude the file some-jar-file-name-9.0.0.0-xxx.jar: pmr.kettle.exclude.plugin.files=pdi-core-plugins-ui,some-jar-file-name\npmr.kettle.exclude.plugin.files=pdi-core-plugins-ui\n\nnotificationsBeforeLoadingShim=1\n\n# pmr.create.unique.metastore.dir:\n# If the property is not present or set to true, a unique metastore directory is created for each execution of a Pentaho MapReduce job\n# If the property is set to false, a single metastore directory is created and is overwritten for each Pentaho MapReduce job\n# Setting this property to false will save space within HDFS but can cause concurrency and security issues if multiple users are using the same \n# pmr.kettle.dfs.install.dir.\npmr.create.unique.metastore.dir=true\n\n# Value to use for the ipc.client.connect.max.retries.on.timeouts Hadoop property when connecting to HDFS\n# Note that this value overrides any ipc.client.connect.max.retries.on.timeouts set in *site.xml files for individual\n# Hadoop cluster definitions\nhadoopfs.ipc.client.connect.max.retries.on.timeouts=5\n\n# These clauses are added on the java command line when executing a \"Start PDI Cluster on Yarn\" step\nyarn.additional.jvm.options=--add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.lang.reflect=ALL-UNNAMED --add-opens java.base/sun.net.www.protocol.jar=ALL-UNNAMED --add-opens java.base/java.util=ALL-UNNAMED --add-opens java.base/java.io=ALL-UNNAMED --add-opens java.base/java.net=ALL-UNNAMED --add-opens java.base/java.security=ALL-UNNAMED --add-opens java.base/sun.net.www.protocol.file=ALL-UNNAMED --add-opens java.base/sun.net.www.protocol.ftp=ALL-UNNAMED --add-opens java.base/sun.net.www.protocol.http=ALL-UNNAMED --add-opens java.base/sun.reflect.misc=ALL-UNNAMED --add-opens java.management/javax.management=ALL-UNNAMED --add-opens java.management/javax.management.openmbean=ALL-UNNAMED --add-opens 
java.naming/com.sun.jndi.ldap=ALL-UNNAMED --add-opens java.base/java.math=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED\n"
  },
  {
    "path": "assemblies/pentaho-big-data-plugin/src/main/assembly/resources/plugins/.gitignore",
    "content": ""
  },
  {
    "path": "assemblies/pmr-libraries/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-assemblies</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-assemblies-pmr-libraries</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <properties>\n    <org.eclipse.swt.gtk.linux.x86_64.version>3.108.0</org.eclipse.swt.gtk.linux.x86_64.version>\n    <javax.activation-api.version>1.1</javax.activation-api.version>\n    <javax.mail.version>1.4.7</javax.mail.version>\n    <jsp-api.version>2.0</jsp-api.version>\n    <javax.servlet-api.version>3.0.1</javax.servlet-api.version>\n    <jakarta.servlet.version>6.0.0</jakarta.servlet.version>\n    <validation-api.version>1.0.0.GA</validation-api.version>\n    <ehcache.version>2.5.1</ehcache.version>\n    <jersey.version>1.19.1</jersey.version>\n    <hibernate-commons-annotations.version>5.1.2.Final</hibernate-commons-annotations.version>\n    <hibernate-core.version>5.6.15.Final</hibernate-core.version>\n    <publish-sonar-phase>site</publish-sonar-phase>\n    <blueprints-core.version>2.6.0</blueprints-core.version>\n    <rxjava.version>2.2.3</rxjava.version>\n    <jsch.version>0.2.9</jsch.version>\n    <encryption-support.version>11.1.0.0-SNAPSHOT</encryption-support.version>\n    <pdi-osgi-bridge.version>11.1.0.0-SNAPSHOT</pdi-osgi-bridge.version>\n    <osgi.core.version>8.0.0</osgi.core.version>\n    <gwt.version>2.12.2</gwt.version>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>org.osgi</groupId>\n      <artifactId>osgi.core</artifactId>\n      <version>${osgi.core.version}</version>\n      <scope>runtime</scope>\n    
</dependency>\n    <dependency>\n      <groupId>io.reactivex.rxjava2</groupId>\n      <artifactId>rxjava</artifactId>\n      <version>${rxjava.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.antlr</groupId>\n      <artifactId>antlr-complete</artifactId>\n      <version>3.5.2</version>\n    </dependency>\n    <dependency>\n      <groupId>ascsapjco3wrp</groupId>\n      <artifactId>ascsapjco3wrp</artifactId>\n      <version>20100529</version>\n    </dependency>\n    <dependency>\n      <groupId>asm</groupId>\n      <artifactId>asm</artifactId>\n      <version>3.2</version>\n    </dependency>\n    <dependency>\n      <groupId>org.bouncycastle</groupId>\n      <artifactId>bcmail-jdk15to18</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.bouncycastle</groupId>\n      <artifactId>bcpkix-jdk15to18</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.bouncycastle</groupId>\n      <artifactId>bcutil-jdk15to18</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.bouncycastle</groupId>\n      <artifactId>bcprov-jdk15to18</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>bsf</groupId>\n      <artifactId>bsf</artifactId>\n      <version>2.4.0</version>\n    </dependency>\n    <dependency>\n      <groupId>cglib</groupId>\n      <artifactId>cglib-nodep</artifactId>\n      <version>2.2</version>\n    </dependency>\n    <dependency>\n      <groupId>com.enterprisedt</groupId>\n      <artifactId>edtftpj</artifactId>\n      <version>2.1.0</version>\n    </dependency>\n    <dependency>\n      <groupId>com.google.code.findbugs</groupId>\n      <artifactId>jsr305</artifactId>\n      <version>1.3.9</version>\n    </dependency>\n    <dependency>\n      <groupId>com.googlecode.jsendnsca</groupId>\n      <artifactId>jsendnsca</artifactId>\n      <version>2.0.1</version>\n    </dependency>\n    <dependency>\n      <groupId>com.googlecode.json-simple</groupId>\n      
<artifactId>json-simple</artifactId>\n      <version>1.1</version>\n    </dependency>\n    <dependency>\n      <groupId>com.google.gdata</groupId>\n      <artifactId>core</artifactId>\n      <version>1.47.1</version>\n    </dependency>\n    <dependency>\n      <groupId>com.google.gdata</groupId>\n      <artifactId>gdata-analytics</artifactId>\n      <version>2.3.0</version>\n    </dependency>\n    <dependency>\n      <groupId>com.google.gdata</groupId>\n      <artifactId>gdata-analytics-meta</artifactId>\n      <version>2.1</version>\n    </dependency>\n    <dependency>\n      <groupId>com.google.gdata</groupId>\n      <artifactId>gdata-client</artifactId>\n      <version>1.41.4</version>\n    </dependency>\n    <dependency>\n      <groupId>com.google.gdata</groupId>\n      <artifactId>gdata-client-meta</artifactId>\n      <version>1.0</version>\n    </dependency>\n    <dependency>\n      <groupId>com.google.gdata</groupId>\n      <artifactId>gdata-core</artifactId>\n      <version>1.41.4</version>\n    </dependency>\n    <dependency>\n      <groupId>com.google.guava</groupId>\n      <artifactId>guava</artifactId>\n      <version>${guava.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.healthmarketscience.jackcess</groupId>\n      <artifactId>jackcess</artifactId>\n      <version>1.2.6</version>\n    </dependency>\n    <dependency>\n      <groupId>com.github.mwiede</groupId>\n      <artifactId>jsch</artifactId>\n      <version>${jsch.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.github.librepdf</groupId>\n      <artifactId>openpdf</artifactId>\n      <version>${openpdf.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>bouncycastle</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n        <exclusion>\n          <groupId>org.bouncycastle</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    
<dependency>\n      <groupId>com.github.librepdf</groupId>\n      <artifactId>openrtf</artifactId>\n      <version>${openrtf.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>bouncycastle</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n        <exclusion>\n          <groupId>org.bouncycastle</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.tinkerpop.blueprints</groupId>\n      <artifactId>blueprints-core</artifactId>\n      <version>${blueprints-core.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>commons-beanutils</groupId>\n      <artifactId>commons-beanutils</artifactId>\n      <version>${commons-beanutils.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>commons-cli</groupId>\n      <artifactId>commons-cli</artifactId>\n      <version>1.2</version>\n    </dependency>\n    <dependency>\n      <groupId>commons-collections</groupId>\n      <artifactId>commons-collections</artifactId>\n      <version>3.2.2</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.commons</groupId>\n      <artifactId>commons-configuration2</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>commons-configuration</groupId>\n      <artifactId>commons-configuration</artifactId>\n      <version>1.6</version>\n    </dependency>\n    <dependency>\n      <groupId>commons-digester</groupId>\n      <artifactId>commons-digester</artifactId>\n      <version>1.8</version>\n    </dependency>\n    <dependency>\n      <groupId>commons-discovery</groupId>\n      <artifactId>commons-discovery</artifactId>\n      <version>0.4</version>\n    </dependency>\n    <dependency>\n      <groupId>commons-fileupload</groupId>\n      <artifactId>commons-fileupload</artifactId>\n      <version>${commons-fileupload-osgi.version}</version>\n    </dependency>\n    <dependency>\n      
<groupId>commons-io</groupId>\n      <artifactId>commons-io</artifactId>\n      <version>${commons-io.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>commons-lang</groupId>\n      <artifactId>commons-lang</artifactId>\n      <version>2.6</version>\n    </dependency>\n    <dependency>\n      <groupId>commons-math</groupId>\n      <artifactId>commons-math</artifactId>\n      <version>1.1</version>\n    </dependency>\n    <dependency>\n      <groupId>commons-validator</groupId>\n      <artifactId>commons-validator</artifactId>\n      <version>1.3.1</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.logging.log4j</groupId>\n      <artifactId>log4j-core</artifactId>\n      <version>${log4j.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.logging.log4j</groupId>\n      <artifactId>log4j-api</artifactId>\n      <version>${log4j.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.commons</groupId>\n      <artifactId>commons-vfs2</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>com.sun.jersey.contribs</groupId>\n      <artifactId>jersey-apache-client</artifactId>\n      <version>${jersey.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.sun.jersey</groupId>\n      <artifactId>jersey-bundle</artifactId>\n      <version>${jersey.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.sun.jersey</groupId>\n      <artifactId>jersey-client</artifactId>\n      <version>${jersey.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.sun.jersey</groupId>\n      <artifactId>jersey-core</artifactId>\n      <version>${jersey.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.thoughtworks.xstream</groupId>\n      <artifactId>xstream</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.dom4j</groupId>\n      <artifactId>dom4j</artifactId>\n    
</dependency>\n    <dependency>\n      <groupId>eigenbase</groupId>\n      <artifactId>eigenbase-properties</artifactId>\n      <version>1.1.2</version>\n    </dependency>\n    <dependency>\n      <groupId>eigenbase</groupId>\n      <artifactId>eigenbase-resgen</artifactId>\n      <version>1.3.1</version>\n    </dependency>\n    <dependency>\n      <groupId>eigenbase</groupId>\n      <artifactId>eigenbase-xom</artifactId>\n      <version>1.3.5</version>\n    </dependency>\n    <dependency>\n      <groupId>feed4j</groupId>\n      <artifactId>feed4j</artifactId>\n      <version>1.0</version>\n    </dependency>\n    <dependency>\n      <groupId>ftp4che</groupId>\n      <artifactId>ftp4che</artifactId>\n      <version>0.7.1</version>\n    </dependency>\n    <dependency>\n      <groupId>infobright</groupId>\n      <artifactId>infobright-core</artifactId>\n      <version>3.4</version>\n    </dependency>\n    <dependency>\n      <groupId>org.codehaus.janino</groupId>\n      <artifactId>janino</artifactId>\n      <version>${janino.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>javacup</groupId>\n      <artifactId>javacup</artifactId>\n      <version>10k</version>\n    </dependency>\n    <dependency>\n      <groupId>javadbf</groupId>\n      <artifactId>javadbf</artifactId>\n      <version>20081125</version>\n    </dependency>\n    <dependency>\n      <groupId>org.javassist</groupId>\n      <artifactId>javassist</artifactId>\n      <version>3.20.0-GA</version>\n    </dependency>\n    <dependency>\n      <groupId>javax.activation</groupId>\n      <artifactId>activation</artifactId>\n      <version>${javax.activation-api.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>javax.mail</groupId>\n      <artifactId>mail</artifactId>\n      <version>${javax.mail.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>javax.servlet</groupId>\n      <artifactId>jsp-api</artifactId>\n      
<version>${jsp-api.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>javax.servlet</groupId>\n      <artifactId>javax.servlet-api</artifactId>\n      <version>${javax.servlet-api.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>jakarta.servlet</groupId>\n      <artifactId>jakarta.servlet-api</artifactId>\n      <version>${jakarta.servlet.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>javax.validation</groupId>\n      <artifactId>validation-api</artifactId>\n      <version>${validation-api.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>javax.xml</groupId>\n      <artifactId>jaxrpc-api</artifactId>\n      <version>1.1</version>\n    </dependency>\n    <dependency>\n      <groupId>jaxen</groupId>\n      <artifactId>jaxen</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>jcifs</groupId>\n      <artifactId>jcifs</artifactId>\n      <version>1.3.3</version>\n    </dependency>\n    <dependency>\n      <groupId>jexcelapi</groupId>\n      <artifactId>jxl</artifactId>\n      <version>2.6.12</version>\n    </dependency>\n    <dependency>\n      <groupId>jfree</groupId>\n      <artifactId>jcommon</artifactId>\n      <version>1.0.16</version>\n    </dependency>\n    <dependency>\n      <groupId>jfree</groupId>\n      <artifactId>jfreechart</artifactId>\n      <version>1.0.13</version>\n    </dependency>\n    <dependency>\n      <groupId>jsonpath</groupId>\n      <artifactId>jsonpath</artifactId>\n      <version>1.0</version>\n    </dependency>\n    <dependency>\n      <groupId>jug-lgpl</groupId>\n      <artifactId>jug-lgpl</artifactId>\n      <version>2.0.0</version>\n    </dependency>\n    <dependency>\n      <groupId>ldapjdk</groupId>\n      <artifactId>ldapjdk</artifactId>\n      <version>20000524</version>\n    </dependency>\n    <dependency>\n      <groupId>monetdb</groupId>\n      <artifactId>monetdb-jdbc</artifactId>\n      <version>2.8</version>\n    
</dependency>\n    <dependency>\n      <groupId>net.java.dev.javacc</groupId>\n      <artifactId>javacc</artifactId>\n      <version>5.0</version>\n    </dependency>\n    <dependency>\n      <groupId>net.sf.ehcache</groupId>\n      <artifactId>ehcache-core</artifactId>\n      <version>${ehcache.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>net.sf.saxon</groupId>\n      <artifactId>Saxon-HE</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.xmlresolver</groupId>\n      <artifactId>xmlresolver</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>net.sf.scannotation</groupId>\n      <artifactId>scannotation</artifactId>\n      <version>1.0.2</version>\n      <exclusions>\n        <exclusion>\n          <artifactId>javassist</artifactId>\n          <groupId>javassist</groupId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>ognl</groupId>\n      <artifactId>ognl</artifactId>\n      <version>2.6.9</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.commons</groupId>\n      <artifactId>commons-compress</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.commons</groupId>\n      <artifactId>commons-collections4</artifactId>\n      <version>4.1</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlbeans</groupId>\n      <artifactId>xmlbeans</artifactId>\n      <version>${xmlbeans.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-anim</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-awt-util</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-bridge</artifactId>\n      
<version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-codec</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-css</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-dom</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-ext</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-gui-util</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-gvt</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-parser</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-script</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-svg-dom</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-transcoder</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-util</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      
<groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-xml</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-constants</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-i18n</artifactId>\n      <version>${batik.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache-extras.beanshell</groupId>\n      <artifactId>bsh</artifactId>\n      <version>${beanshell.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.codehaus.groovy</groupId>\n      <artifactId>groovy</artifactId>\n      <version>2.4.21</version>\n      <exclusions>\n        <exclusion>\n          <artifactId>*</artifactId>\n          <groupId>*</groupId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.fasterxml.jackson.jaxrs</groupId>\n      <artifactId>jackson-jaxrs-json-provider</artifactId>\n      <version>${fasterxml-jackson.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.drools</groupId>\n      <artifactId>drools-ruleunits-engine</artifactId>\n      <version>8.44.0.Final</version>\n    </dependency>\n    <dependency>\n      <groupId>org.drools</groupId>\n      <artifactId>drools-compiler</artifactId>\n      <version>8.44.0.Final</version>\n    </dependency>\n    <dependency>\n      <groupId>org.drools</groupId>\n      <artifactId>drools-core</artifactId>\n      <version>8.44.0.Final</version>\n    </dependency>\n    <dependency>\n      <groupId>org.eobjects.sassyreader</groupId>\n      <artifactId>SassyReader</artifactId>\n      <version>0.5</version>\n    </dependency>\n    <dependency>\n      <groupId>org.fife.ui</groupId>\n      <artifactId>rsyntaxtextarea</artifactId>\n      <version>1.3.2</version>\n    </dependency>\n    
<dependency>\n      <groupId>org.hibernate.common</groupId>\n      <artifactId>hibernate-commons-annotations</artifactId>\n      <version>${hibernate-commons-annotations.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.hibernate</groupId>\n      <artifactId>hibernate-core</artifactId>\n      <version>${hibernate-core.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.hibernate</groupId>\n      <artifactId>hibernate-ehcache</artifactId>\n      <version>${hibernate-core.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>json</artifactId>\n      <version>${pentaho-json.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.mnode.mstor</groupId>\n      <artifactId>mstor</artifactId>\n      <version>0.9.13</version>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>org.mvel</groupId>\n      <artifactId>mvel2</artifactId>\n      <version>2.0.10</version>\n    </dependency>\n    <dependency>\n      <groupId>org.odftoolkit</groupId>\n      <artifactId>odfdom-java</artifactId>\n      <version>0.8.6</version>\n    </dependency>\n    <dependency>\n      <groupId>org.olap4j</groupId>\n      <artifactId>olap4j</artifactId>\n      <version>1.2.0</version>\n    </dependency>\n    <dependency>\n      <groupId>org.olap4j</groupId>\n      <artifactId>olap4j-xmla</artifactId>\n      <version>1.2.0</version>\n    </dependency>\n    <dependency>\n      <groupId>org.owasp.encoder</groupId>\n      <artifactId>encoder</artifactId>\n      <version>1.2</version>\n    </dependency>\n    <dependency>\n      <groupId>org.postgresql</groupId>\n      <artifactId>postgresql</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.safehaus.jug</groupId>\n      <artifactId>jug-lgpl</artifactId>\n    
  <version>2.0.0</version>\n    </dependency>\n    <dependency>\n      <groupId>org.samba.jcifs</groupId>\n      <artifactId>jcifs</artifactId>\n      <version>1.3.3</version>\n    </dependency>\n    <dependency>\n      <groupId>org.scannotation</groupId>\n      <artifactId>scannotation</artifactId>\n      <version>1.0.2</version>\n      <exclusions>\n        <exclusion>\n          <artifactId>javassist</artifactId>\n          <groupId>javassist</groupId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>org.snmp4j</groupId>\n      <artifactId>snmp4j</artifactId>\n      <version>1.9.3d</version>\n    </dependency>\n    <dependency>\n      <groupId>org.springframework.security</groupId>\n      <artifactId>spring-security-core</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.springframework</groupId>\n      <artifactId>spring-core</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.springframework</groupId>\n      <artifactId>spring-beans</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.springframework</groupId>\n      <artifactId>spring-context</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.springframework</groupId>\n      <artifactId>spring-aop</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.springframework</groupId>\n      <artifactId>spring-expression</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.syslog4j</groupId>\n      <artifactId>syslog4j</artifactId>\n      <version>0.9.34</version>\n    </dependency>\n    <dependency>\n      <groupId>org.w3c.css</groupId>\n      <artifactId>sac</artifactId>\n      <version>1.3</version>\n    </dependency>\n    <dependency>\n      <groupId>org.xerial.snappy</groupId>\n      <artifactId>snappy-java</artifactId>\n      <version>${snappy-java.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.yaml</groupId>\n      
<artifactId>snakeyaml</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>pentaho-encryption-support</artifactId>\n      <version>${encryption-support.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <exclusions>\n        <exclusion>\n          <artifactId>jackson-jaxrs</artifactId>\n          <groupId>org.codehaus.jackson</groupId>\n        </exclusion>\n        <exclusion>\n          <artifactId>jackson-core-asl</artifactId>\n          <groupId>org.codehaus.jackson</groupId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-ui-swt</artifactId>\n      <version>${pdi.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.reporting.library</groupId>\n      <artifactId>flute</artifactId>\n      <version>${pentaho-reporting.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.reporting.library</groupId>\n      <artifactId>libbase</artifactId>\n      <version>${pentaho-reporting.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.reporting.library</groupId>\n      <artifactId>libdocbundle</artifactId>\n      <version>${pentaho-reporting.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.reporting.library</groupId>\n      <artifactId>libfonts</artifactId>\n      <version>${pentaho-reporting.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.reporting.library</groupId>\n      <artifactId>libformat</artifactId>\n      <version>${pentaho-reporting.version}</version>\n    
</dependency>\n    <dependency>\n      <groupId>org.pentaho.reporting.library</groupId>\n      <artifactId>libformula</artifactId>\n      <version>${pentaho-reporting.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.reporting.library</groupId>\n      <artifactId>libloader</artifactId>\n      <version>${pentaho-reporting.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.reporting.library</groupId>\n      <artifactId>libpixie</artifactId>\n      <version>${pentaho-reporting.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.reporting.library</groupId>\n      <artifactId>librepository</artifactId>\n      <version>${pentaho-reporting.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.reporting.library</groupId>\n      <artifactId>libserializer</artifactId>\n      <version>${pentaho-reporting.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.reporting.library</groupId>\n      <artifactId>libswing</artifactId>\n      <version>${pentaho-reporting.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.reporting.library</groupId>\n      <artifactId>libxml</artifactId>\n      <version>${pentaho-reporting.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>simple-jndi</artifactId>\n      <version>${simple-jndi.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-vfs-browser</artifactId>\n      <version>${pdi.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api-core</artifactId>\n      
<version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>metastore</artifactId>\n      <version>${metastore.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>mondrian</artifactId>\n      <version>${mondrian.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>commons-database-model</artifactId>\n      <version>${commons-database.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>pentaho-metadata</artifactId>\n      <version>${pentaho-metadata.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>pentaho-registry</artifactId>\n      <version>${pentaho-registry.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.reporting.engine</groupId>\n      <artifactId>classic-core</artifactId>\n      <version>${pentaho-reporting.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>commons-xul-core</artifactId>\n      <version>${commons-xul.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.mozilla</groupId>\n      <artifactId>rhino</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>stax</groupId>\n      <artifactId>stax</artifactId>\n      <version>1.2.0</version>\n    </dependency>\n    <dependency>\n      <groupId>stax</groupId>\n      <artifactId>stax-api</artifactId>\n      <version>1.0.1</version>\n    </dependency>\n    <dependency>\n      <groupId>sun</groupId>\n      <artifactId>jlfgr</artifactId>\n      <version>1.0</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.sshd</groupId>\n      <artifactId>sshd-core</artifactId>\n    </dependency>\n    <dependency>\n      
<groupId>org.apache.sshd</groupId>\n      <artifactId>sshd-sftp</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>woodstox</groupId>\n      <artifactId>wstx-asl</artifactId>\n      <version>3.2.4</version>\n    </dependency>\n    <dependency>\n      <groupId>wsdl4j</groupId>\n      <artifactId>wsdl4j</artifactId>\n      <version>1.6.2</version>\n    </dependency>\n    <dependency>\n      <groupId>wsdl4j</groupId>\n      <artifactId>wsdl4j-qname</artifactId>\n      <version>1.6.1</version>\n    </dependency>\n    <dependency>\n      <groupId>xml-apis</groupId>\n      <artifactId>xml-apis</artifactId>\n      <version>2.0.2</version>\n    </dependency>\n    <dependency>\n      <groupId>xml-apis</groupId>\n      <artifactId>xml-apis-ext</artifactId>\n      <version>1.3.04</version>\n    </dependency>\n    <dependency>\n      <groupId>xmlpull</groupId>\n      <artifactId>xmlpull</artifactId>\n      <version>1.1.3.1</version>\n    </dependency>\n    <dependency>\n      <groupId>xpp3</groupId>\n      <artifactId>xpp3_min</artifactId>\n      <version>1.1.4c</version>\n    </dependency>\n    <dependency>\n      <groupId>commons-logging</groupId>\n      <artifactId>commons-logging</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.httpcomponents</groupId>\n      <artifactId>httpclient</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.httpcomponents.core5</groupId>\n      <artifactId>httpcore5</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.httpcomponents.client5</groupId>\n      <artifactId>httpclient5</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.platform</groupId>\n      <artifactId>org.eclipse.swt.gtk.linux.x86_64</artifactId>\n      <version>${org.eclipse.swt.gtk.linux.x86_64.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.jetty</groupId>\n      <artifactId>jetty-http</artifactId>\n      
<version>${jetty-hadoop.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.jetty</groupId>\n      <artifactId>jetty-continuation</artifactId>\n      <version>${jetty-hadoop.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.jetty</groupId>\n      <artifactId>jetty-io</artifactId>\n      <version>${jetty-hadoop.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.jetty</groupId>\n      <artifactId>jetty-plus</artifactId>\n      <version>${jetty-hadoop.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.jetty</groupId>\n      <artifactId>jetty-security</artifactId>\n      <version>${jetty-hadoop.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.jetty</groupId>\n      <artifactId>jetty-server</artifactId>\n      <version>${jetty-hadoop.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.jetty</groupId>\n      <artifactId>jetty-servlet</artifactId>\n      <version>${jetty-hadoop.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.jetty</groupId>\n      <artifactId>jetty-util</artifactId>\n      <version>${jetty-hadoop.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.jetty</groupId>\n      <artifactId>jetty-xml</artifactId>\n      <version>${jetty-hadoop.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-api</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.logging.log4j</groupId>\n      <artifactId>log4j-slf4j2-impl</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-platform-api</artifactId>\n      
<version>${platform.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-platform-core</artifactId>\n      <version>${platform.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-platform-extensions</artifactId>\n      <version>${platform.version}</version>\n      <exclusions>\n        <exclusion>\n          <artifactId>xbean</artifactId>\n          <groupId>org.apache.xbean</groupId>\n        </exclusion>\n        <exclusion>\n          <artifactId>jackson-core-asl</artifactId>\n          <groupId>org.codehaus.jackson</groupId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-metaverse-api</artifactId>\n      <version>${pentaho-metaverse.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.avro</groupId>\n      <artifactId>avro-mapred</artifactId>\n      <version>${org.apache.avro.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.di.plugins</groupId>\n      <artifactId>pentaho-metastore-locator-api</artifactId>\n      <version>${pdi.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.di.plugins</groupId>\n      <artifactId>pentaho-kettle-repository-locator-api</artifactId>\n      <version>${pdi.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.gwtproject</groupId>\n      <artifactId>gwt-user</artifactId>\n      <version>${gwt.version}</version>\n    </dependency>\n  </dependencies>\n  <build>\n    <plugins>\n      <plugin>\n        <artifactId>maven-resources-plugin</artifactId>\n        <version>${dependency.maven-resources-plugin.revision}</version>\n        <executions>\n   
       <execution>\n            <id>filter</id>\n            <phase>generate-resources</phase>\n            <goals>\n              <goal>resources</goal>\n            </goals>\n          </execution>\n        </executions>\n        <inherited>false</inherited>\n      </plugin>\n      <plugin>\n        <artifactId>maven-assembly-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>assembly</id>\n            <phase>package</phase>\n            <goals>\n              <goal>single</goal>\n            </goals>\n            <configuration>\n              <descriptors>\n                <descriptor>src/main/descriptors/assembly.xml</descriptor>\n              </descriptors>\n              <finalName>${project.artifactId}</finalName>\n              <appendAssemblyId>true</appendAssemblyId>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "assemblies/pmr-libraries/src/main/descriptors/assembly.xml",
    "content": "<assembly>\n  <id>plugin</id>\n  <baseDirectory></baseDirectory>\n  <formats>\n    <format>zip</format>\n  </formats>\n  <fileSets>\n    <fileSet>\n      <directory>target/classes</directory>\n      <outputDirectory>.</outputDirectory>\n    </fileSet>\n    <fileSet>\n      <directory>target/simple-jndi</directory>\n      <outputDirectory>.</outputDirectory>\n    </fileSet>\n  </fileSets>\n  <dependencySets>\n    <dependencySet>\n      <outputDirectory>lib</outputDirectory>\n      <unpack>false</unpack>\n      <scope>runtime</scope>\n      <useProjectArtifact>false</useProjectArtifact>\n      <outputFileNameMapping>${artifact.artifactId}-${artifact.baseVersion}.${artifact.extension}\n      </outputFileNameMapping>\n      <includes>\n        <include>io.reactivex.rxjava2:rxjava</include>\n        <include>ascsapjco3wrp:ascsapjco3wrp</include>\n        <include>asm:asm</include>\n        <include>org.bouncycastle:bcmail-jdk15to18</include>\n        <include>org.bouncycastle:bcprov-jdk15to18</include>\n        <include>org.bouncycastle:bcutil-jdk15to18</include>\n        <include>org.bouncycastle:bcpkix-jdk15to18</include>\n        <include>bsf:bsf</include>\n        <include>cglib:cglib-nodep</include>\n        <include>com.enterprisedt:edtftpj</include>\n        <include>com.google.code.findbugs:jsr305</include>\n        <include>com.googlecode.jsendnsca:jsendnsca</include>\n        <include>com.googlecode.json-simple:json-simple</include>\n        <include>com.google.gdata:core</include>\n        <include>com.google.gdata:gdata-analytics</include>\n        <include>com.google.gdata:gdata-analytics-meta</include>\n        <include>com.google.gdata:gdata-client</include>\n        <include>com.google.gdata:gdata-client-meta</include>\n        <include>com.google.gdata:gdata-core</include>\n        <include>com.google.guava:guava</include>\n        <include>com.healthmarketscience.jackcess:jackcess</include>\n        
<include>com.github.mwiede:jsch</include>\n        <include>com.github.librepdf:openpdf</include>\n        <include>com.github.librepdf:openrtf</include>\n        <include>com.tinkerpop.blueprints:blueprints-core</include>\n        <include>commons-beanutils:commons-beanutils</include>\n        <include>commons-cli:commons-cli</include>\n        <include>commons-collections:commons-collections</include>\n        <include>commons-configuration:commons-configuration</include>\n        <include>org.apache.commons:commons-dbcp2</include>\n        <include>commons-digester:commons-digester</include>\n        <include>commons-discovery:commons-discovery</include>\n        <include>commons-fileupload:commons-fileupload</include>\n        <include>org.apache.httpcomponents:httpclient</include>\n        <include>org.apache.httpcomponents:httpcore</include>\n        <include>org.apache.httpcomponents.client5:httpclient5</include>\n        <include>org.apache.httpcomponents.core5:httpcore5</include>\n        <include>commons-io:commons-io</include>\n        <include>commons-lang:commons-lang</include>\n        <include>commons-logging:commons-logging</include>\n        <include>commons-math:commons-math</include>\n        <include>org.apache.commons:commons-pool2</include>\n        <include>commons-validator:commons-validator</include>\n        <include>org.apache.commons:commons-configuration2</include>\n        <include>org.apache.commons:commons-vfs2</include>\n        <include>org.apache.logging.log4j:log4j-core</include>\n        <include>org.apache.logging.log4j:log4j-api</include>\n        <include>com.sun.jersey.contribs:jersey-apache-client</include>\n        <include>com.sun.jersey:jersey-bundle</include>\n        <include>com.sun.jersey:jersey-client</include>\n        <include>com.sun.jersey:jersey-core</include>\n        <include>xmlpull:xmlpull</include>\n        <include>xpp3:xpp3_min</include>\n        <include>com.thoughtworks.xstream:xstream</include>\n      
  <include>org.dom4j:dom4j</include>\n        <include>eigenbase:eigenbase-properties</include>\n        <include>eigenbase:eigenbase-resgen</include>\n        <include>eigenbase:eigenbase-xom</include>\n        <include>feed4j:feed4j</include>\n        <include>ftp4che:ftp4che</include>\n        <include>infobright:infobright-core</include>\n        <include>org.codehaus.janino:janino</include>\n        <include>org.codehaus.janino:commons-compiler</include>\n        <include>javacup:javacup</include>\n        <include>javadbf:javadbf</include>\n        <include>org.javassist:javassist</include>\n        <include>javax.activation:activation</include>\n        <include>com.sun.mail:javax.mail</include>\n        <include>javax.servlet:jsp-api</include>\n        <include>javax.servlet:javax.servlet-api</include>\n        <include>javax.validation:validation-api</include>\n        <include>javax.xml:jaxrpc-api</include>\n        <include>jakarta.servlet:jakarta.servlet-api</include>\n        <include>jaxen:jaxen</include>\n        <include>jcifs:jcifs</include>\n        <include>jexcelapi:jxl</include>\n        <include>jfree:jcommon</include>\n        <include>jfree:jfreechart</include>\n        <include>jsonpath:jsonpath</include>\n        <include>jug-lgpl:jug-lgpl</include>\n        <include>ldapjdk:ldapjdk</include>\n        <include>monetdb:monetdb-jdbc</include>\n        <include>net.java.dev.javacc:javacc</include>\n        <include>net.sf.ehcache:ehcache-core</include>\n        <include>net.sf.saxon:Saxon-HE</include>\n        <include>org.xmlresolver:xmlresolver</include>\n        <include>net.sf.scannotation:scannotation</include>\n        <include>ognl:ognl</include>\n        <include>org.antlr:antlr-complete</include>\n        <include>org.apache.commons:commons-compress</include>\n        <include>org.apache.poi:poi</include>\n        <include>org.apache.poi:poi-ooxml</include>\n        <include>org.apache.poi:poi-ooxml-schemas</include>\n        
<include>org.apache.commons:commons-collections4</include>\n        <include>org.apache.xmlbeans:xmlbeans</include>\n        <include>org.apache.xmlgraphics:batik-anim</include>\n        <include>org.apache.xmlgraphics:batik-awt-util</include>\n        <include>org.apache.xmlgraphics:batik-bridge</include>\n        <include>org.apache.xmlgraphics:batik-codec</include>\n        <include>org.apache.xmlgraphics:batik-css</include>\n        <include>org.apache.xmlgraphics:batik-dom</include>\n        <include>org.apache.xmlgraphics:batik-ext</include>\n        <include>org.apache.xmlgraphics:batik-gui-util</include>\n        <include>org.apache.xmlgraphics:batik-gvt</include>\n        <include>org.apache.xmlgraphics:batik-parser</include>\n        <include>org.apache.xmlgraphics:batik-script</include>\n        <include>org.apache.xmlgraphics:batik-svg-dom</include>\n        <include>org.apache.xmlgraphics:batik-transcoder</include>\n        <include>org.apache.xmlgraphics:batik-util</include>\n        <include>org.apache.xmlgraphics:batik-xml</include>\n        <include>org.apache.xmlgraphics:batik-constants</include>\n        <include>org.apache.xmlgraphics:batik-i18n</include>\n        <include>org.apache-extras.beanshell:bsh</include>\n        <include>org.codehaus.groovy:groovy</include>\n        <include>org.codehaus.jackson:jackson-core-asl</include>\n        <include>org.codehaus.jackson:jackson-jaxrs</include>\n        <include>org.drools:drools-ruleunits-engine</include>\n        <include>org.drools:drools-compiler</include>\n        <include>org.drools:drools-core</include>\n        <include>org.eclipse.platform:org.eclipse.swt.gtk.linux.x86_64</include>\n        <include>org.eclipse.jetty:jetty-continuation</include>\n        <include>org.eclipse.jetty:jetty-http</include>\n        <include>org.eclipse.jetty:jetty-io</include>\n        <include>org.eclipse.jetty:jetty-plus</include>\n        <include>org.eclipse.jetty:jetty-security</include>\n        
<include>org.eclipse.jetty:jetty-server</include>\n        <include>org.eclipse.jetty:jetty-servlet</include>\n        <include>org.eclipse.jetty:jetty-util</include>\n        <include>org.eobjects.sassyreader:SassyReader</include>\n        <include>org.fife.ui:rsyntaxtextarea</include>\n        <include>org.hibernate.common:hibernate-commons-annotations</include>\n        <include>org.hibernate:hibernate-core</include>\n        <include>org.hibernate:hibernate-ehcache</include>\n        <include>org.pentaho:json</include>\n        <include>org.apache.logging.log4j:log4j-core</include>\n        <include>org.mnode.mstor:mstor</include>\n        <include>org.mvel:mvel2</include>\n        <include>org.odftoolkit:odfdom-java</include>\n        <include>org.olap4j:olap4j</include>\n        <include>org.olap4j:olap4j-xmla</include>\n        <include>org.owasp.encoder:encoder</include>\n        <include>org.postgresql:postgresql</include>\n        <include>org.safehaus.jug:jug-lgpl</include>\n        <include>org.samba.jcifs:jcifs</include>\n        <include>org.scannotation:scannotation</include>\n        <include>org.slf4j:slf4j-api</include>\n        <include>org.apache.logging.log4j:log4j-slf4j2-impl</include>\n        <include>org.snmp4j:snmp4j</include>\n        <include>org.springframework.security:spring-security-core</include>\n        <include>org.springframework:spring-core</include>\n        <include>org.springframework:spring-beans</include>\n        <include>org.springframework:spring-context</include>\n        <include>org.springframework:spring-aop</include>\n        <include>org.springframework:spring-expression</include>\n        <include>org.syslog4j:syslog4j</include>\n        <include>org.w3c.css:sac</include>\n        <include>org.xerial.snappy:snappy-java</include>\n        <include>org.yaml:snakeyaml</include>\n        <include>pentaho-kettle:kettle-core</include>\n        <include>pentaho-kettle:kettle-engine</include>\n        
<include>pentaho-kettle:kettle-ui-swt</include>\n        <include>org.osgi:osgi.core</include>\n        <include>org.pentaho:commons-xul-core</include>\n        <include>org.pentaho.reporting.library:flute</include>\n        <include>org.pentaho.reporting.library:libbase</include>\n        <include>org.pentaho.reporting.library:libdocbundle</include>\n        <include>org.pentaho.reporting.library:libfonts</include>\n        <include>org.pentaho.reporting.library:libformat</include>\n        <include>org.pentaho.reporting.library:libformula</include>\n        <include>org.pentaho.reporting.library:libloader</include>\n        <include>org.pentaho.reporting.library:libpixie</include>\n        <include>org.pentaho.reporting.library:librepository</include>\n        <include>org.pentaho.reporting.library:libserializer</include>\n        <include>org.pentaho.reporting.library:libswing</include>\n        <include>org.pentaho.reporting.library:libxml</include>\n        <include>pentaho:pentaho-vfs-browser</include>\n        <include>pentaho:pentaho-platform-api</include>\n        <include>pentaho:pentaho-platform-core</include>\n        <include>pentaho:pentaho-platform-extensions</include>\n        <include>pentaho:pentaho-xul-core</include>\n        <include>pentaho:metastore</include>\n        <include>pentaho:mondrian</include>\n        <include>org.pentaho:commons-database-model</include>\n        <include>org.pentaho:pentaho-metadata</include>\n        <include>org.pentaho:shim-api</include>\n        <include>org.pentaho:shim-api-core</include>\n        <include>org.pentaho:pentaho-registry</include>\n        <include>pentaho:simple-jndi</include>\n        <include>org.pentaho:pentaho-encryption-support</include>\n        <include>org.pentaho.reporting.engine:classic-core</include>\n        <include>org.mozilla:rhino</include>\n        <include>com.wcohen:com.wcohen.secondstring</include>\n        <include>stax:stax</include>\n        
<include>stax:stax-api</include>\n        <include>sun:jlfgr</include>\n        <include>org.apache.sshd:sshd-core</include>\n        <include>org.apache.sshd:sshd-sftp</include>\n        <include>woodstox:wstx-asl</include>\n        <include>wsdl4j:wsdl4j</include>\n        <include>wsdl4j:wsdl4j-qname</include>\n        <include>xml-apis:xml-apis</include>\n        <include>xml-apis:xml-apis-ext</include>\n        <include>org.apache.kafka:kafka-clients</include>\n        <include>org.mortbay.jetty:servlet-api</include>\n        <include>pentaho:pentaho-metaverse-api</include>\n        <include>org.apache.avro:avro-mapred</include>\n        <include>org.pentaho.di.plugins:pentaho-metastore-locator-api</include>\n        <include>org.pentaho.di.plugins:pentaho-kettle-repository-locator-api</include>\n        <include>pentaho:pentaho-authentication-mapper-api</include>\n        <include>pentaho:pentaho-authentication-mapper-impl</include>\n        <include>org.pentaho.hadoop.shims:pentaho-hadoop-shims-common-dependencies</include>\n        <include>org.pentaho.hadoop.shims:pentaho-hadoop-shims-common-base</include>\n        <include>org.gwtproject:gwt-user</include>\n      </includes>\n    </dependencySet>\n  </dependencySets>\n</assembly>\n"
  },
  {
    "path": "assemblies/pmr-libraries/src/main/resources/classes/kettle-lifecycle-listeners.xml",
    "content": "<listeners>\n\n</listeners>"
  },
  {
    "path": "assemblies/pmr-libraries/src/main/resources/classes/kettle-password-encoder-plugins.xml",
    "content": "<password-encoder-plugins>\n\n  <password-encoder-plugin id=\"Kettle\">\n    <description>Kettle Password Encoder</description>\n\t<classname>org.pentaho.support.encryption.KettleTwoWayPasswordEncoder</classname>\n  </password-encoder-plugin>\n \n</password-encoder-plugins>"
  },
  {
    "path": "assemblies/pmr-libraries/src/main/resources/classes/kettle-registry-extensions.xml",
    "content": "<registry-extensions>\n</registry-extensions>"
  },
  {
    "path": "assemblies/pmr-libraries/src/main/resources/classes/log4j2.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<Configuration status=\"Warn\" monitorInterval=\"30\">\n\n    <Appenders>\n\n        <RollingFile name=\"PENTAHOFILE\"\n                     fileName=\"logs/pdi.log\"\n                     filePattern=\"pdi_%d{MM-dd-yyyy}-%i.log\"\n                     append=\"false\">\n            <PatternLayout>\n                <Pattern>%d %-5p [%c] %m%n</Pattern>\n            </PatternLayout>\n            <Policies>\n                <OnStartupTriggeringPolicy />\n                <SizeBasedTriggeringPolicy size=\"20 MB\" />\n                <TimeBasedTriggeringPolicy interval=\"1\"/>\n            </Policies>\n        </RollingFile>\n\n        <Console name=\"PENTAHOCONSOLE\">\n            <PatternLayout>\n                <Pattern>%d{ABSOLUTE} %-5p [%c{1}] %m%n</Pattern>\n            </PatternLayout>\n        </Console>\n    </Appenders>\n\n    <Loggers>\n        <Logger>\n            <Logger name=\"pentaho\" level=\"info\" additivity=\"false\">\n                <appender-ref ref=\"PENTAHOFILE\" />\n            </Logger>\n            <Root level=\"info\">\n                <appender-ref ref=\"PENTAHOCONSOLE\" />\n            </Root>\n        </Logger>\n    </Loggers>\n</Configuration>"
  },
  {
    "path": "assemblies/pmr-libraries/src/main/resources/classes/org/apache/commons/vfs2/impl/providers.xml",
    "content": "<!--\n    Licensed to the Apache Software Foundation (ASF) under one or more\n    contributor license agreements.  See the NOTICE file distributed with\n    this work for additional information regarding copyright ownership.\n    The ASF licenses this file to You under the Apache License, Version 2.0\n    (the \"License\"); you may not use this file except in compliance with\n    the License.  You may obtain a copy of the License at\n\n         http://www.apache.org/licenses/LICENSE-2.0\n\n    Unless required by applicable law or agreed to in writing, software\n    distributed under the License is distributed on an \"AS IS\" BASIS,\n    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n    See the License for the specific language governing permissions and\n    limitations under the License.\n-->\n<providers>\n  <default-provider class-name=\"org.apache.commons.vfs2.provider.url.UrlFileProvider\">\n  </default-provider>\n  <provider class-name=\"org.apache.commons.vfs2.provider.local.DefaultLocalFileProvider\">\n    <scheme name=\"file\"/>\n  </provider>\n  <provider class-name=\"org.apache.commons.vfs2.provider.zip.ZipFileProvider\">\n    <scheme name=\"zip\"/>\n  </provider>\n  <provider class-name=\"org.apache.commons.vfs2.provider.tar.TarFileProvider\">\n    <scheme name=\"tar\"/>\n    <if-available class-name=\"org.apache.commons.compress.archivers.tar.TarArchiveOutputStream\"/>\n  </provider>\n\n  <provider class-name=\"org.apache.commons.vfs2.provider.bzip2.Bzip2FileProvider\">\n    <scheme name=\"bz2\"/>\n    <if-available class-name=\"org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream\"/>\n  </provider>\n  <provider class-name=\"org.apache.commons.vfs2.provider.gzip.GzipFileProvider\">\n    <scheme name=\"gz\"/>\n  </provider>\n\n  <provider class-name=\"org.apache.commons.vfs2.provider.jar.JarFileProvider\">\n    <scheme name=\"jar\"/>\n    <scheme name=\"sar\"/>\n    <scheme name=\"ear\"/>\n   
 <scheme name=\"par\"/>\n    <scheme name=\"ejb3\"/>\n    <scheme name=\"war\"/>\n  </provider>\n  <provider class-name=\"org.apache.commons.vfs2.provider.temp.TemporaryFileProvider\">\n    <scheme name=\"tmp\"/>\n  </provider>\n  <provider class-name=\"org.apache.commons.vfs2.provider.ftp.FtpFileProvider\">\n    <scheme name=\"ftp\"/>\n    <if-available class-name=\"org.apache.commons.net.ftp.FTPFile\"/>\n  </provider>\n  <provider class-name=\"org.apache.commons.vfs2.provider.ftps.FtpsFileProvider\">\n    <scheme name=\"ftps\"/>\n    <if-available class-name=\"org.apache.commons.net.ftp.FTPFile\"/>\n  </provider>\n  <provider class-name=\"org.apache.commons.vfs2.provider.http4.Http4FileProvider\">\n    <scheme name=\"http\"/>\n    <if-available class-name=\"org.apache.http.client.HttpClient\"/>\n  </provider>\n  <provider class-name=\"org.apache.commons.vfs2.provider.http4s.Http4sFileProvider\">\n    <scheme name=\"https\"/>\n    <if-available class-name=\"org.apache.http.client.HttpClient\"/>\n  </provider>\n  <provider class-name=\"org.apache.commons.vfs2.provider.sftp.SftpFileProvider\">\n    <scheme name=\"sftp\"/>\n    <if-available class-name=\"javax.crypto.Cipher\"/>\n    <if-available class-name=\"com.jcraft.jsch.JSch\"/>\n  </provider>\n  <provider class-name=\"org.apache.commons.vfs2.provider.res.ResourceFileProvider\">\n    <scheme name=\"res\"/>\n  </provider>\n  <provider class-name=\"org.apache.commons.vfs2.provider.webdav.WebdavFileProvider\">\n    <scheme name=\"webdav\"/>\n    <if-available class-name=\"org.apache.http.client.HttpClient\"/>\n    <if-available class-name=\"org.apache.jackrabbit.webdav.client.methods.DavMethod\"/>\n  </provider>\n  <!--\n  <provider class-name=\"org.apache.commons.vfs2.provider.svn.SvnFileProvider\">\n      <scheme name=\"svnhttps\"/>\n  </provider>\n  -->\n  <!--\n      <provider class-name=\"org.apache.commons.vfs2.provider.tar.TgzFileProvider\">\n          <scheme name=\"tgz\"/>\n          <if-available 
scheme=\"gz\"/>\n          <if-available scheme=\"tar\"/>\n      </provider>\n      <provider class-name=\"org.apache.commons.vfs2.provider.tar.Tbz2FileProvider\">\n          <scheme name=\"tbz2\"/>\n          <if-available scheme=\"bz2\"/>\n          <if-available scheme=\"tar\"/>\n      </provider>\n  -->\n  <provider class-name=\"org.apache.commons.vfs2.provider.tar.TarFileProvider\">\n    <scheme name=\"tgz\"/>\n    <if-available scheme=\"gz\"/>\n    <if-available scheme=\"tar\"/>\n  </provider>\n  <provider class-name=\"org.apache.commons.vfs2.provider.tar.TarFileProvider\">\n    <scheme name=\"tbz2\"/>\n    <if-available scheme=\"bz2\"/>\n    <if-available scheme=\"tar\"/>\n  </provider>\n  <provider class-name=\"org.apache.commons.vfs2.provider.ram.RamFileProvider\">\n    <scheme name=\"ram\"/>\n  </provider>\n\n  <extension-map extension=\"zip\" scheme=\"zip\"/>\n  <extension-map extension=\"tar\" scheme=\"tar\"/>\n  <mime-type-map mime-type=\"application/zip\" scheme=\"zip\"/>\n  <mime-type-map mime-type=\"application/x-tar\" scheme=\"tar\"/>\n  <mime-type-map mime-type=\"application/x-gzip\" scheme=\"gz\"/>\n  <!--\n  <mime-type-map mime-type=\"application/x-tgz\" scheme=\"tgz\"/>\n  -->\n  <extension-map extension=\"jar\" scheme=\"jar\"/>\n  <extension-map extension=\"bz2\" scheme=\"bz2\"/>\n  <extension-map extension=\"gz\" scheme=\"gz\"/>\n  <!--\n  <extension-map extension=\"tgz\" scheme=\"tgz\"/>\n  <extension-map extension=\"tbz2\" scheme=\"tbz2\"/>\n  -->\n  <extension-map extension=\"tgz\" scheme=\"tar\"/>\n  <extension-map extension=\"tbz2\" scheme=\"tar\"/>\n\n  <!--\n  <filter-map class-name=\"org.apache.commons.vfs2.content.bzip2.Bzip2Compress\">\n      <extension name=\"bz2\"/>\n      <extension name=\"tbz2\"/>\n      <if-available class-name=\"org.apache.commons.compress.bzip2.CBZip2InputStream\"/>\n  </filter-map>\n  <filter-map class-name=\"org.apache.commons.vfs2.content.gzip.GzipCompress\">\n      <extension name=\"gz\"/>\n      
<extension name=\"tgz\"/>\n      <mime-type name=\"application/x-tgz\" />\n  </filter-map>\n  -->\n</providers>\n"
  },
  {
    "path": "assemblies/pmr-libraries/src/main/resources/classes/pmr.properties",
    "content": "isPmr=true\nnotificationsBeforeLoadingShim=1\n"
  },
  {
    "path": "assemblies/pmr-libraries/src/main/resources/simple-jndi/jdbc.properties",
    "content": "SampleData/type=javax.sql.DataSource\nSampleData/driver=org.h2.Driver\nSampleData/url=jdbc:h2:file:samples/db/sampledb;IFEXISTS=TRUE\nSampleData/user=PENTAHO_USER\nSampleData/password=PASSWORD\nQuartz/type=javax.sql.DataSource\nQuartz/driver=org.hsqldb.jdbcDriver\nQuartz/url=jdbc:hsqldb:hsql://localhost/quartz\nQuartz/user=pentaho_user\nQuartz/password=password\nHibernate/type=javax.sql.DataSource\nHibernate/driver=org.hsqldb.jdbcDriver\nHibernate/url=jdbc:hsqldb:hsql://localhost/hibernate\nHibernate/user=hibuser\nHibernate/password=password\nShark/type=javax.sql.DataSource\nShark/driver=org.hsqldb.jdbcDriver\nShark/url=jdbc:hsqldb:hsql://localhost/shark\nShark/user=sa\nShark/password=\nPDI_Operations_Mart/type=javax.sql.DataSource\nPDI_Operations_Mart/driver=org.postgresql.Driver\nPDI_Operations_Mart/url=jdbc:postgresql://localhost:5432/hibernate?searchpath=pentaho_operations_mart\nPDI_Operations_Mart/user=hibuser\nPDI_Operations_Mart/password=password\nlive_logging_info/type=javax.sql.DataSource\nlive_logging_info/driver=org.postgresql.Driver\nlive_logging_info/url=jdbc:postgresql://localhost:5432/hibernate?searchpath=pentaho_dilogs\nlive_logging_info/user=hibuser\nlive_logging_info/password=password\n"
  },
  {
    "path": "assemblies/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-parent</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-assemblies</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n\n  <modules>\n    <module>samples</module>\n    <module>pmr-libraries</module>\n    <!-- pentaho-big-data-plugin must be last since it depends on samples and pmr-libraries -->\n    <module>pentaho-big-data-plugin</module>\n  </modules>\n\n  <build>\n    <plugins>\n      <!-- Ensure proper build order and validate Maven/Java versions -->\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-enforcer-plugin</artifactId>\n        <version>3.0.0</version>\n        <executions>\n          <execution>\n            <id>enforce-versions</id>\n            <goals>\n              <goal>enforce</goal>\n            </goals>\n            <configuration>\n              <rules>\n                <requireMavenVersion>\n                  <version>[3.6.0,)</version>\n                </requireMavenVersion>\n                <requireJavaVersion>\n                  <version>[11,)</version>\n                </requireJavaVersion>\n              </rules>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n    </plugins>\n  </build>\n\n  <profiles>\n    <!-- Profile to skip tests during assembly builds -->\n    <profile>\n      <id>assembly-skip-tests</id>\n      <activation>\n        <property>\n          <name>skipTests</name>\n          <value>true</value>\n        </property>\n      </activation>\n      <properties>\n        
<maven.test.skip>true</maven.test.skip>\n      </properties>\n    </profile>\n\n    <!-- Profile to build only the pentaho-big-data-plugin assembly -->\n    <profile>\n      <id>plugin-only</id>\n      <modules>\n        <module>pentaho-big-data-plugin</module>\n      </modules>\n    </profile>\n    <!-- Profile to build all assemblies except pentaho-big-data-plugin -->\n    <profile>\n      <id>non-big-data-plugin</id>\n      <modules>\n        <module>samples</module>\n        <module>pmr-libraries</module>\n      </modules>\n    </profile>\n  </profiles>\n</project>\n"
  },
  {
    "path": "assemblies/samples/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-assemblies</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-plugin-samples</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n  <scm>\n    <connection>scm:git:git@github.com:${github.user}/${project.artifactId}.git</connection>\n    <developerConnection>scm:git:git@github.com:${github.user}/${project.artifactId}.git</developerConnection>\n    <url>scm:git:git@github.com:${github.user}/${project.artifactId}.git</url>\n  </scm>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n  </properties>\n  <build>\n    <plugins>\n      <plugin>\n        <artifactId>maven-assembly-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>pkg</id>\n            <phase>package</phase>\n            <goals>\n              <goal>single</goal>\n            </goals>\n          </execution>\n        </executions>\n        <configuration>\n          <appendAssemblyId>false</appendAssemblyId>\n          <descriptors>\n            <descriptor>${basedir}/src/main/assembly/descriptors/samples.xml</descriptor>\n          </descriptors>\n        </configuration>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "assemblies/samples/src/main/assembly/descriptors/samples.xml",
    "content": "<assembly>\n    <id>package</id>\n    <baseDirectory>samples</baseDirectory>\n    <formats>\n        <format>zip</format>\n    </formats>\n    <fileSets>\n        <fileSet>\n            <directory>${project.basedir}/src/main/resources</directory>\n            <outputDirectory>.</outputDirectory>\n        </fileSet>\n    </fileSets>\n</assembly>\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/.kettle-ignore",
    "content": ""
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/Hadoop Job Executor 2 adv.kjb",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<job>\n  <name>Hadoop Job Executor 2 adv</name>\n  <description/>\n  <extended_description/>\n  <job_version/>\n  <job_status>0</job_status>\n  <directory>/</directory>\n  <created_user>-</created_user>\n  <created_date>2010/07/12 13:45:27.737</created_date>\n  <modified_user>-</modified_user>\n  <modified_date>2010/07/12 13:45:27.737</modified_date>\n  <parameters>\n    </parameters>\n  <slaveservers>\n    </slaveservers>\n  <job-log-table>\n    <connection/>\n    <schema/>\n    <table/>\n    <size_limit_lines/>\n    <interval/>\n    <timeout_days/>\n    <field>\n      <id>ID_JOB</id>\n      <enabled>Y</enabled>\n      <name>ID_JOB</name>\n    </field>\n    <field>\n      <id>CHANNEL_ID</id>\n      <enabled>Y</enabled>\n      <name>CHANNEL_ID</name>\n    </field>\n    <field>\n      <id>JOBNAME</id>\n      <enabled>Y</enabled>\n      <name>JOBNAME</name>\n    </field>\n    <field>\n      <id>STATUS</id>\n      <enabled>Y</enabled>\n      <name>STATUS</name>\n    </field>\n    <field>\n      <id>LINES_READ</id>\n      <enabled>Y</enabled>\n      <name>LINES_READ</name>\n    </field>\n    <field>\n      <id>LINES_WRITTEN</id>\n      <enabled>Y</enabled>\n      <name>LINES_WRITTEN</name>\n    </field>\n    <field>\n      <id>LINES_UPDATED</id>\n      <enabled>Y</enabled>\n      <name>LINES_UPDATED</name>\n    </field>\n    <field>\n      <id>LINES_INPUT</id>\n      <enabled>Y</enabled>\n      <name>LINES_INPUT</name>\n    </field>\n    <field>\n      <id>LINES_OUTPUT</id>\n      <enabled>Y</enabled>\n      <name>LINES_OUTPUT</name>\n    </field>\n    <field>\n      <id>LINES_REJECTED</id>\n      <enabled>Y</enabled>\n      <name>LINES_REJECTED</name>\n    </field>\n    <field>\n      <id>ERRORS</id>\n      <enabled>Y</enabled>\n      <name>ERRORS</name>\n    </field>\n    <field>\n      <id>STARTDATE</id>\n      <enabled>Y</enabled>\n      <name>STARTDATE</name>\n    </field>\n    <field>\n      
<id>ENDDATE</id>\n      <enabled>Y</enabled>\n      <name>ENDDATE</name>\n    </field>\n    <field>\n      <id>LOGDATE</id>\n      <enabled>Y</enabled>\n      <name>LOGDATE</name>\n    </field>\n    <field>\n      <id>DEPDATE</id>\n      <enabled>Y</enabled>\n      <name>DEPDATE</name>\n    </field>\n    <field>\n      <id>REPLAYDATE</id>\n      <enabled>Y</enabled>\n      <name>REPLAYDATE</name>\n    </field>\n    <field>\n      <id>LOG_FIELD</id>\n      <enabled>Y</enabled>\n      <name>LOG_FIELD</name>\n    </field>\n    <field>\n      <id>EXECUTING_SERVER</id>\n      <enabled>N</enabled>\n      <name>EXECUTING_SERVER</name>\n    </field>\n    <field>\n      <id>EXECUTING_USER</id>\n      <enabled>N</enabled>\n      <name>EXECUTING_USER</name>\n    </field>\n    <field>\n      <id>START_JOB_ENTRY</id>\n      <enabled>N</enabled>\n      <name>START_JOB_ENTRY</name>\n    </field>\n    <field>\n      <id>CLIENT</id>\n      <enabled>N</enabled>\n      <name>CLIENT</name>\n    </field>\n  </job-log-table>\n  <jobentry-log-table>\n    <connection/>\n    <schema/>\n    <table/>\n    <timeout_days/>\n    <field>\n      <id>ID_BATCH</id>\n      <enabled>Y</enabled>\n      <name>ID_BATCH</name>\n    </field>\n    <field>\n      <id>CHANNEL_ID</id>\n      <enabled>Y</enabled>\n      <name>CHANNEL_ID</name>\n    </field>\n    <field>\n      <id>LOG_DATE</id>\n      <enabled>Y</enabled>\n      <name>LOG_DATE</name>\n    </field>\n    <field>\n      <id>JOBNAME</id>\n      <enabled>Y</enabled>\n      <name>TRANSNAME</name>\n    </field>\n    <field>\n      <id>JOBENTRYNAME</id>\n      <enabled>Y</enabled>\n      <name>STEPNAME</name>\n    </field>\n    <field>\n      <id>LINES_READ</id>\n      <enabled>Y</enabled>\n      <name>LINES_READ</name>\n    </field>\n    <field>\n      <id>LINES_WRITTEN</id>\n      <enabled>Y</enabled>\n      <name>LINES_WRITTEN</name>\n    </field>\n    <field>\n      <id>LINES_UPDATED</id>\n      <enabled>Y</enabled>\n      
<name>LINES_UPDATED</name>\n    </field>\n    <field>\n      <id>LINES_INPUT</id>\n      <enabled>Y</enabled>\n      <name>LINES_INPUT</name>\n    </field>\n    <field>\n      <id>LINES_OUTPUT</id>\n      <enabled>Y</enabled>\n      <name>LINES_OUTPUT</name>\n    </field>\n    <field>\n      <id>LINES_REJECTED</id>\n      <enabled>Y</enabled>\n      <name>LINES_REJECTED</name>\n    </field>\n    <field>\n      <id>ERRORS</id>\n      <enabled>Y</enabled>\n      <name>ERRORS</name>\n    </field>\n    <field>\n      <id>RESULT</id>\n      <enabled>Y</enabled>\n      <name>RESULT</name>\n    </field>\n    <field>\n      <id>NR_RESULT_ROWS</id>\n      <enabled>Y</enabled>\n      <name>NR_RESULT_ROWS</name>\n    </field>\n    <field>\n      <id>NR_RESULT_FILES</id>\n      <enabled>Y</enabled>\n      <name>NR_RESULT_FILES</name>\n    </field>\n    <field>\n      <id>LOG_FIELD</id>\n      <enabled>N</enabled>\n      <name>LOG_FIELD</name>\n    </field>\n    <field>\n      <id>COPY_NR</id>\n      <enabled>N</enabled>\n      <name>COPY_NR</name>\n    </field>\n  </jobentry-log-table>\n  <channel-log-table>\n    <connection/>\n    <schema/>\n    <table/>\n    <timeout_days/>\n    <field>\n      <id>ID_BATCH</id>\n      <enabled>Y</enabled>\n      <name>ID_BATCH</name>\n    </field>\n    <field>\n      <id>CHANNEL_ID</id>\n      <enabled>Y</enabled>\n      <name>CHANNEL_ID</name>\n    </field>\n    <field>\n      <id>LOG_DATE</id>\n      <enabled>Y</enabled>\n      <name>LOG_DATE</name>\n    </field>\n    <field>\n      <id>LOGGING_OBJECT_TYPE</id>\n      <enabled>Y</enabled>\n      <name>LOGGING_OBJECT_TYPE</name>\n    </field>\n    <field>\n      <id>OBJECT_NAME</id>\n      <enabled>Y</enabled>\n      <name>OBJECT_NAME</name>\n    </field>\n    <field>\n      <id>OBJECT_COPY</id>\n      <enabled>Y</enabled>\n      <name>OBJECT_COPY</name>\n    </field>\n    <field>\n      <id>REPOSITORY_DIRECTORY</id>\n      <enabled>Y</enabled>\n      <name>REPOSITORY_DIRECTORY</name>\n    
</field>\n    <field>\n      <id>FILENAME</id>\n      <enabled>Y</enabled>\n      <name>FILENAME</name>\n    </field>\n    <field>\n      <id>OBJECT_ID</id>\n      <enabled>Y</enabled>\n      <name>OBJECT_ID</name>\n    </field>\n    <field>\n      <id>OBJECT_REVISION</id>\n      <enabled>Y</enabled>\n      <name>OBJECT_REVISION</name>\n    </field>\n    <field>\n      <id>PARENT_CHANNEL_ID</id>\n      <enabled>Y</enabled>\n      <name>PARENT_CHANNEL_ID</name>\n    </field>\n    <field>\n      <id>ROOT_CHANNEL_ID</id>\n      <enabled>Y</enabled>\n      <name>ROOT_CHANNEL_ID</name>\n    </field>\n  </channel-log-table>\n  <checkpoint-log-table>\n    <connection/>\n    <schema/>\n    <table/>\n    <timeout_days/>\n    <max_nr_retries/>\n    <run_retry_period/>\n    <namespace_parameter/>\n    <save_parameters/>\n    <save_result_rows/>\n    <save_result_files/>\n    <field>\n      <id>ID_JOB_RUN</id>\n      <enabled>Y</enabled>\n      <name>ID_JOB_RUN</name>\n    </field>\n    <field>\n      <id>ID_JOB</id>\n      <enabled>Y</enabled>\n      <name>ID_JOB</name>\n    </field>\n    <field>\n      <id>JOBNAME</id>\n      <enabled>Y</enabled>\n      <name>JOBNAME</name>\n    </field>\n    <field>\n      <id>NAMESPACE</id>\n      <enabled>Y</enabled>\n      <name>NAMESPACE</name>\n    </field>\n    <field>\n      <id>CHECKPOINT_NAME</id>\n      <enabled>Y</enabled>\n      <name>CHECKPOINT_NAME</name>\n    </field>\n    <field>\n      <id>CHECKPOINT_COPYNR</id>\n      <enabled>Y</enabled>\n      <name>CHECKPOINT_COPYNR</name>\n    </field>\n    <field>\n      <id>ATTEMPT_NR</id>\n      <enabled>Y</enabled>\n      <name>ATTEMPT_NR</name>\n    </field>\n    <field>\n      <id>JOB_RUN_START_DATE</id>\n      <enabled>Y</enabled>\n      <name>JOB_RUN_START_DATE</name>\n    </field>\n    <field>\n      <id>LOGDATE</id>\n      <enabled>Y</enabled>\n      <name>LOGDATE</name>\n    </field>\n    <field>\n      <id>RESULT_XML</id>\n      <enabled>Y</enabled>\n      
<name>RESULT_XML</name>\n    </field>\n    <field>\n      <id>PARAMETER_XML</id>\n      <enabled>Y</enabled>\n      <name>PARAMETER_XML</name>\n    </field>\n  </checkpoint-log-table>\n  <pass_batchid>N</pass_batchid>\n  <shared_objects_file/>\n  <entries>\n    <entry>\n      <name>WordCount - Advanced</name>\n      <description/>\n      <type>HadoopJobExecutorPlugin</type>\n      <hadoop_job_name>hadoopjob - wordcount2</hadoop_job_name>\n      <simple>N</simple>\n      <jar_url>./samples/jobs/hadoop/pentaho-mapreduce2-sample.jar</jar_url>\n      <driver_class>org.pentaho.hadoop.sample.wordcount.WordCount2</driver_class>\n      <command_line_args>/wordcount/input /wordcount/output</command_line_args>\n      <simple_blocking>N</simple_blocking>\n      <blocking>Y</blocking>\n      <logging_interval>5</logging_interval>\n      <simple_logging_interval>60</simple_logging_interval>\n      <hadoop_job_name>hadoopjob - wordcount2</hadoop_job_name>\n      <mapper_class>org.pentaho.hadoop.sample.wordcount.WordCount2$Map</mapper_class>\n      <combiner_class>org.pentaho.hadoop.sample.wordcount.WordCount2$Reduce</combiner_class>\n      <reducer_class>org.pentaho.hadoop.sample.wordcount.WordCount2$Reduce</reducer_class>\n      <input_path>/wordcount/input</input_path>\n      <input_format_class>org.apache.hadoop.mapreduce.lib.input.TextInputFormat</input_format_class>\n      <output_path>/wordcount/output</output_path>\n      <output_key_class>org.apache.hadoop.io.Text</output_key_class>\n      <output_value_class>org.apache.hadoop.io.IntWritable</output_value_class>\n      <output_format_class>org.apache.hadoop.mapreduce.lib.output.TextOutputFormat</output_format_class>\n      <hdfs_hostname>localhost</hdfs_hostname>\n      <hdfs_port>8020</hdfs_port>\n      <job_tracker_hostname>localhost</job_tracker_hostname>\n      <job_tracker_port>8032</job_tracker_port>\n      <num_map_tasks>2</num_map_tasks>\n      <num_reduce_tasks>1</num_reduce_tasks>\n      <user_defined_list>\n   
   </user_defined_list>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>478</xloc>\n      <yloc>290</yloc>\n    </entry>\n    <entry>\n      <name>START</name>\n      <description/>\n      <type>SPECIAL</type>\n      <start>Y</start>\n      <dummy>N</dummy>\n      <repeat>N</repeat>\n      <schedulerType>0</schedulerType>\n      <intervalSeconds>0</intervalSeconds>\n      <intervalMinutes>60</intervalMinutes>\n      <hour>12</hour>\n      <minutes>0</minutes>\n      <weekDay>1</weekDay>\n      <DayOfMonth>1</DayOfMonth>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>56</xloc>\n      <yloc>290</yloc>\n    </entry>\n    <entry>\n      <name>Clean Output</name>\n      <description/>\n      <type>DELETE_FOLDERS</type>\n      <arg_from_previous>N</arg_from_previous>\n      <success_condition>success_if_no_errors</success_condition>\n      <limit_folders>10</limit_folders>\n      <fields>\n        <field>\n          <name>hdfs://hadoop-server:8020/wordcount/output</name>\n        </field>\n      </fields>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>252</xloc>\n      <yloc>290</yloc>\n    </entry>\n    <entry>\n      <name>Success</name>\n      <description/>\n      <type>SUCCESS</type>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>678</xloc>\n      <yloc>290</yloc>\n    </entry>\n  </entries>\n  <hops>\n    <hop>\n      <from>START</from>\n      <to>Clean Output</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n    <hop>\n      <from>Clean Output</from>\n      <to>WordCount - Advanced</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n    <hop>\n      <from>WordCount - Advanced</from>\n   
   <to>Success</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n  </hops>\n  <notepads>\n    <notepad>\n      <note>Cleans up the output directory</note>\n      <xloc>191</xloc>\n      <yloc>236</yloc>\n      <width>183</width>\n      <heigth>23</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>Hadoop WordCount MapReduce Job\n- Edit the Input and Output directory paths\n- Choose Hadoop Cluster</note>\n      <xloc>399</xloc>\n      <yloc>353</yloc>\n      <width>250</width>\n      <heigth>49</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>Monitor the logs for progress\n(if blocking option is selected)</note>\n      <xloc>610</xloc>\n      <yloc>222</yloc>\n      
<width>178</width>\n      <heigth>36</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>SETUP INSTRUCTIONS:\n1. Update the HDFS path within the 'Clean Output' step to match your Hadoop server location and path to where you intend to generate output from the wordcount example\n2. Create an input directory in HDFS and place text file(s) in the input directory that you want to use to test the wordcount example\n3. 
Update the 'WordCount - Advanced' step (Job Setup and Cluster tabs) to configure the correct paths and server name including:\n    - Input Path - the path in HDFS from which to read files for counting\n    - Output Path - where the processed count of words will be placed\n\n*Note: Source code for the sample jar can be found alongside this sample in your samples directory.</note>\n      <xloc>15</xloc>\n      <yloc>26</yloc>\n      <width>991</width>\n      <heigth>114</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>165</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n  </notepads>\n  <attributes>\n    <group>\n      <name>JobRestart</name>\n      <attribute>\n        <key>UniqueConnections</key>\n        <value>N</value>\n      </attribute>\n    </group>\n  </attributes>\n</job>\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/Hadoop Job Executor adv.kjb",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<job>\n  <name>Hadoop Job Executor adv</name>\n  <description/>\n  <extended_description/>\n  <job_version/>\n  <job_status>0</job_status>\n  <directory>&#x2f;</directory>\n  <created_user>-</created_user>\n  <created_date>2010&#x2f;07&#x2f;12 13&#x3a;45&#x3a;27.737</created_date>\n  <modified_user>-</modified_user>\n  <modified_date>2010&#x2f;07&#x2f;12 13&#x3a;45&#x3a;27.737</modified_date>\n    <parameters>\n    </parameters>\n    <slaveservers>\n    </slaveservers>\n      <job-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <size_limit_lines/>\n        <interval/>\n        <timeout_days/>\n        <field>\n          <id>ID_JOB</id>\n          <enabled>Y</enabled>\n          <name>ID_JOB</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>JOBNAME</id>\n          <enabled>Y</enabled>\n          <name>JOBNAME</name>\n        </field>\n        <field>\n          <id>STATUS</id>\n          <enabled>Y</enabled>\n          <name>STATUS</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          
<name>LINES_REJECTED</name>\n        </field>\n        <field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>STARTDATE</id>\n          <enabled>Y</enabled>\n          <name>STARTDATE</name>\n        </field>\n        <field>\n          <id>ENDDATE</id>\n          <enabled>Y</enabled>\n          <name>ENDDATE</name>\n        </field>\n        <field>\n          <id>LOGDATE</id>\n          <enabled>Y</enabled>\n          <name>LOGDATE</name>\n        </field>\n        <field>\n          <id>DEPDATE</id>\n          <enabled>Y</enabled>\n          <name>DEPDATE</name>\n        </field>\n        <field>\n          <id>REPLAYDATE</id>\n          <enabled>Y</enabled>\n          <name>REPLAYDATE</name>\n        </field>\n        <field>\n          <id>LOG_FIELD</id>\n          <enabled>Y</enabled>\n          <name>LOG_FIELD</name>\n        </field>\n        <field>\n          <id>EXECUTING_SERVER</id>\n          <enabled>N</enabled>\n          <name>EXECUTING_SERVER</name>\n        </field>\n        <field>\n          <id>EXECUTING_USER</id>\n          <enabled>N</enabled>\n          <name>EXECUTING_USER</name>\n        </field>\n        <field>\n          <id>START_JOB_ENTRY</id>\n          <enabled>N</enabled>\n          <name>START_JOB_ENTRY</name>\n        </field>\n        <field>\n          <id>CLIENT</id>\n          <enabled>N</enabled>\n          <name>CLIENT</name>\n        </field>\n      </job-log-table>\n      <jobentry-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          
<name>LOG_DATE</name>\n        </field>\n        <field>\n          <id>JOBNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          <id>JOBENTRYNAME</id>\n          <enabled>Y</enabled>\n          <name>STEPNAME</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n        </field>\n        <field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>RESULT</id>\n          <enabled>Y</enabled>\n          <name>RESULT</name>\n        </field>\n        <field>\n          <id>NR_RESULT_ROWS</id>\n          <enabled>Y</enabled>\n          <name>NR_RESULT_ROWS</name>\n        </field>\n        <field>\n          <id>NR_RESULT_FILES</id>\n          <enabled>Y</enabled>\n          <name>NR_RESULT_FILES</name>\n        </field>\n        <field>\n          <id>LOG_FIELD</id>\n          <enabled>N</enabled>\n          <name>LOG_FIELD</name>\n        </field>\n        <field>\n          <id>COPY_NR</id>\n          <enabled>N</enabled>\n          <name>COPY_NR</name>\n        </field>\n      </jobentry-log-table>\n      <channel-log-table>\n        <connection/>\n  
      <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        </field>\n        <field>\n          <id>LOGGING_OBJECT_TYPE</id>\n          <enabled>Y</enabled>\n          <name>LOGGING_OBJECT_TYPE</name>\n        </field>\n        <field>\n          <id>OBJECT_NAME</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_NAME</name>\n        </field>\n        <field>\n          <id>OBJECT_COPY</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_COPY</name>\n        </field>\n        <field>\n          <id>REPOSITORY_DIRECTORY</id>\n          <enabled>Y</enabled>\n          <name>REPOSITORY_DIRECTORY</name>\n        </field>\n        <field>\n          <id>FILENAME</id>\n          <enabled>Y</enabled>\n          <name>FILENAME</name>\n        </field>\n        <field>\n          <id>OBJECT_ID</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_ID</name>\n        </field>\n        <field>\n          <id>OBJECT_REVISION</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_REVISION</name>\n        </field>\n        <field>\n          <id>PARENT_CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>PARENT_CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>ROOT_CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>ROOT_CHANNEL_ID</name>\n        </field>\n      </channel-log-table>\n<checkpoint-log-table><connection/>\n<schema/>\n<table/>\n<timeout_days/>\n<max_nr_retries/>\n<run_retry_period/>\n<namespace_parameter/>\n<save_parameters/>\n<save_result_rows/>\n<save_result_files/>\n        <field>\n          <id>ID_JOB_RUN</id>\n    
      <enabled>Y</enabled>\n          <name>ID_JOB_RUN</name>\n        </field>\n        <field>\n          <id>ID_JOB</id>\n          <enabled>Y</enabled>\n          <name>ID_JOB</name>\n        </field>\n        <field>\n          <id>JOBNAME</id>\n          <enabled>Y</enabled>\n          <name>JOBNAME</name>\n        </field>\n        <field>\n          <id>NAMESPACE</id>\n          <enabled>Y</enabled>\n          <name>NAMESPACE</name>\n        </field>\n        <field>\n          <id>CHECKPOINT_NAME</id>\n          <enabled>Y</enabled>\n          <name>CHECKPOINT_NAME</name>\n        </field>\n        <field>\n          <id>CHECKPOINT_COPYNR</id>\n          <enabled>Y</enabled>\n          <name>CHECKPOINT_COPYNR</name>\n        </field>\n        <field>\n          <id>ATTEMPT_NR</id>\n          <enabled>Y</enabled>\n          <name>ATTEMPT_NR</name>\n        </field>\n        <field>\n          <id>JOB_RUN_START_DATE</id>\n          <enabled>Y</enabled>\n          <name>JOB_RUN_START_DATE</name>\n        </field>\n        <field>\n          <id>LOGDATE</id>\n          <enabled>Y</enabled>\n          <name>LOGDATE</name>\n        </field>\n        <field>\n          <id>RESULT_XML</id>\n          <enabled>Y</enabled>\n          <name>RESULT_XML</name>\n        </field>\n        <field>\n          <id>PARAMETER_XML</id>\n          <enabled>Y</enabled>\n          <name>PARAMETER_XML</name>\n        </field>\n</checkpoint-log-table>\n   <pass_batchid>N</pass_batchid>\n   <shared_objects_file/>\n  <entries>\n    <entry>\n      <name>WordCount - Advanced</name>\n      <description/>\n      <type>HadoopJobExecutorPlugin</type>\n      <hadoop_job_name>hadoopjob - wordcount</hadoop_job_name>\n      <simple>N</simple>\n      <jar_url>.&#x2f;samples&#x2f;jobs&#x2f;hadoop&#x2f;pentaho-mapreduce-sample.jar</jar_url>\n      <driver_class/>\n      <command_line_args>--input&#x3d;&#x2f;wordcount&#x2f;input --output&#x3d;&#x2f;wordcount&#x2f;output 
--hdfsHost&#x3d;hadoop-server&#x3a;8020 --jobTrackerHost&#x3d;hadoop-server&#x3a;8021</command_line_args>\n      <simple_blocking>N</simple_blocking>\n      <blocking>Y</blocking>\n      <logging_interval>5</logging_interval>\n      <simple_logging_interval/>\n      <hadoop_job_name>hadoopjob - wordcount</hadoop_job_name>\n      <mapper_class>org.pentaho.hadoop.sample.wordcount.WordCountMapper</mapper_class>\n      <combiner_class>org.pentaho.hadoop.sample.wordcount.WordCountReducer</combiner_class>\n      <reducer_class>org.pentaho.hadoop.sample.wordcount.WordCountReducer</reducer_class>\n      <input_path>&#x2f;wordcount&#x2f;input</input_path>\n      <input_format_class>org.apache.hadoop.mapred.TextInputFormat</input_format_class>\n      <output_path>&#x2f;wordcount&#x2f;output</output_path>\n      <output_key_class>org.apache.hadoop.io.Text</output_key_class>\n      <output_value_class>org.apache.hadoop.io.IntWritable</output_value_class>\n      <output_format_class>org.apache.hadoop.mapred.TextOutputFormat</output_format_class>\n      <hdfs_hostname>hadoop-server</hdfs_hostname>\n      <hdfs_port>8020</hdfs_port>\n      <job_tracker_hostname>hadoop-server</job_tracker_hostname>\n      <job_tracker_port>8021</job_tracker_port>\n      <num_map_tasks>2</num_map_tasks>\n      <num_reduce_tasks>1</num_reduce_tasks>\n      <user_defined_list>\n        <user_defined>\n          <name>pentaho.hadoop.property.name1</name>\n          <value>pentaho.hadoop.property.value1</value>\n        </user_defined>\n        <user_defined>\n          <name>pentaho.hadoop.property.name2</name>\n          <value>pentaho.hadoop.property.value2</value>\n        </user_defined>\n      </user_defined_list>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>478</xloc>\n      <yloc>290</yloc>\n    </entry>\n    <entry>\n      <name>START</name>\n      <description/>\n      <type>SPECIAL</type>\n      <start>Y</start>\n      <dummy>N</dummy>\n      
<repeat>N</repeat>\n      <schedulerType>0</schedulerType>\n      <intervalSeconds>0</intervalSeconds>\n      <intervalMinutes>60</intervalMinutes>\n      <hour>12</hour>\n      <minutes>0</minutes>\n      <weekDay>1</weekDay>\n      <DayOfMonth>1</DayOfMonth>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>56</xloc>\n      <yloc>290</yloc>\n    </entry>\n    <entry>\n      <name>Clean Output</name>\n      <description/>\n      <type>DELETE_FOLDERS</type>\n      <arg_from_previous>N</arg_from_previous>\n      <success_condition>success_if_no_errors</success_condition>\n      <limit_folders>10</limit_folders>\n      <fields>\n        <field>\n          <name>hdfs&#x3a;&#x2f;&#x2f;hadoop-server&#x3a;8020&#x2f;wordcount&#x2f;output</name>\n        </field>\n      </fields>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>252</xloc>\n      <yloc>290</yloc>\n    </entry>\n    <entry>\n      <name>Success</name>\n      <description/>\n      <type>SUCCESS</type>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>678</xloc>\n      <yloc>290</yloc>\n    </entry>\n  </entries>\n  <hops>\n    <hop>\n      <from>START</from>\n      <to>Clean Output</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n    <hop>\n      <from>Clean Output</from>\n      <to>WordCount - Advanced</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n    <hop>\n      <from>WordCount - Advanced</from>\n      <to>Success</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n  </hops>\n  <notepads>\n    <notepad>\n      <note>Cleans up the output 
directory</note>\n      <xloc>191</xloc>\n      <yloc>236</yloc>\n      <width>156</width>\n      <heigth>22</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>Hadoop WordCount MapReduce Job&#xa;- Edit the Input and Output directory paths&#xa;- Edit the HDFS hostname&#xa;- Edit the Job Tracker hostname&#xa;</note>\n      <xloc>399</xloc>\n      <yloc>353</yloc>\n      <width>219</width>\n      <heigth>60</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>Monitor the logs for progress&#xa;&#x28;if blocking option is selected&#x29;</note>\n      <xloc>610</xloc>\n      <yloc>222</yloc>\n      <width>156</width>\n      <heigth>35</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      
<fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>SETUP INSTRUCTIONS&#x3a;&#xa;1. Update the HDFS path within the &#x27;Clean Output&#x27; step to match your Hadoop server location and path to where you intend to generate output from the wordcount example&#xa;2. Create an input directory in HDFS and place text file&#x28;s&#x29; in the input directory that you want to use to test the wordcount example&#xa;3. Update the &#x27;WordCount - Advanced&#x27; step &#x28;Job Setup and Cluster tabs&#x29; to configure the correct paths and server name including&#x3a;&#xa;    - Input Path - the path in HDFS from which to read files for counting&#xa;    - Output Path - where the processed count of words will be placed&#xa;    - HDFS Hostname&#xa;    - Job Tracker Hostname&#xa;&#xa;&#x2a;Note&#x3a; Source code for the sample jar can be found alongside this sample in your samples directory.</note>\n      <xloc>15</xloc>\n      <yloc>26</yloc>\n      <width>578</width>\n      <heigth>100</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>165</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n    
  <drawshadow>Y</drawshadow>\n    </notepad>\n  </notepads>\n</job>\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/Hadoop Job Executor simple.kjb",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<job>\n  <name>Hadoop Job Executor simple</name>\n    <description/>\n    <extended_description/>\n    <job_version/>\n    <job_status>0</job_status>\n  <directory>&#47;</directory>\n  <created_user>-</created_user>\n  <created_date>2010&#47;07&#47;12 13:45:27.737</created_date>\n  <modified_user>-</modified_user>\n  <modified_date>2010&#47;07&#47;12 13:45:27.737</modified_date>\n    <parameters>\n    </parameters>\n    <slaveservers>\n    </slaveservers>\n<job-log-table><connection/>\n<schema/>\n<table/>\n<size_limit_lines/>\n<interval/>\n<timeout_days/>\n<field><id>ID_JOB</id><enabled>Y</enabled><name>ID_JOB</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>JOBNAME</id><enabled>Y</enabled><name>JOBNAME</name></field><field><id>STATUS</id><enabled>Y</enabled><name>STATUS</name></field><field><id>LINES_READ</id><enabled>Y</enabled><name>LINES_READ</name></field><field><id>LINES_WRITTEN</id><enabled>Y</enabled><name>LINES_WRITTEN</name></field><field><id>LINES_UPDATED</id><enabled>Y</enabled><name>LINES_UPDATED</name></field><field><id>LINES_INPUT</id><enabled>Y</enabled><name>LINES_INPUT</name></field><field><id>LINES_OUTPUT</id><enabled>Y</enabled><name>LINES_OUTPUT</name></field><field><id>LINES_REJECTED</id><enabled>Y</enabled><name>LINES_REJECTED</name></field><field><id>ERRORS</id><enabled>Y</enabled><name>ERRORS</name></field><field><id>STARTDATE</id><enabled>Y</enabled><name>STARTDATE</name></field><field><id>ENDDATE</id><enabled>Y</enabled><name>ENDDATE</name></field><field><id>LOGDATE</id><enabled>Y</enabled><name>LOGDATE</name></field><field><id>DEPDATE</id><enabled>Y</enabled><name>DEPDATE</name></field><field><id>REPLAYDATE</id><enabled>Y</enabled><name>REPLAYDATE</name></field><field><id>LOG_FIELD</id><enabled>Y</enabled><name>LOG_FIELD</name></field></job-log-table>\n<jobentry-log-table><connection/>\n<schema/>\n<table/>\n<timeout_days/>\n<fi
eld><id>ID_BATCH</id><enabled>Y</enabled><name>ID_BATCH</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>LOG_DATE</id><enabled>Y</enabled><name>LOG_DATE</name></field><field><id>JOBNAME</id><enabled>Y</enabled><name>TRANSNAME</name></field><field><id>JOBENTRYNAME</id><enabled>Y</enabled><name>STEPNAME</name></field><field><id>LINES_READ</id><enabled>Y</enabled><name>LINES_READ</name></field><field><id>LINES_WRITTEN</id><enabled>Y</enabled><name>LINES_WRITTEN</name></field><field><id>LINES_UPDATED</id><enabled>Y</enabled><name>LINES_UPDATED</name></field><field><id>LINES_INPUT</id><enabled>Y</enabled><name>LINES_INPUT</name></field><field><id>LINES_OUTPUT</id><enabled>Y</enabled><name>LINES_OUTPUT</name></field><field><id>LINES_REJECTED</id><enabled>Y</enabled><name>LINES_REJECTED</name></field><field><id>ERRORS</id><enabled>Y</enabled><name>ERRORS</name></field><field><id>RESULT</id><enabled>Y</enabled><name>RESULT</name></field><field><id>NR_RESULT_ROWS</id><enabled>Y</enabled><name>NR_RESULT_ROWS</name></field><field><id>NR_RESULT_FILES</id><enabled>Y</enabled><name>NR_RESULT_FILES</name></field><field><id>LOG_FIELD</id><enabled>N</enabled><name>LOG_FIELD</name></field><field><id>COPY_NR</id><enabled>N</enabled><name>COPY_NR</name></field></jobentry-log-table>\n<channel-log-table><connection/>\n<schema/>\n<table/>\n<timeout_days/>\n<field><id>ID_BATCH</id><enabled>Y</enabled><name>ID_BATCH</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>LOG_DATE</id><enabled>Y</enabled><name>LOG_DATE</name></field><field><id>LOGGING_OBJECT_TYPE</id><enabled>Y</enabled><name>LOGGING_OBJECT_TYPE</name></field><field><id>OBJECT_NAME</id><enabled>Y</enabled><name>OBJECT_NAME</name></field><field><id>OBJECT_COPY</id><enabled>Y</enabled><name>OBJECT_COPY</name></field><field><id>REPOSITORY_DIRECTORY</id><enabled>Y</enabled><name>REPOSITORY_DIRECTORY</name></field><field><id>FILENAM
E</id><enabled>Y</enabled><name>FILENAME</name></field><field><id>OBJECT_ID</id><enabled>Y</enabled><name>OBJECT_ID</name></field><field><id>OBJECT_REVISION</id><enabled>Y</enabled><name>OBJECT_REVISION</name></field><field><id>PARENT_CHANNEL_ID</id><enabled>Y</enabled><name>PARENT_CHANNEL_ID</name></field><field><id>ROOT_CHANNEL_ID</id><enabled>Y</enabled><name>ROOT_CHANNEL_ID</name></field></channel-log-table>\n   <pass_batchid>N</pass_batchid>\n   <shared_objects_file/>\n  <entries>\n    <entry>\n      <name>WordCount - Simple</name>\n      <description/>\n      <type>HadoopJobExecutorPlugin</type>\n      <hadoop_job_name>PDI Hadoop - WordCount - Simple</hadoop_job_name>\n      <simple>Y</simple>\n      <jar_url>.&#47;samples&#47;jobs&#47;hadoop&#47;pentaho-mapreduce-sample.jar</jar_url>\n      <driver_class>org.pentaho.hadoop.sample.wordcount.WordCount</driver_class>\n      <command_line_args>--input=&#47;wordcount&#47;input --output=&#47;wordcount&#47;output --hdfsHost=hadoop-server:8020 --jobTrackerHost=hadoop-server:8021</command_line_args>\n      <simple_blocking>Y</simple_blocking>\n      <blocking>Y</blocking>\n      <logging_interval>5</logging_interval>\n      <simple_logging_interval>1</simple_logging_interval>\n      <mapper_class>org.pentaho.hadoop.sample.wordcount.WordCountMapper</mapper_class>\n      <combiner_class>org.pentaho.hadoop.sample.wordcount.WordCountReducer</combiner_class>\n      <reducer_class>org.pentaho.hadoop.sample.wordcount.WordCountReducer</reducer_class>\n      <input_path>&#47;wordcount&#47;input</input_path>\n      <input_format_class>org.apache.hadoop.mapred.TextInputFormat</input_format_class>\n      <output_path>&#47;wordcount&#47;output</output_path>\n      <output_key_class>org.apache.hadoop.io.Text</output_key_class>\n      <output_value_class>org.apache.hadoop.io.IntWritable</output_value_class>\n      
<output_format_class>org.apache.hadoop.mapred.TextOutputFormat</output_format_class>\n      <hdfs_hostname>hadoop-server</hdfs_hostname>\n      <hdfs_port>8020</hdfs_port>\n      <job_tracker_hostname>hadoop-server</job_tracker_hostname>\n      <job_tracker_port>8021</job_tracker_port>\n      <num_map_tasks>2</num_map_tasks>\n      <num_reduce_tasks>1</num_reduce_tasks>\n      <user_defined_list>\n        <user_defined>\n          <name>pentaho.hadoop.property.name1</name>\n          <value>pentaho.hadoop.property.value1</value>\n        </user_defined>\n        <user_defined>\n          <name>pentaho.hadoop.property.name2</name>\n          <value>pentaho.hadoop.property.value2</value>\n        </user_defined>\n      </user_defined_list>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>508</xloc>\n      <yloc>208</yloc>\n      </entry>\n    <entry>\n      <name>START</name>\n      <description/>\n      <type>SPECIAL</type>\n      <start>Y</start>\n      <dummy>N</dummy>\n      <repeat>N</repeat>\n      <schedulerType>0</schedulerType>\n      <intervalSeconds>0</intervalSeconds>\n      <intervalMinutes>60</intervalMinutes>\n      <hour>12</hour>\n      <minutes>0</minutes>\n      <weekDay>1</weekDay>\n      <DayOfMonth>1</DayOfMonth>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>87</xloc>\n      <yloc>208</yloc>\n      </entry>\n    <entry>\n      <name>Clean Output</name>\n      <description/>\n      <type>DELETE_FOLDERS</type>\n      <arg_from_previous>N</arg_from_previous>\n      <success_condition>success_if_no_errors</success_condition>\n      <limit_folders>10</limit_folders>\n      <fields>\n        <field>\n          <name>hdfs:&#47;&#47;hadoop-server:8020&#47;wordcount&#47;output</name>\n        </field>\n      </fields>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>283</xloc>\n      <yloc>208</yloc>\n      </entry>\n    <entry>\n      
<name>Success</name>\n      <description/>\n      <type>SUCCESS</type>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>709</xloc>\n      <yloc>208</yloc>\n      </entry>\n  </entries>\n  <hops>\n    <hop>\n      <from>START</from>\n      <to>Clean Output</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n    <hop>\n      <from>Clean Output</from>\n      <to>WordCount - Simple</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n    <hop>\n      <from>WordCount - Simple</from>\n      <to>Success</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n  </hops>\n  <notepads>\n    <notepad>\n      <note>Cleans up the output directory</note>\n      <xloc>216</xloc>\n      <yloc>145</yloc>\n      <width>161</width>\n      <heigth>28</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>Hadoop WordCount MapReduce Job</note>\n      <xloc>421</xloc>\n      <yloc>275</yloc>\n      <width>197</width>\n      <heigth>28</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n    
  <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>SETUP INSTRUCTIONS:\n1. Update the HDFS path within the &apos;Clean Output&apos; step to match your Hadoop server location and path to where you intend to generate output from the wordcount example\n2. Create an input directory in HDFS and place text file(s) in the input directory that you want to use to test the wordcount example\n3. Update the &apos;Wordcount - Simple&apos; step to provide the appropriate input and output directory locations in HDFS (defined in command line interface)\n\n*Note: Source code for the sample jar can be found alongside this sample in your samples directory.</note>\n      <xloc>22</xloc>\n      <yloc>28</yloc>\n      <width>807</width>\n      <heigth>76</heigth>\n      <fontname>Arial</fontname>\n      <fontsize>10</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>165</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n  </notepads>\n</job>\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/Pentaho MapReduce - weblogs.kjb",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<job>\n  <name>Pentaho MapReduce - weblogs</name>\n    <description/>\n    <extended_description/>\n    <job_version/>\n    <job_status>0</job_status>\n  <directory>&#47;</directory>\n  <created_user>-</created_user>\n  <created_date>2010&#47;07&#47;19 21:35:45.843</created_date>\n  <modified_user>-</modified_user>\n  <modified_date>2010&#47;07&#47;19 21:35:45.843</modified_date>\n    <parameters>\n    </parameters>\n    <slaveservers>\n    </slaveservers>\n<job-log-table><connection/>\n<schema/>\n<table/>\n<size_limit_lines/>\n<interval/>\n<timeout_days/>\n<field><id>ID_JOB</id><enabled>Y</enabled><name>ID_JOB</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>JOBNAME</id><enabled>Y</enabled><name>JOBNAME</name></field><field><id>STATUS</id><enabled>Y</enabled><name>STATUS</name></field><field><id>LINES_READ</id><enabled>Y</enabled><name>LINES_READ</name></field><field><id>LINES_WRITTEN</id><enabled>Y</enabled><name>LINES_WRITTEN</name></field><field><id>LINES_UPDATED</id><enabled>Y</enabled><name>LINES_UPDATED</name></field><field><id>LINES_INPUT</id><enabled>Y</enabled><name>LINES_INPUT</name></field><field><id>LINES_OUTPUT</id><enabled>Y</enabled><name>LINES_OUTPUT</name></field><field><id>LINES_REJECTED</id><enabled>Y</enabled><name>LINES_REJECTED</name></field><field><id>ERRORS</id><enabled>Y</enabled><name>ERRORS</name></field><field><id>STARTDATE</id><enabled>Y</enabled><name>STARTDATE</name></field><field><id>ENDDATE</id><enabled>Y</enabled><name>ENDDATE</name></field><field><id>LOGDATE</id><enabled>Y</enabled><name>LOGDATE</name></field><field><id>DEPDATE</id><enabled>Y</enabled><name>DEPDATE</name></field><field><id>REPLAYDATE</id><enabled>Y</enabled><name>REPLAYDATE</name></field><field><id>LOG_FIELD</id><enabled>Y</enabled><name>LOG_FIELD</name></field></job-log-table>\n<jobentry-log-table><connection/>\n<schema/>\n<table/>\n<timeout_days/>\n<f
ield><id>ID_BATCH</id><enabled>Y</enabled><name>ID_BATCH</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>LOG_DATE</id><enabled>Y</enabled><name>LOG_DATE</name></field><field><id>JOBNAME</id><enabled>Y</enabled><name>TRANSNAME</name></field><field><id>JOBENTRYNAME</id><enabled>Y</enabled><name>STEPNAME</name></field><field><id>LINES_READ</id><enabled>Y</enabled><name>LINES_READ</name></field><field><id>LINES_WRITTEN</id><enabled>Y</enabled><name>LINES_WRITTEN</name></field><field><id>LINES_UPDATED</id><enabled>Y</enabled><name>LINES_UPDATED</name></field><field><id>LINES_INPUT</id><enabled>Y</enabled><name>LINES_INPUT</name></field><field><id>LINES_OUTPUT</id><enabled>Y</enabled><name>LINES_OUTPUT</name></field><field><id>LINES_REJECTED</id><enabled>Y</enabled><name>LINES_REJECTED</name></field><field><id>ERRORS</id><enabled>Y</enabled><name>ERRORS</name></field><field><id>RESULT</id><enabled>Y</enabled><name>RESULT</name></field><field><id>NR_RESULT_ROWS</id><enabled>Y</enabled><name>NR_RESULT_ROWS</name></field><field><id>NR_RESULT_FILES</id><enabled>Y</enabled><name>NR_RESULT_FILES</name></field><field><id>LOG_FIELD</id><enabled>N</enabled><name>LOG_FIELD</name></field><field><id>COPY_NR</id><enabled>N</enabled><name>COPY_NR</name></field></jobentry-log-table>\n<channel-log-table><connection/>\n<schema/>\n<table/>\n<timeout_days/>\n<field><id>ID_BATCH</id><enabled>Y</enabled><name>ID_BATCH</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>LOG_DATE</id><enabled>Y</enabled><name>LOG_DATE</name></field><field><id>LOGGING_OBJECT_TYPE</id><enabled>Y</enabled><name>LOGGING_OBJECT_TYPE</name></field><field><id>OBJECT_NAME</id><enabled>Y</enabled><name>OBJECT_NAME</name></field><field><id>OBJECT_COPY</id><enabled>Y</enabled><name>OBJECT_COPY</name></field><field><id>REPOSITORY_DIRECTORY</id><enabled>Y</enabled><name>REPOSITORY_DIRECTORY</name></field><field><id>FILENA
ME</id><enabled>Y</enabled><name>FILENAME</name></field><field><id>OBJECT_ID</id><enabled>Y</enabled><name>OBJECT_ID</name></field><field><id>OBJECT_REVISION</id><enabled>Y</enabled><name>OBJECT_REVISION</name></field><field><id>PARENT_CHANNEL_ID</id><enabled>Y</enabled><name>PARENT_CHANNEL_ID</name></field><field><id>ROOT_CHANNEL_ID</id><enabled>Y</enabled><name>ROOT_CHANNEL_ID</name></field></channel-log-table>\n   <pass_batchid>N</pass_batchid>\n   <shared_objects_file/>\n  <entries>\n    <entry>\n      <name>Pentaho MapReduce (Web Log Parsing)</name>\n      <description/>\n      <type>HadoopTransJobExecutorPlugin</type>\n      <hadoop_job_name>Web Logs- Number of HTTP Methods by Month</hadoop_job_name>\n      <map_trans_repo_dir/>\n      <map_trans_repo_file/>\n      <map_trans_repo_reference/>\n      <map_trans>${Internal.Job.Filename.Directory}&#47;weblogs-mapper.ktr</map_trans>\n      <combiner_trans_repo_dir/>\n      <combiner_trans_repo_file/>\n      <combiner_trans_repo_reference/>\n      <combiner_single_threaded>N</combiner_single_threaded>\n      <combiner_trans/>\n      <reduce_trans_repo_dir/>\n      <reduce_trans_repo_file/>\n      <reduce_trans_repo_reference/>\n      <reduce_trans>${Internal.Job.Filename.Directory}&#47;weblogs-reducer.ktr</reduce_trans>\n      <reduce_single_threaded>N</reduce_single_threaded>\n      <map_input_step_name>Hadoop Input</map_input_step_name>\n      <map_output_step_name>Hadoop Output</map_output_step_name>\n      <combiner_input_step_name/>\n      <combiner_output_step_name/>\n      <reduce_input_step_name>Hadoop Input</reduce_input_step_name>\n      <reduce_output_step_name>Hadoop Output</reduce_output_step_name>\n      <blocking>Y</blocking>\n      <logging_interval>5</logging_interval>\n      <input_path>&#47;weblogs&#47;input</input_path>\n      <input_format_class>org.apache.hadoop.mapred.TextInputFormat</input_format_class>\n      <output_path>&#47;weblogs&#47;output</output_path>\n      
<clean_output_path>Y</clean_output_path>\n      <suppress_output_map_key>N</suppress_output_map_key>\n      <suppress_output_map_value>N</suppress_output_map_value>\n      <suppress_output_key>N</suppress_output_key>\n      <suppress_output_value>N</suppress_output_value>\n      <output_format_class>org.apache.hadoop.mapred.TextOutputFormat</output_format_class>\n      <hdfs_hostname>hadoop-server</hdfs_hostname>\n      <hdfs_port>8020</hdfs_port>\n      <job_tracker_hostname>hadoop-server</job_tracker_hostname>\n      <job_tracker_port>8021</job_tracker_port>\n      <num_map_tasks>2</num_map_tasks>\n      <num_reduce_tasks>1</num_reduce_tasks>\n      <working_dir>&#47;var&#47;tmp</working_dir>\n      <user_defined_list>\n      </user_defined_list>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>337</xloc>\n      <yloc>286</yloc>\n      </entry>\n    <entry>\n      <name>START</name>\n      <description/>\n      <type>SPECIAL</type>\n      <start>Y</start>\n      <dummy>N</dummy>\n      <repeat>N</repeat>\n      <schedulerType>0</schedulerType>\n      <intervalSeconds>0</intervalSeconds>\n      <intervalMinutes>60</intervalMinutes>\n      <hour>12</hour>\n      <minutes>0</minutes>\n      <weekDay>1</weekDay>\n      <DayOfMonth>1</DayOfMonth>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>25</xloc>\n      <yloc>286</yloc>\n      </entry>\n    <entry>\n      <name>Success</name>\n      <description/>\n      <type>SUCCESS</type>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>493</xloc>\n      <yloc>286</yloc>\n      </entry>\n    <entry>\n      <name>Copy input files to HDFS</name>\n      <description/>\n      <type>HadoopCopyFilesPlugin</type>\n      <copy_empty_folders>Y</copy_empty_folders>\n      <arg_from_previous>N</arg_from_previous>\n      <overwrite_files>N</overwrite_files>\n      <include_subfolders>N</include_subfolders>\n      
<remove_source_files>N</remove_source_files>\n      <add_result_filesname>N</add_result_filesname>\n      <destination_is_a_file>N</destination_is_a_file>\n      <create_destination_folder>Y</create_destination_folder>\n      <fields>\n        <field>\n          <source_filefolder>${Internal.Job.Filename.Directory}&#47;files&#47;</source_filefolder>\n          <destination_filefolder>hdfs:&#47;&#47;hadoop-server&#47;weblogs&#47;input</destination_filefolder>\n          <wildcard>([^\\s]+(\\.(?i)(log))$)</wildcard>\n        </field>\n      </fields>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>181</xloc>\n      <yloc>286</yloc>\n      </entry>\n    <entry>\n      <name>Failure</name>\n      <description/>\n      <type>ABORT</type>\n      <message/>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>493</xloc>\n      <yloc>408</yloc>\n      </entry>\n  </entries>\n  <hops>\n    <hop>\n      <from>Copy input files to HDFS</from>\n      <to>Pentaho MapReduce (Web Log Parsing)</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>N</unconditional>\n    </hop>\n    <hop>\n      <from>START</from>\n      <to>Copy input files to HDFS</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n    <hop>\n      <from>Pentaho MapReduce (Web Log Parsing)</from>\n      <to>Success</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>N</unconditional>\n    </hop>\n    <hop>\n      <from>Pentaho MapReduce (Web Log Parsing)</from>\n      <to>Failure</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>N</evaluation>\n      <unconditional>N</unconditional>\n    </hop>\n  </hops>\n  
<notepads>\n    <notepad>\n      <note>This example:\n  - Reads one or more weblogs from an input directory in HDFS\n  - Uses the Transformation Job Executor to generate a new MapReduce job in Hadoop calling\n    - &apos;weblogs-mapper.ktr&apos; to parse the weblog and generate keys based on the Year and Month as part of the mapping phase\n    - &apos;weblogs-reducer.ktr&apos; to aggregate all page hits by Year and Month (our key) as part of the reducing phase\n\nSETUP INSTRUCTIONS:\n1. Update the &apos;Copy input files to HDFS&apos; step\n     - update the source path to match the location of your PDI installation directory\n     - update the target path to be the location of your input directory in HDFS where the job will read the weblog files from\n2. Update the &apos;Pentaho MapReduce (Web Log Parsing)&apos; step (Job Setup and Cluster tabs) to configure the correct paths and server names including:\n    - Input Path - the path in HDFS from which the job will read the weblog files\n    - Output Path - where the aggregated page hit counts will be placed\n    - HDFS Hostname\n    - Job Tracker Hostname</note>\n      <xloc>12</xloc>\n      <yloc>4</yloc>\n      <width>707</width>\n      <heigth>234</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>12</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>165</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n  </notepads>\n</job>\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/Pentaho MapReduce - wordcount.kjb",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<job>\n  <name>Pentaho MapReduce - wordcount</name>\n    <description/>\n    <extended_description/>\n    <job_version/>\n    <job_status>0</job_status>\n  <directory>&#47;</directory>\n  <created_user>-</created_user>\n  <created_date>2010&#47;07&#47;19 21:35:45.843</created_date>\n  <modified_user>-</modified_user>\n  <modified_date>2010&#47;07&#47;19 21:35:45.843</modified_date>\n    <parameters>\n    </parameters>\n    <slaveservers>\n    </slaveservers>\n<job-log-table><connection/>\n<schema/>\n<table/>\n<size_limit_lines/>\n<interval/>\n<timeout_days/>\n<field><id>ID_JOB</id><enabled>Y</enabled><name>ID_JOB</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>JOBNAME</id><enabled>Y</enabled><name>JOBNAME</name></field><field><id>STATUS</id><enabled>Y</enabled><name>STATUS</name></field><field><id>LINES_READ</id><enabled>Y</enabled><name>LINES_READ</name></field><field><id>LINES_WRITTEN</id><enabled>Y</enabled><name>LINES_WRITTEN</name></field><field><id>LINES_UPDATED</id><enabled>Y</enabled><name>LINES_UPDATED</name></field><field><id>LINES_INPUT</id><enabled>Y</enabled><name>LINES_INPUT</name></field><field><id>LINES_OUTPUT</id><enabled>Y</enabled><name>LINES_OUTPUT</name></field><field><id>LINES_REJECTED</id><enabled>Y</enabled><name>LINES_REJECTED</name></field><field><id>ERRORS</id><enabled>Y</enabled><name>ERRORS</name></field><field><id>STARTDATE</id><enabled>Y</enabled><name>STARTDATE</name></field><field><id>ENDDATE</id><enabled>Y</enabled><name>ENDDATE</name></field><field><id>LOGDATE</id><enabled>Y</enabled><name>LOGDATE</name></field><field><id>DEPDATE</id><enabled>Y</enabled><name>DEPDATE</name></field><field><id>REPLAYDATE</id><enabled>Y</enabled><name>REPLAYDATE</name></field><field><id>LOG_FIELD</id><enabled>Y</enabled><name>LOG_FIELD</name></field></job-log-table>\n<jobentry-log-table><connection/>\n<schema/>\n<table/>\n<timeout_days/>\n
<field><id>ID_BATCH</id><enabled>Y</enabled><name>ID_BATCH</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>LOG_DATE</id><enabled>Y</enabled><name>LOG_DATE</name></field><field><id>JOBNAME</id><enabled>Y</enabled><name>TRANSNAME</name></field><field><id>JOBENTRYNAME</id><enabled>Y</enabled><name>STEPNAME</name></field><field><id>LINES_READ</id><enabled>Y</enabled><name>LINES_READ</name></field><field><id>LINES_WRITTEN</id><enabled>Y</enabled><name>LINES_WRITTEN</name></field><field><id>LINES_UPDATED</id><enabled>Y</enabled><name>LINES_UPDATED</name></field><field><id>LINES_INPUT</id><enabled>Y</enabled><name>LINES_INPUT</name></field><field><id>LINES_OUTPUT</id><enabled>Y</enabled><name>LINES_OUTPUT</name></field><field><id>LINES_REJECTED</id><enabled>Y</enabled><name>LINES_REJECTED</name></field><field><id>ERRORS</id><enabled>Y</enabled><name>ERRORS</name></field><field><id>RESULT</id><enabled>Y</enabled><name>RESULT</name></field><field><id>NR_RESULT_ROWS</id><enabled>Y</enabled><name>NR_RESULT_ROWS</name></field><field><id>NR_RESULT_FILES</id><enabled>Y</enabled><name>NR_RESULT_FILES</name></field><field><id>LOG_FIELD</id><enabled>N</enabled><name>LOG_FIELD</name></field><field><id>COPY_NR</id><enabled>N</enabled><name>COPY_NR</name></field></jobentry-log-table>\n<channel-log-table><connection/>\n<schema/>\n<table/>\n<timeout_days/>\n<field><id>ID_BATCH</id><enabled>Y</enabled><name>ID_BATCH</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>LOG_DATE</id><enabled>Y</enabled><name>LOG_DATE</name></field><field><id>LOGGING_OBJECT_TYPE</id><enabled>Y</enabled><name>LOGGING_OBJECT_TYPE</name></field><field><id>OBJECT_NAME</id><enabled>Y</enabled><name>OBJECT_NAME</name></field><field><id>OBJECT_COPY</id><enabled>Y</enabled><name>OBJECT_COPY</name></field><field><id>REPOSITORY_DIRECTORY</id><enabled>Y</enabled><name>REPOSITORY_DIRECTORY</name></field><field><id>FILE
NAME</id><enabled>Y</enabled><name>FILENAME</name></field><field><id>OBJECT_ID</id><enabled>Y</enabled><name>OBJECT_ID</name></field><field><id>OBJECT_REVISION</id><enabled>Y</enabled><name>OBJECT_REVISION</name></field><field><id>PARENT_CHANNEL_ID</id><enabled>Y</enabled><name>PARENT_CHANNEL_ID</name></field><field><id>ROOT_CHANNEL_ID</id><enabled>Y</enabled><name>ROOT_CHANNEL_ID</name></field></channel-log-table>\n   <pass_batchid>N</pass_batchid>\n   <shared_objects_file/>\n  <entries>\n    <entry>\n      <name>Pentaho MapReduce (Wordcount)</name>\n      <description/>\n      <type>HadoopTransJobExecutorPlugin</type>\n      <hadoop_job_name>Pentaho MapReduce (Wordcount)</hadoop_job_name>\n      <map_trans_repo_dir/>\n      <map_trans_repo_file/>\n      <map_trans_repo_reference/>\n      <map_trans>${Internal.Job.Filename.Directory}&#47;wordcount-mapper.ktr</map_trans>\n      <combiner_trans_repo_dir/>\n      <combiner_trans_repo_file/>\n      <combiner_trans_repo_reference/>\n      <combiner_trans/>\n      <combiner_single_threaded>N</combiner_single_threaded>\n      <reduce_trans_repo_dir/>\n      <reduce_trans_repo_file/>\n      <reduce_trans_repo_reference/>\n      <reduce_trans>${Internal.Job.Filename.Directory}&#47;wordcount-reducer.ktr</reduce_trans>\n      <reduce_single_threaded>N</reduce_single_threaded>\n      <map_input_step_name>Hadoop Input</map_input_step_name>\n      <map_output_step_name>Hadoop Output</map_output_step_name>\n      <combiner_input_step_name/>\n      <combiner_output_step_name/>\n      <reduce_input_step_name>Hadoop Input</reduce_input_step_name>\n      <reduce_output_step_name>Hadoop Output</reduce_output_step_name>\n      <blocking>Y</blocking>\n      <logging_interval>5</logging_interval>\n      <input_path>&#47;wordcount&#47;input</input_path>\n      <input_format_class>org.apache.hadoop.mapred.TextInputFormat</input_format_class>\n      <output_path>&#47;wordcount&#47;output</output_path>\n      
<clean_output_path>Y</clean_output_path>\n      <suppress_output_map_key>N</suppress_output_map_key>\n      <suppress_output_map_value>N</suppress_output_map_value>\n      <suppress_output_key>N</suppress_output_key>\n      <suppress_output_value>N</suppress_output_value>\n      <output_format_class>org.apache.hadoop.mapred.TextOutputFormat</output_format_class>\n      <hdfs_hostname>hadoop-server</hdfs_hostname>\n      <hdfs_port>8020</hdfs_port>\n      <job_tracker_hostname>hadoop-server</job_tracker_hostname>\n      <job_tracker_port>8021</job_tracker_port>\n      <num_map_tasks>2</num_map_tasks>\n      <num_reduce_tasks>1</num_reduce_tasks>\n      <working_dir>&#47;var&#47;tmp</working_dir>\n      <user_defined_list>\n      </user_defined_list>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>299</xloc>\n      <yloc>158</yloc>\n      </entry>\n    <entry>\n      <name>START</name>\n      <description/>\n      <type>SPECIAL</type>\n      <start>Y</start>\n      <dummy>N</dummy>\n      <repeat>N</repeat>\n      <schedulerType>0</schedulerType>\n      <intervalSeconds>0</intervalSeconds>\n      <intervalMinutes>60</intervalMinutes>\n      <hour>12</hour>\n      <minutes>0</minutes>\n      <weekDay>1</weekDay>\n      <DayOfMonth>1</DayOfMonth>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>19</xloc>\n      <yloc>158</yloc>\n      </entry>\n    <entry>\n      <name>Failure</name>\n      <description/>\n      <type>ABORT</type>\n      <message/>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>440</xloc>\n      <yloc>292</yloc>\n      </entry>\n    <entry>\n      <name>Success</name>\n      <description/>\n      <type>SUCCESS</type>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>439</xloc>\n      <yloc>158</yloc>\n      </entry>\n    <entry>\n      <name>Copy Files to HDFS</name>\n      <description/>\n      
<type>HadoopCopyFilesPlugin</type>\n      <copy_empty_folders>Y</copy_empty_folders>\n      <arg_from_previous>N</arg_from_previous>\n      <overwrite_files>N</overwrite_files>\n      <include_subfolders>N</include_subfolders>\n      <remove_source_files>N</remove_source_files>\n      <add_result_filesname>N</add_result_filesname>\n      <destination_is_a_file>N</destination_is_a_file>\n      <create_destination_folder>Y</create_destination_folder>\n      <fields>\n        <field>\n          <source_filefolder>.&#47;</source_filefolder>\n          <destination_filefolder>hdfs:&#47;&#47;hadoop-server:8020&#47;wordcount&#47;input</destination_filefolder>\n          <wildcard>.*README.*</wildcard>\n        </field>\n      </fields>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>159</xloc>\n      <yloc>158</yloc>\n      </entry>\n  </entries>\n  <hops>\n    <hop>\n      <from>Pentaho MapReduce (Wordcount)</from>\n      <to>Success</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>N</unconditional>\n    </hop>\n    <hop>\n      <from>Pentaho MapReduce (Wordcount)</from>\n      <to>Failure</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>N</evaluation>\n      <unconditional>N</unconditional>\n    </hop>\n    <hop>\n      <from>Copy Files to HDFS</from>\n      <to>Pentaho MapReduce (Wordcount)</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n    <hop>\n      <from>START</from>\n      <to>Copy Files to HDFS</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n  </hops>\n  <notepads>\n    <notepad>\n      <note>SETUP INSTRUCTIONS:\n1. 
Create an input directory in HDFS and place text file(s) in the input directory that you want to use to test the wordcount example\n2. Update the &apos;Pentaho MapReduce&apos; step (Job Setup and Cluster tabs) to configure the correct paths and server names including:\n    - Input Path - the path in HDFS from which to read files for counting\n    - Output Path - where the processed count of words will be placed\n    - HDFS Hostname\n    - Job Tracker Hostname</note>\n      <xloc>20</xloc>\n      <yloc>40</yloc>\n      <width>443</width>\n      <heigth>73</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>165</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n  </notepads>\n</job>\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/Pig Script Executor tutorial local.kjb",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<job>\n  <name>Pig Script Executor tutorial local</name>\n  <description/>\n  <extended_description/>\n  <job_version/>\n  <job_status>0</job_status>\n  <directory>&#x2f;</directory>\n  <created_user>-</created_user>\n  <created_date>2011&#x2f;08&#x2f;01 15&#x3a;02&#x3a;59.357</created_date>\n  <modified_user>-</modified_user>\n  <modified_date>2011&#x2f;08&#x2f;01 15&#x3a;02&#x3a;59.357</modified_date>\n    <parameters>\n    </parameters>\n    <slaveservers>\n    </slaveservers>\n      <job-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <size_limit_lines/>\n        <interval/>\n        <timeout_days/>\n        <field>\n          <id>ID_JOB</id>\n          <enabled>Y</enabled>\n          <name>ID_JOB</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>JOBNAME</id>\n          <enabled>Y</enabled>\n          <name>JOBNAME</name>\n        </field>\n        <field>\n          <id>STATUS</id>\n          <enabled>Y</enabled>\n          <name>STATUS</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          
<name>LINES_REJECTED</name>\n        </field>\n        <field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>STARTDATE</id>\n          <enabled>Y</enabled>\n          <name>STARTDATE</name>\n        </field>\n        <field>\n          <id>ENDDATE</id>\n          <enabled>Y</enabled>\n          <name>ENDDATE</name>\n        </field>\n        <field>\n          <id>LOGDATE</id>\n          <enabled>Y</enabled>\n          <name>LOGDATE</name>\n        </field>\n        <field>\n          <id>DEPDATE</id>\n          <enabled>Y</enabled>\n          <name>DEPDATE</name>\n        </field>\n        <field>\n          <id>REPLAYDATE</id>\n          <enabled>Y</enabled>\n          <name>REPLAYDATE</name>\n        </field>\n        <field>\n          <id>LOG_FIELD</id>\n          <enabled>Y</enabled>\n          <name>LOG_FIELD</name>\n        </field>\n        <field>\n          <id>EXECUTING_SERVER</id>\n          <enabled>N</enabled>\n          <name>EXECUTING_SERVER</name>\n        </field>\n        <field>\n          <id>EXECUTING_USER</id>\n          <enabled>N</enabled>\n          <name>EXECUTING_USER</name>\n        </field>\n        <field>\n          <id>START_JOB_ENTRY</id>\n          <enabled>N</enabled>\n          <name>START_JOB_ENTRY</name>\n        </field>\n        <field>\n          <id>CLIENT</id>\n          <enabled>N</enabled>\n          <name>CLIENT</name>\n        </field>\n      </job-log-table>\n      <jobentry-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          
<name>LOG_DATE</name>\n        </field>\n        <field>\n          <id>JOBNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          <id>JOBENTRYNAME</id>\n          <enabled>Y</enabled>\n          <name>STEPNAME</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n        </field>\n        <field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>RESULT</id>\n          <enabled>Y</enabled>\n          <name>RESULT</name>\n        </field>\n        <field>\n          <id>NR_RESULT_ROWS</id>\n          <enabled>Y</enabled>\n          <name>NR_RESULT_ROWS</name>\n        </field>\n        <field>\n          <id>NR_RESULT_FILES</id>\n          <enabled>Y</enabled>\n          <name>NR_RESULT_FILES</name>\n        </field>\n        <field>\n          <id>LOG_FIELD</id>\n          <enabled>N</enabled>\n          <name>LOG_FIELD</name>\n        </field>\n        <field>\n          <id>COPY_NR</id>\n          <enabled>N</enabled>\n          <name>COPY_NR</name>\n        </field>\n      </jobentry-log-table>\n      <channel-log-table>\n        <connection/>\n  
      <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        </field>\n        <field>\n          <id>LOGGING_OBJECT_TYPE</id>\n          <enabled>Y</enabled>\n          <name>LOGGING_OBJECT_TYPE</name>\n        </field>\n        <field>\n          <id>OBJECT_NAME</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_NAME</name>\n        </field>\n        <field>\n          <id>OBJECT_COPY</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_COPY</name>\n        </field>\n        <field>\n          <id>REPOSITORY_DIRECTORY</id>\n          <enabled>Y</enabled>\n          <name>REPOSITORY_DIRECTORY</name>\n        </field>\n        <field>\n          <id>FILENAME</id>\n          <enabled>Y</enabled>\n          <name>FILENAME</name>\n        </field>\n        <field>\n          <id>OBJECT_ID</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_ID</name>\n        </field>\n        <field>\n          <id>OBJECT_REVISION</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_REVISION</name>\n        </field>\n        <field>\n          <id>PARENT_CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>PARENT_CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>ROOT_CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>ROOT_CHANNEL_ID</name>\n        </field>\n      </channel-log-table>\n<checkpoint-log-table><connection/>\n<schema/>\n<table/>\n<timeout_days/>\n<max_nr_retries/>\n<run_retry_period/>\n<namespace_parameter/>\n<save_parameters/>\n<save_result_rows/>\n<save_result_files/>\n        <field>\n          <id>ID_JOB_RUN</id>\n    
      <enabled>Y</enabled>\n          <name>ID_JOB_RUN</name>\n        </field>\n        <field>\n          <id>ID_JOB</id>\n          <enabled>Y</enabled>\n          <name>ID_JOB</name>\n        </field>\n        <field>\n          <id>JOBNAME</id>\n          <enabled>Y</enabled>\n          <name>JOBNAME</name>\n        </field>\n        <field>\n          <id>NAMESPACE</id>\n          <enabled>Y</enabled>\n          <name>NAMESPACE</name>\n        </field>\n        <field>\n          <id>CHECKPOINT_NAME</id>\n          <enabled>Y</enabled>\n          <name>CHECKPOINT_NAME</name>\n        </field>\n        <field>\n          <id>CHECKPOINT_COPYNR</id>\n          <enabled>Y</enabled>\n          <name>CHECKPOINT_COPYNR</name>\n        </field>\n        <field>\n          <id>ATTEMPT_NR</id>\n          <enabled>Y</enabled>\n          <name>ATTEMPT_NR</name>\n        </field>\n        <field>\n          <id>JOB_RUN_START_DATE</id>\n          <enabled>Y</enabled>\n          <name>JOB_RUN_START_DATE</name>\n        </field>\n        <field>\n          <id>LOGDATE</id>\n          <enabled>Y</enabled>\n          <name>LOGDATE</name>\n        </field>\n        <field>\n          <id>RESULT_XML</id>\n          <enabled>Y</enabled>\n          <name>RESULT_XML</name>\n        </field>\n        <field>\n          <id>PARAMETER_XML</id>\n          <enabled>Y</enabled>\n          <name>PARAMETER_XML</name>\n        </field>\n</checkpoint-log-table>\n   <pass_batchid>N</pass_batchid>\n   <shared_objects_file/>\n  <entries>\n    <entry>\n      <name>Pig Script Executor</name>\n      <description/>\n      <type>HadoopPigScriptExecutorPlugin</type>\n    <hdfs_hostname>localhost</hdfs_hostname>\n    <hdfs_port>8020</hdfs_port>\n    <jobtracker_hostname>localhost</jobtracker_hostname>\n    <jobtracker_port>8021</jobtracker_port>\n    <script_file>.&#x2f;samples&#x2f;jobs&#x2f;hadoop&#x2f;script1-local-mod.pig</script_file>\n    <enable_blocking>Y</enable_blocking>\n    
<local_execution>Y</local_execution>\n    <script_parameters>\n      <parameter>\n        <name>excite_small</name>\n        <value>.&#x2f;samples&#x2f;jobs&#x2f;hadoop&#x2f;excite-small.log</value>\n      </parameter>\n      <parameter>\n        <name>udf_jar</name>\n        <value>.&#x2f;samples&#x2f;jobs&#x2f;hadoop&#x2f;tutorial.jar</value>\n      </parameter>\n    </script_parameters>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>560</xloc>\n      <yloc>220</yloc>\n    </entry>\n    <entry>\n      <name>Success</name>\n      <description/>\n      <type>SUCCESS</type>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>730</xloc>\n      <yloc>220</yloc>\n    </entry>\n    <entry>\n      <name>START</name>\n      <description/>\n      <type>SPECIAL</type>\n      <start>Y</start>\n      <dummy>N</dummy>\n      <repeat>N</repeat>\n      <schedulerType>0</schedulerType>\n      <intervalSeconds>0</intervalSeconds>\n      <intervalMinutes>60</intervalMinutes>\n      <hour>12</hour>\n      <minutes>0</minutes>\n      <weekDay>1</weekDay>\n      <DayOfMonth>1</DayOfMonth>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>40</xloc>\n      <yloc>220</yloc>\n    </entry>\n    <entry>\n      <name>Clean Output</name>\n      <description/>\n      <type>DELETE_FOLDERS</type>\n      <arg_from_previous>N</arg_from_previous>\n      <success_condition>success_if_no_errors</success_condition>\n      <limit_folders>10</limit_folders>\n      <fields>\n        <field>\n          <name>.&#x2f;script1-local-results.txt</name>\n        </field>\n      </fields>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>400</xloc>\n      <yloc>220</yloc>\n    </entry>\n  </entries>\n  <hops>\n    <hop>\n      <from>Pig Script Executor</from>\n      <to>Success</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      
<evaluation>Y</evaluation>\n      <unconditional>N</unconditional>\n    </hop>\n    <hop>\n      <from>Clean Output</from>\n      <to>Pig Script Executor</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n    <hop>\n      <from>START</from>\n      <to>Clean Output</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n  </hops>\n  <notepads>\n    <notepad>\n      <note>Cleans the output directory.</note>\n      <xloc>360</xloc>\n      <yloc>180</yloc>\n      <width>102</width>\n      <heigth>20</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>This job runs the excite log processing example &#x28;script1&#x29; from the Pig tutorial &#x28;http&#x3a;&#x2f;&#x2f;wiki.apache.org&#x2f;pig&#x2f;PigTutorial&#x29; locally - i.e. 
no hadoop server needed&#xa;&#xa;</note>\n      <xloc>40</xloc>\n      <yloc>70</yloc>\n      <width>526</width>\n      <heigth>37</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>165</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n  </notepads>\n</job>\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/Pig Script Executor tutorial.kjb",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<job>\n  <name>Pig Script Executor tutorial</name>\n  <description/>\n  <extended_description/>\n  <job_version/>\n  <job_status>0</job_status>\n  <directory>&#x2f;</directory>\n  <created_user>-</created_user>\n  <created_date>2011&#x2f;08&#x2f;01 15&#x3a;02&#x3a;59.357</created_date>\n  <modified_user>-</modified_user>\n  <modified_date>2011&#x2f;08&#x2f;01 15&#x3a;02&#x3a;59.357</modified_date>\n    <parameters>\n    </parameters>\n    <slaveservers>\n    </slaveservers>\n      <job-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <size_limit_lines/>\n        <interval/>\n        <timeout_days/>\n        <field>\n          <id>ID_JOB</id>\n          <enabled>Y</enabled>\n          <name>ID_JOB</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>JOBNAME</id>\n          <enabled>Y</enabled>\n          <name>JOBNAME</name>\n        </field>\n        <field>\n          <id>STATUS</id>\n          <enabled>Y</enabled>\n          <name>STATUS</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          
<name>LINES_REJECTED</name>\n        </field>\n        <field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>STARTDATE</id>\n          <enabled>Y</enabled>\n          <name>STARTDATE</name>\n        </field>\n        <field>\n          <id>ENDDATE</id>\n          <enabled>Y</enabled>\n          <name>ENDDATE</name>\n        </field>\n        <field>\n          <id>LOGDATE</id>\n          <enabled>Y</enabled>\n          <name>LOGDATE</name>\n        </field>\n        <field>\n          <id>DEPDATE</id>\n          <enabled>Y</enabled>\n          <name>DEPDATE</name>\n        </field>\n        <field>\n          <id>REPLAYDATE</id>\n          <enabled>Y</enabled>\n          <name>REPLAYDATE</name>\n        </field>\n        <field>\n          <id>LOG_FIELD</id>\n          <enabled>Y</enabled>\n          <name>LOG_FIELD</name>\n        </field>\n        <field>\n          <id>EXECUTING_SERVER</id>\n          <enabled>N</enabled>\n          <name>EXECUTING_SERVER</name>\n        </field>\n        <field>\n          <id>EXECUTING_USER</id>\n          <enabled>N</enabled>\n          <name>EXECUTING_USER</name>\n        </field>\n        <field>\n          <id>START_JOB_ENTRY</id>\n          <enabled>N</enabled>\n          <name>START_JOB_ENTRY</name>\n        </field>\n        <field>\n          <id>CLIENT</id>\n          <enabled>N</enabled>\n          <name>CLIENT</name>\n        </field>\n      </job-log-table>\n      <jobentry-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          
<name>LOG_DATE</name>\n        </field>\n        <field>\n          <id>JOBNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          <id>JOBENTRYNAME</id>\n          <enabled>Y</enabled>\n          <name>STEPNAME</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n        </field>\n        <field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>RESULT</id>\n          <enabled>Y</enabled>\n          <name>RESULT</name>\n        </field>\n        <field>\n          <id>NR_RESULT_ROWS</id>\n          <enabled>Y</enabled>\n          <name>NR_RESULT_ROWS</name>\n        </field>\n        <field>\n          <id>NR_RESULT_FILES</id>\n          <enabled>Y</enabled>\n          <name>NR_RESULT_FILES</name>\n        </field>\n        <field>\n          <id>LOG_FIELD</id>\n          <enabled>N</enabled>\n          <name>LOG_FIELD</name>\n        </field>\n        <field>\n          <id>COPY_NR</id>\n          <enabled>N</enabled>\n          <name>COPY_NR</name>\n        </field>\n      </jobentry-log-table>\n      <channel-log-table>\n        <connection/>\n  
      <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        </field>\n        <field>\n          <id>LOGGING_OBJECT_TYPE</id>\n          <enabled>Y</enabled>\n          <name>LOGGING_OBJECT_TYPE</name>\n        </field>\n        <field>\n          <id>OBJECT_NAME</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_NAME</name>\n        </field>\n        <field>\n          <id>OBJECT_COPY</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_COPY</name>\n        </field>\n        <field>\n          <id>REPOSITORY_DIRECTORY</id>\n          <enabled>Y</enabled>\n          <name>REPOSITORY_DIRECTORY</name>\n        </field>\n        <field>\n          <id>FILENAME</id>\n          <enabled>Y</enabled>\n          <name>FILENAME</name>\n        </field>\n        <field>\n          <id>OBJECT_ID</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_ID</name>\n        </field>\n        <field>\n          <id>OBJECT_REVISION</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_REVISION</name>\n        </field>\n        <field>\n          <id>PARENT_CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>PARENT_CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>ROOT_CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>ROOT_CHANNEL_ID</name>\n        </field>\n      </channel-log-table>\n<checkpoint-log-table><connection/>\n<schema/>\n<table/>\n<timeout_days/>\n<max_nr_retries/>\n<run_retry_period/>\n<namespace_parameter/>\n<save_parameters/>\n<save_result_rows/>\n<save_result_files/>\n        <field>\n          <id>ID_JOB_RUN</id>\n    
      <enabled>Y</enabled>\n          <name>ID_JOB_RUN</name>\n        </field>\n        <field>\n          <id>ID_JOB</id>\n          <enabled>Y</enabled>\n          <name>ID_JOB</name>\n        </field>\n        <field>\n          <id>JOBNAME</id>\n          <enabled>Y</enabled>\n          <name>JOBNAME</name>\n        </field>\n        <field>\n          <id>NAMESPACE</id>\n          <enabled>Y</enabled>\n          <name>NAMESPACE</name>\n        </field>\n        <field>\n          <id>CHECKPOINT_NAME</id>\n          <enabled>Y</enabled>\n          <name>CHECKPOINT_NAME</name>\n        </field>\n        <field>\n          <id>CHECKPOINT_COPYNR</id>\n          <enabled>Y</enabled>\n          <name>CHECKPOINT_COPYNR</name>\n        </field>\n        <field>\n          <id>ATTEMPT_NR</id>\n          <enabled>Y</enabled>\n          <name>ATTEMPT_NR</name>\n        </field>\n        <field>\n          <id>JOB_RUN_START_DATE</id>\n          <enabled>Y</enabled>\n          <name>JOB_RUN_START_DATE</name>\n        </field>\n        <field>\n          <id>LOGDATE</id>\n          <enabled>Y</enabled>\n          <name>LOGDATE</name>\n        </field>\n        <field>\n          <id>RESULT_XML</id>\n          <enabled>Y</enabled>\n          <name>RESULT_XML</name>\n        </field>\n        <field>\n          <id>PARAMETER_XML</id>\n          <enabled>Y</enabled>\n          <name>PARAMETER_XML</name>\n        </field>\n</checkpoint-log-table>\n   <pass_batchid>N</pass_batchid>\n   <shared_objects_file/>\n  <entries>\n    <entry>\n      <name>Pig Script Executor</name>\n      <description/>\n      <type>HadoopPigScriptExecutorPlugin</type>\n    <hdfs_hostname>hadoop-server</hdfs_hostname>\n    <hdfs_port>8020</hdfs_port>\n    <jobtracker_hostname>hadoop-server</jobtracker_hostname>\n    <jobtracker_port>8021</jobtracker_port>\n    <script_file>.&#x2f;samples&#x2f;jobs&#x2f;hadoop&#x2f;script1-hadoop-mod.pig</script_file>\n    <enable_blocking>Y</enable_blocking>\n    
<local_execution>N</local_execution>\n    <script_parameters>\n      <parameter>\n        <name>udf_jar</name>\n        <value>.&#x2f;samples&#x2f;jobs&#x2f;hadoop&#x2f;tutorial.jar</value>\n      </parameter>\n    </script_parameters>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>560</xloc>\n      <yloc>220</yloc>\n    </entry>\n    <entry>\n      <name>Success</name>\n      <description/>\n      <type>SUCCESS</type>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>730</xloc>\n      <yloc>220</yloc>\n    </entry>\n    <entry>\n      <name>START</name>\n      <description/>\n      <type>SPECIAL</type>\n      <start>Y</start>\n      <dummy>N</dummy>\n      <repeat>N</repeat>\n      <schedulerType>0</schedulerType>\n      <intervalSeconds>0</intervalSeconds>\n      <intervalMinutes>60</intervalMinutes>\n      <hour>12</hour>\n      <minutes>0</minutes>\n      <weekDay>1</weekDay>\n      <DayOfMonth>1</DayOfMonth>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>40</xloc>\n      <yloc>220</yloc>\n    </entry>\n    <entry>\n      <name>Hadoop Copy Files</name>\n      <description/>\n      <type>HadoopCopyFilesPlugin</type>\n      <copy_empty_folders>Y</copy_empty_folders>\n      <arg_from_previous>N</arg_from_previous>\n      <overwrite_files>N</overwrite_files>\n      <include_subfolders>N</include_subfolders>\n      <remove_source_files>N</remove_source_files>\n      <add_result_filesname>N</add_result_filesname>\n      <destination_is_a_file>N</destination_is_a_file>\n      <create_destination_folder>N</create_destination_folder>\n      <fields>\n        <field>\n          <source_filefolder>.&#x2f;samples&#x2f;jobs&#x2f;hadoop&#x2f;excite.log.bz2</source_filefolder>\n          <source_configuration_name/>\n          <destination_filefolder>hdfs&#x3a;&#x2f;&#x2f;hadoop-server&#x3a;8020&#x2f;.</destination_filefolder>\n          <destination_configuration_name/>\n 
         <wildcard/>\n        </field>\n      </fields>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>220</xloc>\n      <yloc>220</yloc>\n    </entry>\n    <entry>\n      <name>Clean Output</name>\n      <description/>\n      <type>DELETE_FOLDERS</type>\n      <arg_from_previous>N</arg_from_previous>\n      <success_condition>success_if_no_errors</success_condition>\n      <limit_folders>10</limit_folders>\n      <fields>\n        <field>\n          <name>hdfs&#x3a;&#x2f;&#x2f;hadoop-server&#x3a;8020&#x2f;script1-hadoop-results</name>\n        </field>\n      </fields>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>400</xloc>\n      <yloc>220</yloc>\n    </entry>\n  </entries>\n  <hops>\n    <hop>\n      <from>Pig Script Executor</from>\n      <to>Success</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>N</unconditional>\n    </hop>\n    <hop>\n      <from>START</from>\n      <to>Hadoop Copy Files</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n    <hop>\n      <from>Hadoop Copy Files</from>\n      <to>Clean Output</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n    <hop>\n      <from>Clean Output</from>\n      <to>Pig Script Executor</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n  </hops>\n  <notepads>\n    <notepad>\n      <note>Copies the excite log file to hdfs.</note>\n      <xloc>180</xloc>\n      <yloc>180</yloc>\n      <width>118</width>\n      <heigth>20</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      
<fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>Cleans the output directory.</note>\n      <xloc>360</xloc>\n      <yloc>180</yloc>\n      <width>102</width>\n      <heigth>20</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>This job runs the excite log processing example &#x28;script1&#x29; from the Pig tutorial&#x3a; http&#x3a;&#x2f;&#x2f;wiki.apache.org&#x2f;pig&#x2f;PigTutorial&#xa;&#xa;SETUP INSTRUCTIONS&#x3a;&#xa;1. Update the HDFS path within the &#x27;Hadoop Copy Files&#x27; and &#x27;Clean Output&#x27; step to match your Hadoop server location and path to where you intend to generate output&#xa;2. 
Update the &#x27;Pig Script Executor&#x27; step with the names and ports of the host&#x28;s&#x29; running your HDFS name node and Job Tracker&#xa;</note>\n      <xloc>30</xloc>\n      <yloc>30</yloc>\n      <width>565</width>\n      <heigth>64</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>165</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n  </notepads>\n</job>\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/emr_job.kjb",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<job>\n  <name>emr_job</name>\n    <description/>\n    <extended_description/>\n    <job_version/>\n  <directory>&#47;</directory>\n  <created_user>-</created_user>\n  <created_date>2010&#47;09&#47;05 20:16:13.032</created_date>\n  <modified_user>-</modified_user>\n  <modified_date>2010&#47;09&#47;05 20:16:13.032</modified_date>\n    <parameters>\n    </parameters>\n    <slaveservers>\n    </slaveservers>\n<job-log-table><connection/>\n<schema/>\n<table/>\n<size_limit_lines/>\n<interval/>\n<timeout_days/>\n<field><id>ID_JOB</id><enabled>Y</enabled><name>ID_JOB</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>JOBNAME</id><enabled>Y</enabled><name>JOBNAME</name></field><field><id>STATUS</id><enabled>Y</enabled><name>STATUS</name></field><field><id>LINES_READ</id><enabled>Y</enabled><name>LINES_READ</name></field><field><id>LINES_WRITTEN</id><enabled>Y</enabled><name>LINES_WRITTEN</name></field><field><id>LINES_UPDATED</id><enabled>Y</enabled><name>LINES_UPDATED</name></field><field><id>LINES_INPUT</id><enabled>Y</enabled><name>LINES_INPUT</name></field><field><id>LINES_OUTPUT</id><enabled>Y</enabled><name>LINES_OUTPUT</name></field><field><id>LINES_REJECTED</id><enabled>Y</enabled><name>LINES_REJECTED</name></field><field><id>ERRORS</id><enabled>Y</enabled><name>ERRORS</name></field><field><id>STARTDATE</id><enabled>Y</enabled><name>STARTDATE</name></field><field><id>ENDDATE</id><enabled>Y</enabled><name>ENDDATE</name></field><field><id>LOGDATE</id><enabled>Y</enabled><name>LOGDATE</name></field><field><id>DEPDATE</id><enabled>Y</enabled><name>DEPDATE</name></field><field><id>REPLAYDATE</id><enabled>Y</enabled><name>REPLAYDATE</name></field><field><id>LOG_FIELD</id><enabled>Y</enabled><name>LOG_FIELD</name></field></job-log-table>\n<jobentry-log-table><connection/>\n<schema/>\n<table/>\n<timeout_days/>\n<field><id>ID_BATCH</id><enabled>Y</enabled><name>ID_B
ATCH</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>LOG_DATE</id><enabled>Y</enabled><name>LOG_DATE</name></field><field><id>JOBNAME</id><enabled>Y</enabled><name>TRANSNAME</name></field><field><id>JOBENTRYNAME</id><enabled>Y</enabled><name>STEPNAME</name></field><field><id>LINES_READ</id><enabled>Y</enabled><name>LINES_READ</name></field><field><id>LINES_WRITTEN</id><enabled>Y</enabled><name>LINES_WRITTEN</name></field><field><id>LINES_UPDATED</id><enabled>Y</enabled><name>LINES_UPDATED</name></field><field><id>LINES_INPUT</id><enabled>Y</enabled><name>LINES_INPUT</name></field><field><id>LINES_OUTPUT</id><enabled>Y</enabled><name>LINES_OUTPUT</name></field><field><id>LINES_REJECTED</id><enabled>Y</enabled><name>LINES_REJECTED</name></field><field><id>ERRORS</id><enabled>Y</enabled><name>ERRORS</name></field><field><id>RESULT</id><enabled>Y</enabled><name>RESULT</name></field><field><id>NR_RESULT_ROWS</id><enabled>Y</enabled><name>NR_RESULT_ROWS</name></field><field><id>NR_RESULT_FILES</id><enabled>Y</enabled><name>NR_RESULT_FILES</name></field><field><id>LOG_FIELD</id><enabled>N</enabled><name>LOG_FIELD</name></field></jobentry-log-table>\n<channel-log-table><connection/>\n<schema/>\n<table/>\n<timeout_days/>\n<field><id>ID_BATCH</id><enabled>Y</enabled><name>ID_BATCH</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>LOG_DATE</id><enabled>Y</enabled><name>LOG_DATE</name></field><field><id>LOGGING_OBJECT_TYPE</id><enabled>Y</enabled><name>LOGGING_OBJECT_TYPE</name></field><field><id>OBJECT_NAME</id><enabled>Y</enabled><name>OBJECT_NAME</name></field><field><id>OBJECT_COPY</id><enabled>Y</enabled><name>OBJECT_COPY</name></field><field><id>REPOSITORY_DIRECTORY</id><enabled>Y</enabled><name>REPOSITORY_DIRECTORY</name></field><field><id>FILENAME</id><enabled>Y</enabled><name>FILENAME</name></field><field><id>OBJECT_ID</id><enabled>Y</enabled><name>OBJECT_ID</name>
</field><field><id>OBJECT_REVISION</id><enabled>Y</enabled><name>OBJECT_REVISION</name></field><field><id>PARENT_CHANNEL_ID</id><enabled>Y</enabled><name>PARENT_CHANNEL_ID</name></field><field><id>ROOT_CHANNEL_ID</id><enabled>Y</enabled><name>ROOT_CHANNEL_ID</name></field></channel-log-table>\n   <pass_batchid>N</pass_batchid>\n   <shared_objects_file/>\n  <entries>\n    <entry>\n      <name>Amazon EMR Job Executor</name>\n      <description/>\n      <type>EMRJobExecutorPlugin</type>\n      <hadoop_job_name>emr job executor3</hadoop_job_name>\n      <jar_url>.&#47;samples&#47;jobs&#47;hadoop&#47;wordcount.jar</jar_url>\n      <access_key></access_key>\n      <secret_key></secret_key>\n      <staging_dir>s3:&#47;&#47;s3&#47;&lt;bucket_name></staging_dir>\n      <num_instances>1</num_instances>\n      <master_instance_type></master_instance_type>\n      <slave_instance_type></slave_instance_type>\n      <command_line_args>--input s3:&#47;&#47;mddwordcount&#47;input --output s3:&#47;&#47;mddwordcount&#47;output</command_line_args>\n      <blocking>Y</blocking>\n      <logging_interval>15</logging_interval>\n      <hadoop_job_name>emr job executor3</hadoop_job_name>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>274</xloc>\n      <yloc>66</yloc>\n      </entry>\n    <entry>\n      <name>START</name>\n      <description/>\n      <type>SPECIAL</type>\n      <start>Y</start>\n      <dummy>N</dummy>\n      <repeat>N</repeat>\n      <schedulerType>0</schedulerType>\n      <intervalSeconds>0</intervalSeconds>\n      <intervalMinutes>60</intervalMinutes>\n      <hour>12</hour>\n      <minutes>0</minutes>\n      <weekDay>1</weekDay>\n      <DayOfMonth>1</DayOfMonth>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n      <nr>0</nr>\n      <xloc>66</xloc>\n      <yloc>66</yloc>\n      </entry>\n    <entry>\n      <name>Success</name>\n      <description/>\n      <type>SUCCESS</type>\n      <parallel>N</parallel>\n      <draw>Y</draw>\n     
 <nr>0</nr>\n      <xloc>457</xloc>\n      <yloc>66</yloc>\n      </entry>\n  </entries>\n  <hops>\n    <hop>\n      <from>START</from>\n      <to>Amazon EMR Job Executor</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>Y</unconditional>\n    </hop>\n    <hop>\n      <from>Amazon EMR Job Executor</from>\n      <to>Success</to>\n      <from_nr>0</from_nr>\n      <to_nr>0</to_nr>\n      <enabled>Y</enabled>\n      <evaluation>Y</evaluation>\n      <unconditional>N</unconditional>\n    </hop>\n  </hops>\n  <notepads>\n  </notepads>\n</job>\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/excite-small.log",
    "content": "2A9EABFB35F5B954\t970916105432\t+md foods +proteins\nBED75271605EBD0C\t970916001949\tyahoo chat\nBED75271605EBD0C\t970916001954\tyahoo chat\nBED75271605EBD0C\t970916003523\tyahoo chat\nBED75271605EBD0C\t970916011322\tyahoo search\nBED75271605EBD0C\t970916011404\tyahoo chat\nBED75271605EBD0C\t970916011422\tyahoo chat\nBED75271605EBD0C\t970916012756\tyahoo caht\nBED75271605EBD0C\t970916012816\tyahoo chat\nBED75271605EBD0C\t970916023603\tyahoo chat\nBED75271605EBD0C\t970916025458\tyahoo caht\nBED75271605EBD0C\t970916025516\tyahoo chat\nBED75271605EBD0C\t970916030348\tyahoo chat\nBED75271605EBD0C\t970916034807\tyahoo chat\nBED75271605EBD0C\t970916040755\tyahoo chat\nBED75271605EBD0C\t970916090700\thawaii chat universe\nBED75271605EBD0C\t970916094445\tyahoo chat\nBED75271605EBD0C\t970916191427\tyahoo chat\nBED75271605EBD0C\t970916201045\tyahoo chat\nBED75271605EBD0C\t970916201050\tyahoo chat\nBED75271605EBD0C\t970916201927\tyahoo chat\n824F413FA37520BF\t970916184809\tgarter belts\n824F413FA37520BF\t970916184818\tgarter belts\n824F413FA37520BF\t970916184939\tlingerie\n824F413FA37520BF\t970916185051\tspiderman\n824F413FA37520BF\t970916185155\ttommy hilfiger\n824F413FA37520BF\t970916185257\tcalgary\n824F413FA37520BF\t970916185513\tcalgary\n824F413FA37520BF\t970916185605\texhibitionists\n824F413FA37520BF\t970916190220\texhibitionists\n824F413FA37520BF\t970916191233\texhibitionists\n7A8D9CFC957C7FCA\t970916064707\tduron paint\n7A8D9CFC957C7FCA\t970916064731\tduron paint\nA25C8C765238184A\t970916103534\tbrookings\nA25C8C765238184A\t970916104751\tbreton liberation front\nA25C8C765238184A\t970916105238\tbreton \nA25C8C765238184A\t970916105322\tbreton liberation front\nA25C8C765238184A\t970916105539\tbreton \nA25C8C765238184A\t970916105628\tbreton \nA25C8C765238184A\t970916105723\tfront de liberation de la bretagne\nA25C8C765238184A\t970916105857\tfront de liberation de la 
bretagne\n6589F4342B215FD4\t970916125147\tafghanistan\n6589F4342B215FD4\t970916125158\tafghanistan\n6589F4342B215FD4\t970916125407\tafghanistan\n16DE160B4FFE3B85\t970916050356\teia rs232-c\n16DE160B4FFE3B85\t970916050645\tnullmodem\n16DE160B4FFE3B85\t970916050807\tnullmodem\n563FC9A7E8A9022A\t970916042826\torganizational chart\n563FC9A7E8A9022A\t970916221456\torganizational chart of uae's companies\n563FC9A7E8A9022A\t970916221611\torganizational chart of dubai dutiy free\n563FC9A7E8A9022A\t970916221717\torganizational charts in dubai\n73D74648CC2CA35E\t970916122841\tsalvia trip reports\n893C3ADD0EFBBECB\t970916074735\tfleetwood mac\n893C3ADD0EFBBECB\t970916074811\tfleetwood mac\n893C3ADD0EFBBECB\t970916074828\t\n893C3ADD0EFBBECB\t970916074912\t\n893C3ADD0EFBBECB\t970916074932\t\n893C3ADD0EFBBECB\t970916074936\t\n893C3ADD0EFBBECB\t970916074958\t\n893C3ADD0EFBBECB\t970916075000\t\n893C3ADD0EFBBECB\t970916075021\t\n893C3ADD0EFBBECB\t970916075042\t\n7B99756B742D8E89\t970916070712\tlionking\n7B99756B742D8E89\t970916070724\tlionking\n7B99756B742D8E89\t970916072057\tlionking\n7B99756B742D8E89\t970916072058\tlionking\n7B99756B742D8E89\t970916072351\tlionking\n736D28D439E9FE2D\t970916130425\tzelle and larson\nFB3EA7AB8B51C95D\t970916105346\thifly\nFB3EA7AB8B51C95D\t970916105408\thifly\n729B745475893F44\t970916025053\tearth pictures\n729B745475893F44\t970916025113\tearth pictures planet \n575DC994A9CF8D36\t970916110752\tevgeny kissin\n575DC994A9CF8D36\t970916110817\tevgeny kissin yevgeny\n575DC994A9CF8D36\t970916113040\tevgeny kissin yevgeny\n575DC994A9CF8D36\t970916113057\t\n575DC994A9CF8D36\t970916113423\t\n575DC994A9CF8D36\t970916113453\t\n575DC994A9CF8D36\t970916113556\t\n575DC994A9CF8D36\t970916113705\t\n575DC994A9CF8D36\t970916113742\t\n575DC994A9CF8D36\t970916113955\tevgeny kissin yevgeny\n39BF6DD421B71387\t970916171516\thouse o fun\n39BF6DD421B71387\t970916171606\thouse o fun\n519AC93F468A4BF4\t970916232453\tann nicole smith\n519AC93F468A4BF4\t970916232544\tpamela 
anderson\n81CC31A8588135F2\t970916102352\tlooney tunes\n81CC31A8588135F2\t970916102645\tlooney tunes daffy toons \nFFCA848089F3BA8C\t970916100905\tmarilyn manson\n7AE07E7F0053F0A9\t970916194214\tgis digitizing\nC1C4228EA191F401\t970916082442\t\"bentley's luggage\"\nC1C4228EA191F401\t970916082516\tluggage\nC1C4228EA191F401\t970916083327\t\"tumi luggage\"\nC1C4228EA191F401\t970916083849\t\"tumi\"\n7C60C0A2EBF7A3E7\t970916012828\tnorthwestern university\n7C60C0A2EBF7A3E7\t970916234912\tgangs\n9EAF527F15CABB79\t970916074242\t�\nBCD90B7247D8FC7C\t970916195153\tcar rental companies\nAF7BBE7E92E62D6E\t970916123811\tnew jersey resources\nAF7BBE7E92E62D6E\t970916203918\tnew jersey resources\n320475401250000D\t970916080124\tnorthwest+ airline\n2C7FDB7C4C41215D\t970916184521\tcrowell and weedon\n2C7FDB7C4C41215D\t970916185017\tcrowell and weedon\n11971F6093701B3C\t970916093302\teconomia torino\n11971F6093701B3C\t970916093413\tuniversita di torino\n11971F6093701B3C\t970916093506\tandrea belratti\n11971F6093701B3C\t970916093524\tandrea beltratti\n11971F6093701B3C\t970916093810\thonig\n11971F6093701B3C\t970916093903\tcar hommes\n11971F6093701B3C\t970916095055\twinzip\n11971F6093701B3C\t970916095506\tmartin lettau\n11971F6093701B3C\t970916095605\tpatrick de fontnouvelle\n11971F6093701B3C\t970916095633\tde fontnouvelle\n11971F6093701B3C\t970916095758\tghysels\n11971F6093701B3C\t970916132437\ticon librarian\n11971F6093701B3C\t970916132922\ticon librarian\nE84833C4A26D6818\t970916202849\tnature's pet marketplace\nE84833C4A26D6818\t970916204115\ttop drawer\nE84833C4A26D6818\t970916204218\ttopdrawer\nE84833C4A26D6818\t970916214604\ttim erickson\n54E8C79987B6F2F3\t970916064158\tquiz\n54E8C79987B6F2F3\t970916064229\tquiz\n54E8C79987B6F2F3\t970916064305\tpaper clay\n54E8C79987B6F2F3\t970916064854\tbath and body 
works\n54E8C79987B6F2F3\t970916065614\tpregnancy\n54E8C79987B6F2F3\t970916074831\tpregnancy\n54E8C79987B6F2F3\t970916075153\tpregnancy\n54E8C79987B6F2F3\t970916075210\tpregnancy\n54E8C79987B6F2F3\t970916075236\tpregnancy\n54E8C79987B6F2F3\t970916075302\tpregnancy pregnant \n54E8C79987B6F2F3\t970916080128\tpregnancy pregnant \n54E8C79987B6F2F3\t970916080202\tpregnancy pregnant \n54E8C79987B6F2F3\t970916080240\tconceiving\n54E8C79987B6F2F3\t970916080404\tconceiving\n54E8C79987B6F2F3\t970916080455\tconceiving\n54E8C79987B6F2F3\t970916081215\tconceiving\n54E8C79987B6F2F3\t970916081303\tconceiving conceive \n54E8C79987B6F2F3\t970916081324\tconceiving conceive \n54E8C79987B6F2F3\t970916081416\tconceiving conceive \n54E8C79987B6F2F3\t970916081511\tconceiving conceive \n54E8C79987B6F2F3\t970916214401\tbaby\n54E8C79987B6F2F3\t970916214732\tbaby\n54E8C79987B6F2F3\t970916214921\tpregnant\n54E8C79987B6F2F3\t970916215423\tpregnant\n54E8C79987B6F2F3\t970916220045\tpregnant\n54E8C79987B6F2F3\t970916220458\tpregnant\n54E8C79987B6F2F3\t970916220600\tpregnant\n54E8C79987B6F2F3\t970916220637\tpregnant\n54E8C79987B6F2F3\t970916220842\tpregnant\n54E8C79987B6F2F3\t970916221032\tpregnant\n54E8C79987B6F2F3\t970916221119\tpregnant\n54E8C79987B6F2F3\t970916221147\tpregnant\n54E8C79987B6F2F3\t970916221245\tpregnant\n54E8C79987B6F2F3\t970916221339\tpregnant\n54E8C79987B6F2F3\t970916221607\tpregnant\n54E8C79987B6F2F3\t970916221631\tpregnant\n54E8C79987B6F2F3\t970916221832\tpregnant\n54E8C79987B6F2F3\t970916222232\tpregnant\n54E8C79987B6F2F3\t970916222621\tpregnant\n54E8C79987B6F2F3\t970916222716\tpregnant\n54E8C79987B6F2F3\t970916222754\tpregnant\n54E8C79987B6F2F3\t970916222833\tpregnant\n54E8C79987B6F2F3\t970916222858\tpregnant pregnancy \n54E8C79987B6F2F3\t970916222938\tpregnant pregnancy \n54E8C79987B6F2F3\t970916223107\tpregnant pregnancy \n54E8C79987B6F2F3\t970916223708\tpregnant pregnancy \n3F8AAC2372F6941C\t970916091301\tbac\n3F8AAC2372F6941C\t970916091354\tblood alcohol 
content\n3F8AAC2372F6941C\t970916091425\t\n3F8AAC2372F6941C\t970916091545\t\n3F8AAC2372F6941C\t970916093448\t\n3F8AAC2372F6941C\t970916093544\tbreathalizers\n3F8AAC2372F6941C\t970916093551\tbreathalizers\n3F8AAC2372F6941C\t970916093642\tbreathalizers\n3F8AAC2372F6941C\t970916093724\tminors in possesion\n3F8AAC2372F6941C\t970916093848\tminors in possesion\n3F8AAC2372F6941C\t970916093904\tmip\n3F8AAC2372F6941C\t970916093932\te. lansing laws\n3F8AAC2372F6941C\t970916094043\teast lansing laws\n3F8AAC2372F6941C\t970916094111\teast lansing laws\n3F8AAC2372F6941C\t970916185828\tbusiness homepage\n3F8AAC2372F6941C\t970916185908\tbusiness homepage\n3F8AAC2372F6941C\t970916185926\tintel homepage\n3F8AAC2372F6941C\t970916190005\tintel homepage\n3F8AAC2372F6941C\t970916190032\tintel homepage pentium processor \n3F8AAC2372F6941C\t970916191238\told psycological contract\n3F8AAC2372F6941C\t970916191308\t\"old psycological contract\"\n3F8AAC2372F6941C\t970916191358\t+old +psycological +contract\n3F8AAC2372F6941C\t970916191436\t+new+psycological +contract\n3F8AAC2372F6941C\t970916191443\t+new +psycological +contract\n3F8AAC2372F6941C\t970916191720\tg.m\n3F8AAC2372F6941C\t970916191735\tgeneral motors homepage\n3F8AAC2372F6941C\t970916191812\tgeneral motors +homepage\n3F8AAC2372F6941C\t970916191833\tford motor company\n3F8AAC2372F6941C\t970916192820\tassembly plant\n0FB5866123ADFDFF\t970916112103\tblackpool tourist information\n4B6A008308C6DE7E\t970916072324\tmagnetic strip\n4B6A008308C6DE7E\t970916072408\tauto lockout tools\n0F8EA93654516937\t970916144849\ttuxedo park\nD4DA409F40BB9102\t970916140853\tsan juan island\nD4DA409F40BB9102\t970916140855\tsan juan island\nD4DA409F40BB9102\t970916140950\tsan juan island\n01F6B9CA495576BA\t970916121358\tsalary canada\n01F6B9CA495576BA\t970916123855\tsalary canada\n01F6B9CA495576BA\t970916124000\tsalary canada\n01F6B9CA495576BA\t970916124134\tsalary canada\n01F6B9CA495576BA\t970916124134\tsalary canada\n01F6B9CA495576BA\t970916124156\tsalary 
canada\n01F6B9CA495576BA\t970916124432\tsalary canada\n7F88C9EC4CD0BB3A\t970916104228\tkcchief.com\n7F88C9EC4CD0BB3A\t970916104256\tkcchiefs.com\n7F88C9EC4CD0BB3A\t970916105714\tpittsburgh steelers\n7F88C9EC4CD0BB3A\t970916105729\tpittsburgh steelers\nE55487B7296ED015\t970916102620\tm�nchen AND hotel\nE55487B7296ED015\t970916102744\tm�nchen AND hotel\nE55487B7296ED015\t970916102853\tst. AND paul AND hotel\nE55487B7296ED015\t970916103125\tst. AND paul AND hotel\nE55487B7296ED015\t970916103450\tst. AND paul AND hotel\nE55487B7296ED015\t970916103614\tmenneapolis  AND hotel\nE55487B7296ED015\t970916103627\tminneapolis  AND hotel\n513FCC6548E3F36D\t970916153728\tstealth crash air show\nBB925FF85FF44849\t970916061832\tplant cell journal\nBB925FF85FF44849\t970916061841\tplant cell journal\n15165C4C19E63B0A\t970916201155\tnetmeeting\n15165C4C19E63B0A\t970916201320\tnetmeeting\n15165C4C19E63B0A\t970916201456\tnetmeeting msdownload \nB0274667D0A700A8\t970916131323\tfree games\nB0274667D0A700A8\t970916131746\tbingo zone\n514008DD5C88BB30\t970916112944\thome page proctology\n514008DD5C88BB30\t970916112952\thome page proctology\nDB0CC854B82A662C\t970916064732\talligator graphics\nDB0CC854B82A662C\t970916064751\talligator graphics\nDB0CC854B82A662C\t970916064824\talligator graphics\nDB0CC854B82A662C\t970916064847\talligator \n9A10B373FA529557\t970916070208\tgolf pro shops\n9A10B373FA529557\t970916131403\t\n9A10B373FA529557\t970916131404\t\n550D31C646EE3CFE\t970916155237\tmedicare, certificate of need\n550D31C646EE3CFE\t970916155315\tnew york times\n72270DEAFE0BF9FC\t970916002029\tcuhk\n72270DEAFE0BF9FC\t970916002038\tcuhk\n72270DEAFE0BF9FC\t970916002221\tfin6060a\n3F59FEC0AD9851A5\t970916053428\tveltins AND bier\n3F59FEC0AD9851A5\t970916053552\tveltins AND bier\n3F59FEC0AD9851A5\t970916053618\tveltins AND bier\n3F59FEC0AD9851A5\t970916083304\tdavid hare\n3F59FEC0AD9851A5\t970916083618\t\n3F59FEC0AD9851A5\t970916083745\t\n3F59FEC0AD9851A5\t970916083807\tplenty 
hare\n3F59FEC0AD9851A5\t970916083950\t\n3F59FEC0AD9851A5\t970916084207\tplenty hare\n3F59FEC0AD9851A5\t970916084340\tdavidhare\n3F59FEC0AD9851A5\t970916084929\tdavidhare\n3F59FEC0AD9851A5\t970916084949\thare plenty\n3F59FEC0AD9851A5\t970916085005\thare plenty david\n3F59FEC0AD9851A5\t970916085043\thare plenty david\n3F59FEC0AD9851A5\t970916091357\tdavid hare\n3F59FEC0AD9851A5\t970916091428\tre: hamill\n3F59FEC0AD9851A5\t970916091449\t\n3F59FEC0AD9851A5\t970916091459\t\n3F59FEC0AD9851A5\t970916091605\tre: hamill mark\n3F59FEC0AD9851A5\t970916091611\t\n3F59FEC0AD9851A5\t970916091635\t\n3F59FEC0AD9851A5\t970916091738\tfaq hamill re:\n3F59FEC0AD9851A5\t970916091803\tfaq hamill \n3F59FEC0AD9851A5\t970916091838\t\n3F59FEC0AD9851A5\t970916092544\tplays\n3F59FEC0AD9851A5\t970916092605\tplays plenty\n3F59FEC0AD9851A5\t970916092628\tplays plenty hare\n3F59FEC0AD9851A5\t970916092633\t\n3F59FEC0AD9851A5\t970916093104\ttheatre hare\n3F59FEC0AD9851A5\t970916093137\thamill\n3F59FEC0AD9851A5\t970916093154\thamill mark\n3F59FEC0AD9851A5\t970916095418\tre. hamill\n3F59FEC0AD9851A5\t970916095428\tre: hamill\n3F59FEC0AD9851A5\t970916095435\tre: hamill\n3F59FEC0AD9851A5\t970916095454\t\n3F59FEC0AD9851A5\t970916095459\t\n3F59FEC0AD9851A5\t970916095801\tdavid hare\n3F59FEC0AD9851A5\t970916095828\tmark hamill\nE131BFC55AF4CDCE\t970916200044\tc:windows\nE131BFC55AF4CDCE\t970916200609\tc:windows\n9CF1A20154759F8F\t970916091938\t\"university of texas at arlington\"\n4F3CA140D72441BC\t970916114733\t\"canadian conquest\"\n4F3CA140D72441BC\t970916114940\t\n4F3CA140D72441BC\t970916132635\teyelogic\n27D0C3A6B4BE62E5\t970916203809\tlibra\n27D0C3A6B4BE62E5\t970916204304\tsagitarius\n27D0C3A6B4BE62E5\t970916205125\thoroscopes\n349396224ECBDCBE\t970916153625\t\"denise fitzpatrick\"\n349396224ECBDCBE\t970916153747\t\"ann michael gilliam\"\n349396224ECBDCBE\t970916153815\t\"ann gilliam\"\n349396224ECBDCBE\t970916153910\t\"jerry noblin\"\n349396224ECBDCBE\t970916153949\t\"j. 
noblin\"\n66B377662547D14A\t970916065206\tdr. ronald ennis\n1A75CBE7DA62BD5F\t970916183012\tsugar ray guitar taps\n1A75CBE7DA62BD5F\t970916183025\tsugar ray guitar taps floored \nFCE735441720FBE8\t970916112750\t\"little people of america\"\nFCE735441720FBE8\t970916112802\t\"little people of america\"\nFCE735441720FBE8\t970916160435\t\"little people of america\"\nFCE735441720FBE8\t970916160449\t\"little people of america\"\nEC6E91864359DD8D\t970916164740\tmaytag\nEC6E91864359DD8D\t970916164827\tmaytag\nEC6E91864359DD8D\t970916165136\tmaytag\nEC6E91864359DD8D\t970916165243\tmaytag\nEC6E91864359DD8D\t970916165317\tmaytag\nEC6E91864359DD8D\t970916165335\tmaytag\nEC6E91864359DD8D\t970916171635\tmaytag\nEC6E91864359DD8D\t970916171744\tmaytag\nEC6E91864359DD8D\t970916171901\tmaytag\nEC6E91864359DD8D\t970916172001\tmaytag\nEC6E91864359DD8D\t970916172011\tmaytag\nEC6E91864359DD8D\t970916172031\tmaytag\nEC6E91864359DD8D\t970916172038\tmaytag\nEC6E91864359DD8D\t970916172128\tmaytag\nEC6E91864359DD8D\t970916172250\tmaytag\nEC6E91864359DD8D\t970916172403\tmaytag\nEC6E91864359DD8D\t970916172503\tmaytag\nEC6E91864359DD8D\t970916172503\tmaytag\nEC6E91864359DD8D\t970916172517\tmaytag\nEC6E91864359DD8D\t970916172526\tmaytag\nEC6E91864359DD8D\t970916172621\tmaytag\nEC6E91864359DD8D\t970916172625\tmaytag\nEC6E91864359DD8D\t970916172656\tmaytag\nEC6E91864359DD8D\t970916172657\tmaytag\nEC6E91864359DD8D\t970916172751\tmaytag\nEC6E91864359DD8D\t970916172754\tmaytag\nEC6E91864359DD8D\t970916172838\tmaytag\nEC6E91864359DD8D\t970916172842\tmaytag\nEC6E91864359DD8D\t970916172919\tmaytag\nEC6E91864359DD8D\t970916173015\tmaytag\nEC6E91864359DD8D\t970916173027\tmaytag\nEC6E91864359DD8D\t970916173050\tmaytag\nEC6E91864359DD8D\t970916173050\tmaytag\nEC6E91864359DD8D\t970916173111\tmaytag\nEC6E91864359DD8D\t970916173114\tmaytag\nEC6E91864359DD8D\t970916173253\tmaytag\nEC6E91864359DD8D\t970916173452\tmaytag\nEC6E91864359DD8D\t970916173454\tmaytag\nEC6E91864359DD8D\t970916173505\tmaytag\nEC6E91864359D
D8D\t970916173515\tmaytag\nEC6E91864359DD8D\t970916173542\tmaytag\nEC6E91864359DD8D\t970916173624\tcar\nEC6E91864359DD8D\t970916173711\tcar\nEC6E91864359DD8D\t970916173747\tcar\nEC6E91864359DD8D\t970916173858\tcar\nEC6E91864359DD8D\t970916173905\tcar\nEC6E91864359DD8D\t970916173910\tcar\nE0D12FA14991D2D9\t970916143958\tati technogies\nE0D12FA14991D2D9\t970916144028\tati technogies\nF559561E697722BB\t970916223640\tnoriko+sakai\n9912390F5E1D690F\t970916213048\tamsterdam\n9912390F5E1D690F\t970916213111\tamsterdam noord \n9912390F5E1D690F\t970916213225\tamsterdam noord \n8E1A8EA81FEA8A30\t970916001026\tstiler+skole\n8E1A8EA81FEA8A30\t970916002122\tskolestil\n8E1A8EA81FEA8A30\t970916060031\tlau tak wah\n8E1A8EA81FEA8A30\t970916060140\tlau tak wah\n383A51DC0D94C7F7\t970916182139\tbus schedule greyhound\n383A51DC0D94C7F7\t970916182147\tbus schedule greyhound\n383A51DC0D94C7F7\t970916183408\tbus schedule greyhound\n383A51DC0D94C7F7\t970916183438\tfullington bus schedule\nCBAEB52E28985C5E\t970916110525\tbmi publishing\n266C99B4834F4675\t970916124711\tprobate records\n266C99B4834F4675\t970916124737\t\n266C99B4834F4675\t970916124756\tprobate records county \n266C99B4834F4675\t970916125012\t\n9A33FFD53E103291\t970916073204\tshapeup\n8CDEE772A295AA02\t970916190227\tren faire\n8CDEE772A295AA02\t970916191951\tblackpoint navato\n8CDEE772A295AA02\t970916192030\tren faire\n8CDEE772A295AA02\t970916192500\tliving history center\n8CDEE772A295AA02\t970916192553\tliving history center ren\n6946DB5A812D6EFB\t970916072247\tcucumber AND daughter AND belly\nC68A35C476240F3D\t970916130001\tjacaranda AND radio\nC68A35C476240F3D\t970916131150\tjacaranda AND radio\nC68A35C476240F3D\t970916131223\thighveld stereo\nC68A35C476240F3D\t970916131256\thighveld AND stereo\nC68A35C476240F3D\t970916131420\thighveld AND stereo\nC68A35C476240F3D\t970916131429\thighveld AND stereo\nC68A35C476240F3D\t970916131433\thighveld AND stereo\nC68A35C476240F3D\t970916131435\thighveld AND 
stereo\nC68A35C476240F3D\t970916131439\thighveld AND stereo\nC68A35C476240F3D\t970916133355\tdenmark\n8B2065581C770F50\t970916101241\tmartha stuart\n8B2065581C770F50\t970916101438\tmartha stuart\n8B2065581C770F50\t970916101522\twww.martha stuart\n8B2065581C770F50\t970916101645\tmartha stuart\n4077443B5801F0C3\t970916164055\t\"garth brooks tickets\"\n4077443B5801F0C3\t970916164359\tgarth brooks tickets\n4077443B5801F0C3\t970916182623\tjob openings\n4077443B5801F0C3\t970916182752\tjob openings listings \n4077443B5801F0C3\t970916182823\tagricultural job listings\n4077443B5801F0C3\t970916182834\tagricultural job listings employment \n4077443B5801F0C3\t970916182942\tjob listings\n43A83F326ED8A531\t970916135831\tcoldwater creek\n449C383378F5C37D\t970916173829\ttelluride.co.\n449C383378F5C37D\t970916173905\ttelluride.co.\n449C383378F5C37D\t970916173932\ttelluride.co.\n414D8301A62279D8\t970916200349\tcasino\n2601DDA407398E5E\t970916064850\t\nADDF71B56E078EC1\t970916194616\tpentium ii 266 setup\nADDF71B56E078EC1\t970916194755\tpentium ii 266 setup problems\nADDF71B56E078EC1\t970916194829\tpentium ii 266 setup problems\nADDF71B56E078EC1\t970916194921\tpentium ii 266 setup problems\nADDF71B56E078EC1\t970916194943\tpentium ii 266 setup problems\nADDF71B56E078EC1\t970916195134\tpentium ii 266 setup problems\nADDF71B56E078EC1\t970916195232\tpentium ii 266 setup problems\nADDF71B56E078EC1\t970916195348\tpentium ii 266 setup problems\nADDF71B56E078EC1\t970916195408\tpentium ii 266 problems\nADDF71B56E078EC1\t970916195433\tpentium ii 266 problems\nADDF71B56E078EC1\t970916195511\tpentium ii 266 problems\nADDF71B56E078EC1\t970916195529\tpentium ii 266 problems\nADDF71B56E078EC1\t970916195551\tpentium ii 266 problems\nADDF71B56E078EC1\t970916195606\tpentium ii 266 problems\nADDF71B56E078EC1\t970916195626\tpentium ii 266 problems\nADDF71B56E078EC1\t970916195726\tpentium ii 266 problems\nADDF71B56E078EC1\t970916200042\tpentium ii 266 problems\nADDF71B56E078EC1\t970916200102\tpentium ii 
266 problems\n27F5BE2A36039395\t970916124539\trainforest art\n27F5BE2A36039395\t970916124606\trainforest,art\n27F5BE2A36039395\t970916124631\trainforest,art\n27F5BE2A36039395\t970916124652\trainforest,art\n27F5BE2A36039395\t970916124750\tart,rainforest\n27F5BE2A36039395\t970916125342\tart,rainforest\n27F5BE2A36039395\t970916125431\tart,science fiction\n69A46F05C734BF2F\t970916074141\tmeijiro\n69A46F05C734BF2F\t970916074223\t\"meijiro kogu\"\n69A46F05C734BF2F\t970916074256\ttokushima\n69A46F05C734BF2F\t970916074418\ttokushima\n69A46F05C734BF2F\t970916074504\ttokushima\n69A46F05C734BF2F\t970916074823\t\"bamboo in shikoku prefecture, japan\"\n69A46F05C734BF2F\t970916074844\t\"bamboo shikoku , japan\"\n69A46F05C734BF2F\t970916074900\t\"bamboo japan\"\nDAF7A3D38ED9A343\t970916033027\thiro\nDAF7A3D38ED9A343\t970916033138\thiro yamagata\nDAF7A3D38ED9A343\t970916033904\t\n0C48BBEE45E646AF\t970916072450\tclip art\n0C48BBEE45E646AF\t970916072532\tclip art globe\\\n0C48BBEE45E646AF\t970916072627\tclip art \n0C48BBEE45E646AF\t970916080404\tclip art\n0C48BBEE45E646AF\t970916125938\t\n0C48BBEE45E646AF\t970916144028\tgifted and talented adaptations\n2B737CAD0C4B125A\t970916145506\t\n2B737CAD0C4B125A\t970916145924\thttp://www.dcadnet.com/cum.html\n0567639EB8F3751C\t970916161410\t\"conan o'brien\"\n0567639EB8F3751C\t970916161413\t\"conan o'brien\"\nC771C1E3DF333CDC\t970916191209\tessays- why can't we just be friends, the female perspective\nC771C1E3DF333CDC\t970916191339\tessays\nC771C1E3DF333CDC\t970916192530\tessays aed \nC86AA16FFD90B66C\t970916035144\t\nC86AA16FFD90B66C\t970916035348\ttreatment cystic hygroma\nC86AA16FFD90B66C\t970916035433\ttreatment cystic hygroma picture\nC86AA16FFD90B66C\t970916035915\t\nC86AA16FFD90B66C\t970916040243\tcystic hygroma excision pictures\nC86AA16FFD90B66C\t970916040353\t\nC86AA16FFD90B66C\t970916040500\t\nC86AA16FFD90B66C\t970916040525\tcystic hygroma video pictures\nC86AA16FFD90B66C\t970916040559\tcystic hygroma clips 
\nC86AA16FFD90B66C\t970916040803\tcystic hygroma clips \nC86AA16FFD90B66C\t970916041014\tcystic hygroma after surgery \nC86AA16FFD90B66C\t970916041226\tplastic surgery cystic hygroma \nC86AA16FFD90B66C\t970916041430\tcystic hygroma photograph\nC86AA16FFD90B66C\t970916041516\taltavista\nA93156BD79F164A4\t970916140244\tleafs summary\nA93156BD79F164A4\t970916140343\trangers 3, leafs 2\nA93156BD79F164A4\t970916140428\trangers 3, leafs 2\nA93156BD79F164A4\t970916140546\t\nA93156BD79F164A4\t970916140723\t\nA93156BD79F164A4\t970916140809\tleafs summary for 09/16/97\nA93156BD79F164A4\t970916140849\trecent hockey summaries\nA93156BD79F164A4\t970916140930\trecent hockey summaries\nA93156BD79F164A4\t970916141017\ttornoto maple leafs boxscores\nA93156BD79F164A4\t970916141541\t\nA93156BD79F164A4\t970916141716\tleafs summary for last night\nA93156BD79F164A4\t970916141745\tpreseason leafs summary \nA93156BD79F164A4\t970916141803\tpreseason leafs summary \nA93156BD79F164A4\t970916141835\tpreseason leafs summary \nA93156BD79F164A4\t970916141900\ttoronto star\n2833FEAF16BCF190\t970916071424\t\"pacificnet\" AND webtalk\n2833FEAF16BCF190\t970916071556\t\"pacificnet.com\" AND webtalk\n2833FEAF16BCF190\t970916071635\t198.316.217.831/webtalk.html\n201490742D23909B\t970916191353\t\"adult kiss data\"\n201490742D23909B\t970916191442\t\"kiss data\"\n201490742D23909B\t970916192154\t\"kiss data\"\n201490742D23909B\t970916192803\t\"kiss data\"\n201490742D23909B\t970916192920\t\"kiss data\"\nC989A6531FD9EEC8\t970916063105\tmhsaa\nC989A6531FD9EEC8\t970916063336\tmhsaa\nC989A6531FD9EEC8\t970916063400\tmhsaa\nC989A6531FD9EEC8\t970916063413\tmhsaa\nC989A6531FD9EEC8\t970916063423\tmhsaa\nC989A6531FD9EEC8\t970916063434\tmhsaa\nC989A6531FD9EEC8\t970916063452\tmhsaa\nC989A6531FD9EEC8\t970916063502\tmhsaa\nC989A6531FD9EEC8\t970916063511\tmhsaa\nA127C018E4812A29\t970916015816\tfrance\nA127C018E4812A29\t970916015922\tfrench quotidiennement 
\n645C8DD38C387A92\t970916144421\twww.emu.com\n645C8DD38C387A92\t970916144723\twww.emu.com\n645C8DD38C387A92\t970916144752\twww.emu.com\n645C8DD38C387A92\t970916144820\twww.emu.com\n645C8DD38C387A92\t970916144922\twww.emu.com\n645C8DD38C387A92\t970916145006\twww.emu.com\n645C8DD38C387A92\t970916145100\twww.emu.com\n645C8DD38C387A92\t970916145137\twww.emu.com\n645C8DD38C387A92\t970916145314\t\n645C8DD38C387A92\t970916145459\t\n645C8DD38C387A92\t970916145525\twww.emu.com\n645C8DD38C387A92\t970916145550\twww.emu.com\n645C8DD38C387A92\t970916145728\twww.emu.com\nDA94D4B5A7C0D1AF\t970916154114\twilshire financial services\nDA94D4B5A7C0D1AF\t970916154142\twilshire financial services group\nDA94D4B5A7C0D1AF\t970916154245\twilshire financial services group companies \nAA716408D075660C\t970916195520\tadmiral krag\n719CF3C90004051C\t970916114344\tsouthern-domains\n0DB3873516AE57F7\t970916034537\tmetalica\n9DB263190BB17AC2\t970916235345\tapple.com\n9DB263190BB17AC2\t970916235714\tapple.com\n9DB263190BB17AC2\t970916235751\tapple.com movies \n9DB263190BB17AC2\t970916235820\tmovies \nC0BD480632F27E58\t970916111049\tnewsnet\nC0BD480632F27E58\t970916112445\tusenet\nC0BD480632F27E58\t970916113157\tbbs\nC0BD480632F27E58\t970916114716\tbbs\nC0BD480632F27E58\t970916115317\tusenet newsgroups\n59C873BBBA8998BA\t970916064758\tjerusalem post\n59C873BBBA8998BA\t970916064817\tjerusalem post\n59C873BBBA8998BA\t970916064946\tjerusalem post newspaper\n59C873BBBA8998BA\t970916065100\tjerusalem post newspaper\n59C873BBBA8998BA\t970916112015\twebcrawler\nD9142519595FF9D1\t970916105724\tyahoo\nD9142519595FF9D1\t970916105813\tyahoo\n1E6F6DBF634461FA\t970916180926\tcar audio\n1E6F6DBF634461FA\t970916181745\tphoenix gold\n1E6F6DBF634461FA\t970916181811\tphoenix gold\n1E6F6DBF634461FA\t970916181851\tphoenix gold\n1E6F6DBF634461FA\t970916181914\tclarion car audio\n1E6F6DBF634461FA\t970916182518\tclarion car audio\n1E6F6DBF634461FA\t970916182528\tclarion car 
audio\n1E6F6DBF634461FA\t970916182534\tclarion car audio\n1E6F6DBF634461FA\t970916182544\tclarion car audio\n1E6F6DBF634461FA\t970916182553\tclarion car audio\n98825190824FBCEC\t970916103412\tsatellite pictures\n98825190824FBCEC\t970916103418\tsatellite pictures\n98825190824FBCEC\t970916110033\tsatellite pictures\n98825190824FBCEC\t970916110036\tsatellite pictures\n98825190824FBCEC\t970916110140\tdublin maps\n98825190824FBCEC\t970916110251\tsatellite pictures london\n98825190824FBCEC\t970916110558\tsatellite pictures london\n33E7C94098B1796F\t970916012725\tport douglas\n33E7C94098B1796F\t970916012802\tport douglas\n33E7C94098B1796F\t970916012834\tport douglas\n33E7C94098B1796F\t970916012855\tport douglas\n33E7C94098B1796F\t970916013653\thairy\n33E7C94098B1796F\t970916013801\thairy\n33E7C94098B1796F\t970916014006\thairy\nC0916429A59CE5A1\t970916060012\tcalibration\nC0916429A59CE5A1\t970916061342\tcalibration AND equipment\nC0916429A59CE5A1\t970916061433\tcalibration AND equipment\nC0916429A59CE5A1\t970916061508\tcalibration AND equipment AND testing\n8A7BC9076D6F166F\t970916113432\tnews, europe, netherlands\n8A7BC9076D6F166F\t970916113512\tnews, europe, netherlands\n8A7BC9076D6F166F\t970916113748\tnews, europe, netherlands benelux \n8A7BC9076D6F166F\t970916113816\tnews, europe, netherlands \nF19ED8F44663520A\t970916090752\tsystem search\nF19ED8F44663520A\t970916102316\tmail spy\nF19ED8F44663520A\t970916102339\tmailspy\nF19ED8F44663520A\t970916102350\tmailspy\n4091BCDFBF33197A\t970916223632\tlovers\n4091BCDFBF33197A\t970916223912\tsun\n4091BCDFBF33197A\t970916224414\tnaturism\n8AFBE95F88FA5C99\t970916085553\tthe cranes are flying\n8AFBE95F88FA5C99\t970916085612\t\n8AFBE95F88FA5C99\t970916090003\t\n8FE55B4D65B22166\t970916052821\tespn\n78EC0A6026552159\t970916160144\tpocsag\n78EC0A6026552159\t970916160215\tpocsag\nF268B329129FEA09\t970916103147\temergency medicine\nF268B329129FEA09\t970916103239\temergency medicine organizations 
\n122A31FB8B9FC6EA\t970916082347\tprinters laserjet \n122A31FB8B9FC6EA\t970916082355\tprinters laserjet hp \n122A31FB8B9FC6EA\t970916082403\tprinters laserjet hp \n122A31FB8B9FC6EA\t970916082416\tprinters laserjet hp printer \n122A31FB8B9FC6EA\t970916082429\tprinters laserjet hp printer inkjet \n122A31FB8B9FC6EA\t970916082445\tprinters laserjet hp printer \n122A31FB8B9FC6EA\t970916082458\tprinters laserjet hp printer \n122A31FB8B9FC6EA\t970916082512\thp laserjet printers\n122A31FB8B9FC6EA\t970916082524\thp laserjet printers deskjet \n122A31FB8B9FC6EA\t970916082532\thp laserjet printers deskjet printer \n122A31FB8B9FC6EA\t970916082536\thp laserjet printers deskjet printer \n122A31FB8B9FC6EA\t970916133144\tchat\nB9922F32F8DD2511\t970916170034\tmontelambert\nDB49308A76F8A6C4\t970916021833\tfiskars\nDB49308A76F8A6C4\t970916022144\tfiskars\nDB49308A76F8A6C4\t970916022357\t\nDB49308A76F8A6C4\t970916022812\t+fiskars +ups +power \nDB49308A76F8A6C4\t970916023022\t+fiskars +ups +power \nDB49308A76F8A6C4\t970916023139\t+fiskars +ups +power \nDB49308A76F8A6C4\t970916023337\t+fiskars +ups +power \n70AFB9518EB9997A\t970916091739\tjscript\n70AFB9518EB9997A\t970916091752\tjscript\n70AFB9518EB9997A\t970916092120\tjscript\nC1977F1B854584B3\t970916214404\tsean mcafee shit my pants\nC1977F1B854584B3\t970916214409\tsean mcafee shit my pants\n5BE36449BA3E2501\t970916100347\txerox\n5BE36449BA3E2501\t970916101126\tduxbury\n5BE36449BA3E2501\t970916101740\tduxbury\n5BE36449BA3E2501\t970916101749\tbraille printers\n5BE36449BA3E2501\t970916101827\tbraille printers\n5BE36449BA3E2501\t970916102425\tbraille printers\n66BA2100B71AF41C\t970916193830\t\n79785E25B2F213B8\t970916145404\t\"free stamps\"\n79785E25B2F213B8\t970916145412\t\"free stamps\"\n79785E25B2F213B8\t970916145444\t\"free stamps\"\n79785E25B2F213B8\t970916145516\t\"free stamps\"\n185D6864023D5B24\t970916133811\tinternet AND certification\n185D6864023D5B24\t970916133929\tinternet AND 
certification\n185D6864023D5B24\t970916134010\t\n185D6864023D5B24\t970916134118\t\n185D6864023D5B24\t970916134250\tmicrosoft AND internet AND certification\nC5460576B58BB1CC\t970916192508\thacking telenet\nC5460576B58BB1CC\t970916193004\thacking telenet\nC5460576B58BB1CC\t970916193355\thacking telenet\nC5460576B58BB1CC\t970916193737\thacking telenet\nC5460576B58BB1CC\t970916193908\thacking telenet\nC5460576B58BB1CC\t970916194352\thacking telenet\nC5460576B58BB1CC\t970916194748\tphreaking telenet\n158EF1FF1683799A\t970916091509\tsilicon investor\n158EF1FF1683799A\t970916091542\tsilicon investor\n05FD3B6783303D98\t970916210805\ttriphop\n05FD3B6783303D98\t970916211033\ttriphop\n05FD3B6783303D98\t970916211049\ttriphop\n05FD3B6783303D98\t970916211140\ttriphop\n05FD3B6783303D98\t970916211201\ttriphop\n05FD3B6783303D98\t970916211233\ttriphop\n05FD3B6783303D98\t970916211250\ttriphop\n05FD3B6783303D98\t970916211315\ttriphop\n05FD3B6783303D98\t970916211408\tmazzy star\n05FD3B6783303D98\t970916211426\t\n05FD3B6783303D98\t970916211440\t\n05FD3B6783303D98\t970916211459\t\n05FD3B6783303D98\t970916211638\tportishead\n05FD3B6783303D98\t970916211704\t\n05FD3B6783303D98\t970916212103\t\n05FD3B6783303D98\t970916212130\t\n05FD3B6783303D98\t970916212145\ttrip-hop\n71D2D4E6C01FD7E1\t970916102326\tkitty kelly\n59994BE9F8892C94\t970916195903\tamericansingles\n366185767DC07204\t970916042638\tpc_4dgen.zip\n8EE83362186F49EF\t970916110722\tpocket doors\n8EE83362186F49EF\t970916161522\tred fern\n8EE83362186F49EF\t970916161635\ttulsequah chief\nED3EA19F0B5A556B\t970916092326\twww.thatguy.com/splash/\n1A64296CCE60F19E\t970916083359\tblack men\n1A64296CCE60F19E\t970916124107\ttoni braxton\n1A64296CCE60F19E\t970916124152\ttoni braxton\n1A64296CCE60F19E\t970916124156\ttoni braxton\n1A64296CCE60F19E\t970916131923\tblack women\n1A64296CCE60F19E\t970916134113\thalle berry\n1A64296CCE60F19E\t970916134125\thalle berry\n1A64296CCE60F19E\t970916134416\thalle 
berry\nC81329DC0EF932FB\t970916112502\t1998\nC040A1754EEF11B1\t970916161855\tprimestar\n93CE4FF9E36FA112\t970916203820\tdemi moore\n93CE4FF9E36FA112\t970916211152\tjenne mccarthy\n93CE4FF9E36FA112\t970916211227\tjenne mccarthy jenny \n93CE4FF9E36FA112\t970916211759\tjenny mccarthy\n93CE4FF9E36FA112\t970916211814\tjenny mccarthy\n59E5CD546C202A78\t970916103542\tinternet public library\n59E5CD546C202A78\t970916103609\tinternet public library\n59E5CD546C202A78\t970916103637\tinternet public library\n49D3717A8D3ED397\t970916083233\teek the cat\n77AC89619076A8E1\t970916104906\tsloth\n77AC89619076A8E1\t970916104939\tsloth toed \n4F6F6DA149C3DC4D\t970916005305\tmaria checa\n4F6F6DA149C3DC4D\t970916010738\t\n4F6F6DA149C3DC4D\t970916011154\tallysa milano\n4F6F6DA149C3DC4D\t970916011402\t\nC07D4ECD1ACE0C89\t970916193608\te\nC07D4ECD1ACE0C89\t970916193629\tentertainment\n6FB3D2D282761F25\t970916111515\tusa todays sports page\n6FB3D2D282761F25\t970916111553\tusa todays sports page\n6FB3D2D282761F25\t970916111723\tusa todays sports page\n6FB3D2D282761F25\t970916141735\tinfo on ukiah ca\n6FB3D2D282761F25\t970916142024\tinfo on ukiah ca\n6FB3D2D282761F25\t970916142113\tdavy tree triming\n6FB3D2D282761F25\t970916170122\tcartoons\n6FB3D2D282761F25\t970916170344\tcartoons\n6FB3D2D282761F25\t970916175415\tkids activdies\n6FB3D2D282761F25\t970916175513\tkids activdies\n6FB3D2D282761F25\t970916175523\tkids activdies\n6FB3D2D282761F25\t970916191440\tinvertebrates\n6FB3D2D282761F25\t970916191445\tinvertebrates\n6FB3D2D282761F25\t970916193702\tlittle tikes\n6FB3D2D282761F25\t970916193704\tlittle tikes\n6FB3D2D282761F25\t970916193748\tlittle tikes\n6FB3D2D282761F25\t970916193751\tlittle tikes\n6FB3D2D282761F25\t970916194005\tlittle tikes\n6FB3D2D282761F25\t970916194026\tlittle tikes\n6FB3D2D282761F25\t970916194028\tlittle tikes\n6FB3D2D282761F25\t970916194103\t\n6FB3D2D282761F25\t970916194139\t\n6FB3D2D282761F25\t970916194244\tlittle tykes toys\n6FB3D2D282761F25\t970916194335\ttoys r 
us\n6FB3D2D282761F25\t970916194518\ttoys r us\n6FB3D2D282761F25\t970916194520\tlittle tykes toys\n6FB3D2D282761F25\t970916194603\tmall of america in minnasota\n6FB3D2D282761F25\t970916195043\tmall of america in minnasota\n6FB3D2D282761F25\t970916230356\tinvertebrates\n6FB3D2D282761F25\t970917000335\tinvertebrates\n6FB3D2D282761F25\t970917000345\t\n6FB3D2D282761F25\t970917000348\t\n4A1C1951CEC8BA70\t970916085140\tmarine midland\n4A1C1951CEC8BA70\t970916085234\t\n778EBE06AC999541\t970916120133\tdeaf history\n778EBE06AC999541\t970916120241\tdeaf history\nB144CE6F1EDAB0DE\t970916080609\tgame\nB144CE6F1EDAB0DE\t970916080737\tcar\nB144CE6F1EDAB0DE\t970916080821\tmercedes benz\nB144CE6F1EDAB0DE\t970916090435\tmercedes benz slk \nB144CE6F1EDAB0DE\t970916090554\tmercedes benz\n259DC5DCBDBD4D4D\t970916125331\t\n7D1DD1781EDB79A0\t970916173046\tmakeup products\n7D1DD1781EDB79A0\t970916173100\tmakeup products\n7D1DD1781EDB79A0\t970916174001\t\"lancom\" cosmetic producrs\n7D1DD1781EDB79A0\t970916174500\t\"lancom\" products\n7D1DD1781EDB79A0\t970916175124\tlancome cosmetics\n7D1DD1781EDB79A0\t970916180008\tlancome beauty products\n7D1DD1781EDB79A0\t970916180439\tlancome paris\n7D1DD1781EDB79A0\t970916181253\t\n7D1DD1781EDB79A0\t970916181408\tnordstrom\nDAA8C88C7DA0F0B9\t970916181646\tcancer and prevention\nDAA8C88C7DA0F0B9\t970916182959\t\nDAA8C88C7DA0F0B9\t970916183616\tcancer and prevention\nDAA8C88C7DA0F0B9\t970916183708\t\n060FCC14E09355CF\t970916002916\tvascular diseases + sleep disorders\n060FCC14E09355CF\t970916004422\tvascular diseases + sleep 
disorders\n060FCC14E09355CF\t970916005808\tsleep+disorders+high+blood+pressure\n060FCC14E09355CF\t970916005939\t\n060FCC14E09355CF\t970916010137\tinsomnia+high+blood+pressure\n060FCC14E09355CF\t970916010636\t\n060FCC14E09355CF\t970916010800\tinsomnia+high+blood+pressure\n060FCC14E09355CF\t970916011358\tinsomnia+high+blood+pressure\n060FCC14E09355CF\t970916011744\tinsomnia+high+blood+pressure\nBF76256C3A233A8A\t970916101502\t+proposals\nBF76256C3A233A8A\t970916101525\t+proposals\n15BDF589C71C10CB\t970916133051\t\n15BDF589C71C10CB\t970916211915\tinformation drop shipeed\n15BDF589C71C10CB\t970916211916\tinformation drop shipeed\n15BDF589C71C10CB\t970916211919\tinformation drop shipeed\n15BDF589C71C10CB\t970916212003\tinformation drop shipeed\n15BDF589C71C10CB\t970916212106\twhole salers\n15BDF589C71C10CB\t970916212148\twhole salers\n15BDF589C71C10CB\t970916212255\tair filtration systems greg montoya\n15BDF589C71C10CB\t970916212402\tair filtration systems greg montoya\n15BDF589C71C10CB\t970916212437\tair filtration systems greg montoya\n15BDF589C71C10CB\t970916212646\t greg montoya\n15BDF589C71C10CB\t970916214320\t greg montoya\n15BDF589C71C10CB\t970916214400\t greg montoya\n15BDF589C71C10CB\t970916214418\t\n15BDF589C71C10CB\t970916214946\tgreg montoya\n15BDF589C71C10CB\t970916215001\t\n15BDF589C71C10CB\t970916215550\t\n15BDF589C71C10CB\t970916215640\t\n5539B128215E9A49\t970916094717\tcahuilla\n5539B128215E9A49\t970916094858\t\n5539B128215E9A49\t970916095455\tchauilla\n5539B128215E9A49\t970916095642\tchauilla\n5539B128215E9A49\t970916095711\tcandelaria\n5539B128215E9A49\t970916110728\tcandalaria\n5539B128215E9A49\t970916110936\t\n5539B128215E9A49\t970916111924\t\n5539B128215E9A49\t970916112200\t\n5539B128215E9A49\t970916112220\t\n5539B128215E9A49\t970916112442\t\n5539B128215E9A49\t970916112851\t\nB1E4391F6E6EFEF4\t970916203551\tusing multimedia\nB1E4391F6E6EFEF4\t970916205455\tmultimedia system\nB1E4391F6E6EFEF4\t970916205504\tmultimedia 
system\nB1E4391F6E6EFEF4\t970916205540\tmultimedia system\nB1E4391F6E6EFEF4\t970916205606\tmultimedia system interactive \nB1E4391F6E6EFEF4\t970916205659\tmultimedia system educational \nB1E4391F6E6EFEF4\t970916205737\tmultimedia system educational computing \nB1E4391F6E6EFEF4\t970916210015\tmultimedia system educational computing publications \nB1E4391F6E6EFEF4\t970916210046\tmultimedia system educational computing publications \nAAE7D472AA45AB96\t970916213339\tcrash photos\nAAE7D472AA45AB96\t970916213418\tdi crash photos\nAAE7D472AA45AB96\t970916213433\tdi crash photos\nAAE7D472AA45AB96\t970916213459\tdi crash photos\n790FC18760C238A6\t970916083709\tchildbirth\n790FC18760C238A6\t970916083839\tchildbirth\nC9F4F61D48892F7B\t970916201602\tcal state northridge\nC9F4F61D48892F7B\t970916202041\tcal state northridge - home page\n2F93931CCB13D662\t970916144059\tteen\n99D8C7D14A864902\t970916234643\treiten + western\n99D8C7D14A864902\t970916234803\t\n99D8C7D14A864902\t970917000740\twesternreiten + braunschweig\n99D8C7D14A864902\t970917000831\treiten + western + braunschweig\n99D8C7D14A864902\t970917000923\treiten + western + niedersachsen\n75C18D86685AAEAE\t970916162606\tshannon tweed\n75C18D86685AAEAE\t970916163303\tshannon tweed\n75C18D86685AAEAE\t970916164545\tshannon tweed\n75C18D86685AAEAE\t970916164555\tshannon tweed\n7D61F86F1732EDC6\t970916055000\t+baseball +collector +software\n86EAEA913CC8D7C4\t970916205516\tcheerleaders\n86EAEA913CC8D7C4\t970916205654\tcheerleaders\n86EAEA913CC8D7C4\t970916205722\tcheerleaders\n3DF52D5806E094F4\t970916142206\t\n19887D73626B5D55\t970916200713\tshakespeare\n19887D73626B5D55\t970916200816\tshakespeare comedies\n19887D73626B5D55\t970916201237\tthe comedy of 
errors\n19887D73626B5D55\t970916201610\t\n19887D73626B5D55\t970916202213\t\n19887D73626B5D55\t970916202446\t\n19887D73626B5D55\t970916202616\t\n19887D73626B5D55\t970916202947\t\n19887D73626B5D55\t970916203256\t\n19887D73626B5D55\t970916203348\t\n19887D73626B5D55\t970916203457\t\n19887D73626B5D55\t970916203552\t\n19887D73626B5D55\t970916203648\t\n19887D73626B5D55\t970916203754\t\n19887D73626B5D55\t970916203854\t\n19887D73626B5D55\t970916203940\t\n19887D73626B5D55\t970916204022\t\n19887D73626B5D55\t970916210207\tcomedy of errors, the\n19887D73626B5D55\t970916211335\tcomedy of errors, the\n19887D73626B5D55\t970916211545\t\n19887D73626B5D55\t970916211639\t\n19887D73626B5D55\t970916211949\tthe comedy of errors  \"i to the world am like a drop of water\"\n19887D73626B5D55\t970916212111\t \"i to the world am like a drop of water\"\n19887D73626B5D55\t970916212319\t the comedy of errors;  important passages\n19887D73626B5D55\t970916212418\t the comedy of errors;  important passages\n19887D73626B5D55\t970916212506\t\n19887D73626B5D55\t970916212705\tthe comedy of errors\n19887D73626B5D55\t970916212801\t\n19887D73626B5D55\t970916215300\tthe comedy of errors\n19887D73626B5D55\t970916215614\t\nB8E12AFC196C5FB7\t970916081637\twin32s \nB8E12AFC196C5FB7\t970916081817\tvxtreme\n070A45F23275C279\t970916195011\tfamily ancestory\n070A45F23275C279\t970916195333\tfamily ancestory immigrants\n070A45F23275C279\t970916195412\tfamily ancestory german\n070A45F23275C279\t970916195444\tfamily ancestory german\n070A45F23275C279\t970916195531\tfamily ancestory german genealogy descendants ancestor \n070A45F23275C279\t970916195554\t\n4E3114DABE39DDB6\t970916224105\tstock photoes\n752FE259E734662C\t970916090642\t\"jlamont@washblade.com\"\n752FE259E734662C\t970916090707\t\"james lamont\"\n752FE259E734662C\t970916090752\t\"washington blade\"+\"james lamont\"\n752FE259E734662C\t970916091014\tchechi\n752FE259E734662C\t970916091035\tceo of northwest airlines\n752FE259E734662C\t970916091220\t\"northwest 
airlines\"+\"chechi\"\n752FE259E734662C\t970916091317\t\"northwest airlines\"+\"cheechi\"\n752FE259E734662C\t970916091336\tnorthwest airlines\n752FE259E734662C\t970916094054\tjlamont@washblade.com\n752FE259E734662C\t970916113229\t\"caring kids\"\n752FE259E734662C\t970916113306\t\"caring kids\"\n752FE259E734662C\t970916113338\t\"caring kids\"\n752FE259E734662C\t970916113458\t\"caring kids\"+st. johns university\n752FE259E734662C\t970916113533\t\"caring kids\"+st. johns university\n752FE259E734662C\t970916113630\t\"st. johns university\" + kids\nB53A8E9C0F0A04B8\t970916134559\tstudent loans\nB53A8E9C0F0A04B8\t970916134724\tstudent loans, alberta\nF44CC3ECE5C1C448\t970916072822\t\n6D906622D87278E5\t970916094911\tyouth +cult\n6D906622D87278E5\t970916094924\tyouth +cult\n6D906622D87278E5\t970916100502\tyouth +cult\n0F6881A3768F6E29\t970916151920\tfree email\n0F6881A3768F6E29\t970916152015\tfree email\n0F6881A3768F6E29\t970916152058\tfree email\n054340E4B8F63E34\t970916162603\tmedieval pics\n054340E4B8F63E34\t970916163016\tmedieval pics mapoff \n6CB9B3573BCB5A72\t970916081616\tflorida gators football\n0338D63FFC24DC2F\t970916152530\tclothing catalogs\n0338D63FFC24DC2F\t970916154224\tmens clothing catalog\n0338D63FFC24DC2F\t970916163825\tfreebies\n810EFC647D40E4CB\t970916182847\tsee you at the pole\nCF5AFAEC0B19A940\t970916063953\tanimal bites\nCF5AFAEC0B19A940\t970916064021\tanimal bites rabies\nCF5AFAEC0B19A940\t970916064955\tanimal bites rabies\nCF5AFAEC0B19A940\t970916065205\tanimal bites rabies\nC28C7C97640037C1\t970916065817\tieee\nA1F547F916AD8A43\t970916125757\tgrammar\nA1F547F916AD8A43\t970916130038\tenglish grammar\n523E46AA9E20AB3E\t970916163926\tzork cheats\n0C6EA7BA0D77B41A\t970916060223\tsteve AND heater\n0C6EA7BA0D77B41A\t970916060414\tgenamation industries\n0C6EA7BA0D77B41A\t970916094017\tdos AND gvc AND network AND card \n0C6EA7BA0D77B41A\t970916094116\tgvc AND network AND card \n79E7FA8E26F7349E\t970916140438\tlow fat recipe 
books\n49948224B156B2DF\t970916212913\t\n49948224B156B2DF\t970916213117\t\n9541D2047C5360F9\t970916024016\tjenny\n9541D2047C5360F9\t970916063530\taaa\n9541D2047C5360F9\t970916063614\taaa travel\nB163FAFD64AFAB18\t970916134052\tfree downloadable pc games\nB163FAFD64AFAB18\t970916134142\tfree downloadable pc games\nB163FAFD64AFAB18\t970916134159\tfree downloadable pc wallpaper\nB163FAFD64AFAB18\t970916134220\tfree downloadable pc wallpaper\nB163FAFD64AFAB18\t970916134242\tfree downloadable pc wallpaper\nB163FAFD64AFAB18\t970916134304\tfree downloadable pc wallpaper\nB163FAFD64AFAB18\t970916134344\tfree downloadable pc wallpaper\nB163FAFD64AFAB18\t970916135454\tfree pc screensavers\nB163FAFD64AFAB18\t970916142957\tfree pc screensavers\nD2FFE38AFF1C358A\t970916083539\tmethylmercury\nD2FFE38AFF1C358A\t970916083634\tcantel at&t\n011ACA65C2BF70B2\t970916175937\tprime ministers of australia\n011ACA65C2BF70B2\t970916180131\t\n011ACA65C2BF70B2\t970916182815\tdeath of robert menzies\n011ACA65C2BF70B2\t970916182831\tdead robert menzies\n011ACA65C2BF70B2\t970916182917\t\n233043A33AEF6A1D\t970916204226\tbarbara ross\n233043A33AEF6A1D\t970916204330\tbarbara+ross+e\n233043A33AEF6A1D\t970916204848\tbarbara+ross+law\n233043A33AEF6A1D\t970916210506\trobot+circuit+computer+interface\n233043A33AEF6A1D\t970916210814\trobot+circuit+computer+interface\nE559AEBED8E9E078\t970916113041\twww.csaa.com\nE559AEBED8E9E078\t970916113230\twww.csaa.com home insuramce\nE559AEBED8E9E078\t970916113232\twww.csaa.com home insurance\n6E2A4B3FED94E84D\t970916173721\tvtec and honda\n6E2A4B3FED94E84D\t970916173931\t\"what is 'vtec'?\"\n6E2A4B3FED94E84D\t970916174442\tdodge \"magnum\" engines\n6E2A4B3FED94E84D\t970916174748\tdodge \n6E2A4B3FED94E84D\t970916174817\tdodge avenger \nA1CFAE0FF0E6CFDE\t970916193254\tfreeware\n62F017C7A74C51DC\t970916190511\tusenet\n62F017C7A74C51DC\t970916190616\tusenet newsgroup \n62F017C7A74C51DC\t970916190739\tusenet newsgroup and 
xenix\n150EBA3F42F75143\t970916191931\travage\n150EBA3F42F75143\t970916191935\travage\n150EBA3F42F75143\t970916191944\travage\n2B73EFE0F9FC9E0B\t970916195501\thttp://educationalproducts.com\n2B73EFE0F9FC9E0B\t970916195507\thttp://educationalproducts.com\n90B21F67EEA27FEA\t970916105359\twolfenstein cheats\n90B21F67EEA27FEA\t970916105418\t\n90B21F67EEA27FEA\t970916105451\t\n90B21F67EEA27FEA\t970916105505\t\n90B21F67EEA27FEA\t970916105532\t\n1830EAD9FEB54EB4\t970916173423\tpointilism\n109FA25A577DCE53\t970916182544\tcoffee+decor\n109FA25A577DCE53\t970916182630\tcoffee+decor\n109FA25A577DCE53\t970916182648\tcoffee+decor\n109FA25A577DCE53\t970916182744\tcoffee+decor\n109FA25A577DCE53\t970916182907\tcoffee+decor\n109FA25A577DCE53\t970916182923\tcoffee+decor\n51BA997ACC88FAE3\t970916230937\tphotodiode\n51BA997ACC88FAE3\t970916231040\tphotodiode circuit\n51BA997ACC88FAE3\t970916231143\tphotodiode photodiodes detectors photodetectors \n14101EC6E7B07817\t970916072051\tr.e.m\n14101EC6E7B07817\t970916072150\trem and music\nB038389D403E4C43\t970916200420\tscarborough,ontario,canada\nB038389D403E4C43\t970916203329\t\nC89F34E15252E94A\t970916201058\tcustom computer configuration\nC89F34E15252E94A\t970916201238\tcustom computer configuration\nC89F34E15252E94A\t970916201330\tcustom computer configuration\nC89F34E15252E94A\t970916201352\tcustom computer configuration\nC89F34E15252E94A\t970916201404\tcustom computer configuration\nEEF64006C7D47AC1\t970916181047\tarmy aviation center organizations \nEEF64006C7D47AC1\t970916181052\tarmy aviation center organizations \nEEF64006C7D47AC1\t970916181208\t'us army'\nEEF64006C7D47AC1\t970916181222\tarmy aviation center organizations \n8F0ECFEDAB4A03DB\t970916025956\tfax\n8F0ECFEDAB4A03DB\t970916035934\tfree fax service\n8F0ECFEDAB4A03DB\t970916035936\tfree fax 
service\n3887FA7C17106AF1\t970916102956\tcourses+online+engin\n3887FA7C17106AF1\t970916103104\tcourses+online+engin\n3887FA7C17106AF1\t970916103151\tcourses+online+engin\n3887FA7C17106AF1\t970916103315\t\n5210315A34C4E7A4\t970916201905\twww.pol.net\n09EEE6585FB974ED\t970916001014\turlaub ferienwohnungen appartements \n09EEE6585FB974ED\t970916001038\turlaub ferienwohnungen appartements \nA2800E21FDCEE2BF\t970916102057\teclectus roratus\nA2800E21FDCEE2BF\t970916102202\teclectus roratus\nA2800E21FDCEE2BF\t970916102213\teclectus roratus\nA2800E21FDCEE2BF\t970916102342\teclectus roratus aviary \nA2800E21FDCEE2BF\t970916102415\t\nA2800E21FDCEE2BF\t970916102658\teclectus roratus aviary \nA2800E21FDCEE2BF\t970916124816\tparrots\nA2800E21FDCEE2BF\t970916124834\tparrots\nA2800E21FDCEE2BF\t970916124903\teclectus roratus\nA2800E21FDCEE2BF\t970916125218\trakets\n31A203282F24E07C\t970916111228\ttablature\n31A203282F24E07C\t970916111420\ttablature guitar \nBA449E5E59C384BB\t970916135030\tsamuel de champlain\nBA449E5E59C384BB\t970916135059\tsamuel de champlain\nBA449E5E59C384BB\t970916135132\tsamuel de champlain\nBA449E5E59C384BB\t970916135212\tsamuel de champlain\nBA449E5E59C384BB\t970916135301\tsamuel de champlain\nBA449E5E59C384BB\t970916140308\tsamuel de champlain\nBA449E5E59C384BB\t970916140420\tsamuel de champlain\nBA449E5E59C384BB\t970916140626\tsamuel de champlain\nBA449E5E59C384BB\t970916140959\tsamuel de chnplain\nBA449E5E59C384BB\t970916141012\tsamuel de champlain\nBA449E5E59C384BB\t970916141048\tsamuel de champlain\nBA449E5E59C384BB\t970916141714\tetienne brule\nBA449E5E59C384BB\t970916141756\tetienne brule\nBA449E5E59C384BB\t970916141917\thelene boulle\nBA449E5E59C384BB\t970916142225\thelene boule\nBA449E5E59C384BB\t970916142750\tchamplain, samuel  de\n5BB5018A0A78B70D\t970916061121\troyal mutual fund\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/files/2008.log",
    "content": "192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" 
\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 
-0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows 
NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" 
\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 
-0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows 
NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; 
rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" 
\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 
-0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows 
NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; 
rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; 
Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 
-0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - 
- [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST 
/ HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 
- - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; 
Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 
-0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - 
- [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST 
/ HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 
- - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; 
Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 
-0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - 
- [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST 
/ HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 
- - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2008:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/files/2009.log",
    "content": "192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" 
\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 
-0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows 
NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" 
\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 
-0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows 
NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; 
rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" 
\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 
-0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows 
NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; 
rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; 
Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 
-0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - 
- [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST 
/ HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 
- - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; 
Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 
-0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - 
- [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST 
/ HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 
- - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; 
Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 
-0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - 
- [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST 
/ HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 
- - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2009:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/files/2010.log",
    "content": "192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" 
\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 
-0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows 
NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" 
\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 
-0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows 
NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; 
rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jan/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Mar/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Feb/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" 
\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 
-0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows 
NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - 
[01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; 
rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Apr/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/May/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; 
Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 
-0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - 
- [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST 
/ HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 
- - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; 
Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 
-0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - 
- [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST 
/ HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 
- - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jul/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Sep/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Aug/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; 
Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 
\"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 
-0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / 
HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - 
- [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 
(Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST 
/ HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 
- - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n192.168.1.211 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) 
Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"GET / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Oct/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Jun/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Nov/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 
Firefox/2.0\"\n127.0.0.1 - - [01/Dec/2010:00:00:00 -0800] \"POST / HTTP/1.1\" 200 50 \"-\" \"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\"\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/files/readme.txt",
    "content": "Files in this folder can be copied to your HDFS \nto be used with the provided samples."
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/pentaho-mapreduce-sample-src/README.TXT",
    "content": "The pentaho-mapreduce-sample-src contains code that can be built and jar'd for\nuse by the Hadoop Job Executor. The sample is a WordCount example where 4 \ncommand line arguments can be passed in to override the defaults.\n\n--input=DIR                   The directory containing the input files for the\n                              WordCount Hadoop job\n--output=DIR                  The directory where the results of the WordCount\n                              Hadoop job will be stored\n--hdfsHost=HOST               The host<:port> of the HDFS service\n                              e.g.- localhost:9000\n--jobTrackerHost=HOST         The host<:port> of the job tracker service\n                              e.g.- localhost:9001"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/pentaho-mapreduce-sample-src/src/org/pentaho/hadoop/sample/wordcount/WordCount.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.hadoop.sample.wordcount;\n\nimport org.apache.hadoop.fs.Path;\nimport org.apache.hadoop.io.IntWritable;\nimport org.apache.hadoop.io.Text;\nimport org.apache.hadoop.mapred.FileInputFormat;\nimport org.apache.hadoop.mapred.FileOutputFormat;\nimport org.apache.hadoop.mapred.JobClient;\nimport org.apache.hadoop.mapred.JobConf;\n\npublic class WordCount {\n  public static void main(String[] args) throws Exception {\n    String hdfsHost = \"localhost:9000\";\n    String jobTrackerHost = \"localhost:9001\";\n    String fsPrefix = \"hdfs\";\n    \n    String dirInput = \"/wordcount/input\";\n    String dirOutput = \"/wordcount/output\";\n    \n    if (args.length == 1 && (\n          args[0].equals(\"--help\") ||\n          args[0].equals(\"-h\") ||\n          args[0].equals(\"/?\")\n        )) {\n      System.out.println(\"Usage: WordCount <options>\");\n      System.out.println();\n      System.out.println(\"Options:\");\n      System.out.println();\n      System.out.println(\"--input=DIR                   The directory containing the input files for the\");\n      System.out.println(\"                              WordCount Hadoop job\");\n      System.out.println(\"--output=DIR                  The directory where the results of the WordCount\");\n      System.out.println(\"                              Hadoop job will be stored\");\n      System.out.println(\"--hdfsHost=HOST               The host<:port> of the HDFS service\");\n      System.out.println(\"                              e.g.- localhost:9000\");\n      
System.out.println(\"--jobTrackerHost=HOST         The host<:port> of the job tracker service\");\n      System.out.println(\"                              e.g.- localhost:9001\");\n      System.out.println(\"--fsPrefix=PREFIX             The prefix to use for for the filesystem\");\n      System.out.println(\"                              e.g.- hdfs\");\n      System.out.println();\n      System.out.println();\n      System.out.println(\"If an option is not provided through the command prompt the following defaults\");\n      System.out.println(\"will be used:\");\n      System.out.println(\"--input='/wordcount/input'\");\n      System.out.println(\"--output='/wordcount/output'\");\n      System.out.println(\"--hdfsHost=localhost:9000\");\n      System.out.println(\"--jobTrackerHost=localhost:9001\");\n      System.out.println(\"--fsPrefix=hdfs\");\n          \n    } else {\n      if(args.length > 0){\n        for(String arg : args) {\n          if(arg.startsWith(\"--input=\")) {\n            dirInput = WordCount.getArgValue(arg);\n          } else if(arg.startsWith(\"--output=\")) {\n            dirOutput = WordCount.getArgValue(arg);\n          } else if(arg.startsWith(\"--hdfsHost=\")) {\n            hdfsHost = WordCount.getArgValue(arg);\n          } else if(arg.startsWith(\"--jobTrackerHost=\")) {\n            jobTrackerHost = WordCount.getArgValue(arg);\n          } else if(arg.startsWith(\"--fsPrefix=\")) {\n            fsPrefix = WordCount.getArgValue(arg);\n          }\n        }\n      }\n      \n      JobConf conf = new JobConf(WordCount.class);\n      conf.setJobName(\"WordCount\");\n\n      String hdfsBaseUrl = fsPrefix + \"://\" + hdfsHost;\n      conf.set(\"fs.default.name\", hdfsBaseUrl + \"/\");\n      if (jobTrackerHost != null && jobTrackerHost.length() > 0) {\n        conf.set(\"mapred.job.tracker\", jobTrackerHost);\n      }\n      \n      FileInputFormat.setInputPaths(conf, new Path[] { new Path(hdfsBaseUrl + dirInput) });\n      
FileOutputFormat.setOutputPath(conf, new Path(hdfsBaseUrl + dirOutput));\n\n      conf.setMapperClass(WordCountMapper.class);\n      conf.setReducerClass(WordCountReducer.class);\n\n      conf.setMapOutputKeyClass(Text.class);\n      conf.setMapOutputValueClass(IntWritable.class);\n\n      conf.setOutputKeyClass(Text.class);\n      conf.setOutputValueClass(IntWritable.class);\n\n      JobClient.runJob(conf);\n    }\n  }\n  \n  private static String getArgValue(String arg) {\n    String result = null; \n    \n    String[] tokens = arg.split(\"=\"); \n    if(tokens.length > 1) {\n      result = tokens[1].replace(\"'\", \"\").replace(\"\\\"\", \"\");\n    }\n    System.out.println(arg + \" parses to \" + result);    \n    return result;\n  }\n}\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/pentaho-mapreduce-sample-src/src/org/pentaho/hadoop/sample/wordcount/WordCountMapper.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.hadoop.sample.wordcount;\n\nimport java.io.IOException;\nimport java.util.StringTokenizer;\n\nimport org.apache.hadoop.io.IntWritable;\nimport org.apache.hadoop.io.Text;\nimport org.apache.hadoop.mapred.MapReduceBase;\nimport org.apache.hadoop.mapred.Mapper;\nimport org.apache.hadoop.mapred.OutputCollector;\nimport org.apache.hadoop.mapred.Reporter;\n\npublic class WordCountMapper extends MapReduceBase implements Mapper<Object, Text, Text, IntWritable> {\n  private Text word = new Text();\n\n  private static final IntWritable ONE = new IntWritable(1);\n\n  public void map(Object key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter)\n      throws IOException {\n    StringTokenizer wordList = new StringTokenizer(value.toString());\n\n    while (wordList.hasMoreTokens()) {\n      this.word.set(wordList.nextToken());\n      output.collect(this.word, ONE);\n    }\n  }\n}\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/pentaho-mapreduce-sample-src/src/org/pentaho/hadoop/sample/wordcount/WordCountReducer.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.hadoop.sample.wordcount;\n\nimport java.io.IOException;\nimport java.util.Iterator;\n\nimport org.apache.hadoop.io.IntWritable;\nimport org.apache.hadoop.io.Text;\nimport org.apache.hadoop.mapred.MapReduceBase;\nimport org.apache.hadoop.mapred.OutputCollector;\nimport org.apache.hadoop.mapred.Reducer;\nimport org.apache.hadoop.mapred.Reporter;\n\npublic class WordCountReducer extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {\n  private IntWritable totalWordCount = new IntWritable();\n\n  public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output,\n      Reporter reporter) throws IOException {\n    int wordCount = 0;\n    while (values.hasNext()) {\n      wordCount += ((IntWritable) values.next()).get();\n    }\n\n    this.totalWordCount.set(wordCount);\n    output.collect(key, this.totalWordCount);\n  }\n}\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/pentaho-mapreduce2-sample-src/README.TXT",
    "content": "The pentaho-mapreduce-sample2-src contains code that can be built and jar'd for\nuse by the Hadoop Job Executor. The sample is a WordCount2 example where 2 \ncommand line arguments should be passed in.\n\ninput                   The directory containing the input files for the\n                        WordCount Hadoop job\noutput                  The directory where the results of the WordCount2\n                        Hadoop job will be stored\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/pentaho-mapreduce2-sample-src/src/org/pentaho/hadoop/sample/wordcount/WordCount2.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\n\npackage org.pentaho.hadoop.sample.wordcount;\n\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.hadoop.conf.Configured;\nimport org.apache.hadoop.fs.Path;\nimport org.apache.hadoop.io.IntWritable;\nimport org.apache.hadoop.io.LongWritable;\nimport org.apache.hadoop.io.Text;\nimport org.apache.hadoop.mapreduce.Job;\nimport org.apache.hadoop.mapreduce.Mapper;\nimport org.apache.hadoop.mapreduce.Reducer;\nimport org.apache.hadoop.mapreduce.lib.input.FileInputFormat;\nimport org.apache.hadoop.mapreduce.lib.input.TextInputFormat;\nimport org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;\nimport org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;\nimport org.apache.hadoop.util.Tool;\nimport org.apache.hadoop.util.ToolRunner;\n\nimport java.io.IOException;\nimport java.util.StringTokenizer;\n\npublic class WordCount2 extends Configured implements Tool {\n  public int run( String[] strings ) throws Exception {\n    Configuration conf = getConf();\n\n    Job job = Job.getInstance( conf, \"wordcount2\" );\n    job.setJarByClass( WordCount2.class );\n\n    job.setOutputKeyClass( Text.class );\n    job.setOutputValueClass( IntWritable.class );\n\n    job.setMapperClass( Map.class );\n    job.setReducerClass( Reduce.class );\n\n    job.setInputFormatClass( TextInputFormat.class );\n    job.setOutputFormatClass( TextOutputFormat.class );\n\n    FileInputFormat.addInputPath( job, new Path( strings[ 0 ] ) );\n    FileOutputFormat.setOutputPath( job, new Path( strings[ 1 ] ) );\n\n    return job.waitForCompletion( true ) ? 
0 : 1;\n  }\n\n  public static class Map\n    extends Mapper<LongWritable, Text, Text, IntWritable> {\n    private static final IntWritable one = new IntWritable( 1 );\n    private Text word = new Text();\n\n    public void map( LongWritable key, Text value, Context context )\n      throws IOException, InterruptedException {\n      String line = value.toString();\n      StringTokenizer tokenizer = new StringTokenizer( line );\n      while ( tokenizer.hasMoreTokens() ) {\n        this.word.set( tokenizer.nextToken() );\n        context.write( this.word, one );\n      }\n    }\n  }\n\n  public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {\n    public void reduce( Text key, Iterable<IntWritable> values, Context context )\n      throws IOException, InterruptedException {\n      int sum = 0;\n      for ( IntWritable val : values ) {\n        sum += val.get();\n      }\n      context.write( key, new IntWritable( sum ) );\n    }\n  }\n\n  public static void main( String[] args ) throws Exception {\n    int exitCode = ToolRunner.run( new Configuration(), new WordCount2(), args );\n    System.exit( exitCode );\n  }\n}\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/script1-hadoop-mod.pig",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n-- Query Phrase Popularity (Hadoop cluster)\n\n-- This script processes a search query log file from the Excite search engine and finds search phrases that occur with particular high frequency during certain times of the day. \n\n\n-- Register the tutorial JAR file so that the included UDFs can be called in the script.\nREGISTER $udf_jar;\n\n-- Use the  PigStorage function to load the excite log file into the \"raw\" bag as an array of records.\n-- Input: (user,time,query) \nraw = LOAD '/excite.log.bz2' USING PigStorage('\\t') AS (user, time, query);\n\n\n-- Call the NonURLDetector UDF to remove records if the query field is empty or a URL. \nclean1 = FILTER raw BY org.apache.pig.tutorial.NonURLDetector(query);\n\n-- Call the ToLower UDF to change the query field to lowercase. 
\nclean2 = FOREACH clean1 GENERATE user, time, org.apache.pig.tutorial.ToLower(query) as query;\n\n-- Because the log file only contains queries for a single day, we are only interested in the hour.\n-- The excite query log timestamp format is YYMMDDHHMMSS.\n-- Call the ExtractHour UDF to extract the hour (HH) from the time field.\nhoured = FOREACH clean2 GENERATE user, org.apache.pig.tutorial.ExtractHour(time) as hour, query;\n\n-- Call the NGramGenerator UDF to compose the n-grams of the query.\nngramed1 = FOREACH houred GENERATE user, hour, flatten(org.apache.pig.tutorial.NGramGenerator(query)) as ngram;\n\n-- Use the  DISTINCT command to get the unique n-grams for all records.\nngramed2 = DISTINCT ngramed1;\n\n-- Use the  GROUP command to group records by n-gram and hour. \nhour_frequency1 = GROUP ngramed2 BY (ngram, hour);\n\n-- Use the  COUNT function to get the count (occurrences) of each n-gram. \nhour_frequency2 = FOREACH hour_frequency1 GENERATE flatten($0), COUNT($1) as count;\n\n-- Use the  GROUP command to group records by n-gram only. \n-- Each group now corresponds to a distinct n-gram and has the count for each hour.\nuniq_frequency1 = GROUP hour_frequency2 BY group::ngram;\n\n-- For each group, identify the hour in which this n-gram is used with a particularly high frequency.\n-- Call the ScoreGenerator UDF to calculate a \"popularity\" score for the n-gram.\nuniq_frequency2 = FOREACH uniq_frequency1 GENERATE flatten($0), flatten(org.apache.pig.tutorial.ScoreGenerator($1));\n\n-- Use the  FOREACH-GENERATE command to assign names to the fields. \nuniq_frequency3 = FOREACH uniq_frequency2 GENERATE $1 as hour, $0 as ngram, $2 as score, $3 as count, $4 as mean;\n\n-- Use the  FILTER command to move all records with a score less than or equal to 2.0.\nfiltered_uniq_frequency = FILTER uniq_frequency3 BY score > 2.0;\n\n-- Use the  ORDER command to sort the remaining records by hour and score. 
\nordered_uniq_frequency = ORDER filtered_uniq_frequency BY hour, score;\n\n-- Use the  PigStorage function to store the results. \n-- Output: (hour, n-gram, score, count, average_counts_among_all_hours)\nSTORE ordered_uniq_frequency INTO '/script1-hadoop-results' USING PigStorage();\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/script1-local-mod.pig",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n-- Query Phrase Popularity (local mode)\n\n-- This script processes a search query log file from the Excite search engine and finds search phrases that occur with particular high frequency during certain times of the day.\n\n-- Register the tutorial JAR file so that the included UDFs can be called in the script.\nREGISTER $udf_jar;\n\n-- Use the PigStorage function to load the excite log file into the raw bag as an array of records.\n-- Input: (user,time,query) \nraw = LOAD '$excite_small' USING PigStorage('\\t') AS (user, time, query);\n\n-- Call the NonURLDetector UDF to remove records if the query field is empty or a URL. \nclean1 = FILTER raw BY org.apache.pig.tutorial.NonURLDetector(query);\n\n-- Call the ToLower UDF to change the query field to lowercase. 
\nclean2 = FOREACH clean1 GENERATE user, time, org.apache.pig.tutorial.ToLower(query) as query;\n\n-- Because the log file only contains queries for a single day, we are only interested in the hour.\n-- The excite query log timestamp format is YYMMDDHHMMSS.\n-- Call the ExtractHour UDF to extract the hour (HH) from the time field.\nhoured = FOREACH clean2 GENERATE user, org.apache.pig.tutorial.ExtractHour(time) as hour, query;\n\n-- Call the NGramGenerator UDF to compose the n-grams of the query.\nngramed1 = FOREACH houred GENERATE user, hour, flatten(org.apache.pig.tutorial.NGramGenerator(query)) as ngram;\n\n-- Use the DISTINCT command to get the unique n-grams for all records.\nngramed2 = DISTINCT ngramed1;\n\n-- Use the GROUP command to group records by n-gram and hour. \nhour_frequency1 = GROUP ngramed2 BY (ngram, hour);\n\n-- Use the COUNT function to get the count (occurrences) of each n-gram. \nhour_frequency2 = FOREACH hour_frequency1 GENERATE flatten($0), COUNT($1) as count;\n\n-- Use the GROUP command to group records by n-gram only. \n-- Each group now corresponds to a distinct n-gram and has the count for each hour.\nuniq_frequency1 = GROUP hour_frequency2 BY group::ngram;\n\n-- For each group, identify the hour in which this n-gram is used with a particularly high frequency.\n-- Call the ScoreGenerator UDF to calculate a \"popularity\" score for the n-gram.\nuniq_frequency2 = FOREACH uniq_frequency1 GENERATE flatten($0), flatten(org.apache.pig.tutorial.ScoreGenerator($1));\n\n-- Use the FOREACH-GENERATE command to assign names to the fields. \nuniq_frequency3 = FOREACH uniq_frequency2 GENERATE $1 as hour, $0 as ngram, $2 as score, $3 as count, $4 as mean;\n\n-- Use the FILTER command to move all records with a score less than or equal to 2.0.\nfiltered_uniq_frequency = FILTER uniq_frequency3 BY score > 2.0;\n\n-- Use the ORDER command to sort the remaining records by hour and score. 
\nordered_uniq_frequency = ORDER filtered_uniq_frequency BY hour, score;\n\n-- Use the PigStorage function to store the results. \n-- Output: (hour, n-gram, score, count, average_counts_among_all_hours)\nSTORE ordered_uniq_frequency INTO 'script1-local-results.txt' USING PigStorage();\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/weblogs-mapper.ktr",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<transformation>\n  <info>\n    <name>weblogs-mapper</name>\n    <description/>\n    <extended_description/>\n    <trans_version/>\n    <trans_type>Normal</trans_type>\n    <trans_status>0</trans_status>\n    <directory>&#x2f;</directory>\n    <parameters>\n    </parameters>\n    <log>\n      <trans-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <size_limit_lines/>\n        <interval/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>TRANSNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          <id>STATUS</id>\n          <enabled>Y</enabled>\n          <name>STATUS</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n          <subject/>\n        </field>\n        <field>\n 
         <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>STARTDATE</id>\n          <enabled>Y</enabled>\n          <name>STARTDATE</name>\n        </field>\n        <field>\n          <id>ENDDATE</id>\n          <enabled>Y</enabled>\n          <name>ENDDATE</name>\n        </field>\n        <field>\n          <id>LOGDATE</id>\n          <enabled>Y</enabled>\n          <name>LOGDATE</name>\n        </field>\n        <field>\n          <id>DEPDATE</id>\n          <enabled>Y</enabled>\n          <name>DEPDATE</name>\n        </field>\n        <field>\n          <id>REPLAYDATE</id>\n          <enabled>Y</enabled>\n          <name>REPLAYDATE</name>\n        </field>\n        <field>\n          <id>LOG_FIELD</id>\n          <enabled>Y</enabled>\n          <name>LOG_FIELD</name>\n        </field>\n        <field>\n          <id>EXECUTING_SERVER</id>\n          <enabled>N</enabled>\n          <name>EXECUTING_SERVER</name>\n        </field>\n        <field>\n          <id>EXECUTING_USER</id>\n          <enabled>N</enabled>\n          <name>EXECUTING_USER</name>\n        </field>\n        <field>\n          <id>CLIENT</id>\n          <enabled>N</enabled>\n          <name>CLIENT</name>\n        </field>\n      </trans-log-table>\n      <perf-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <interval/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>SEQ_NR</id>\n          <enabled>Y</enabled>\n          <name>SEQ_NR</name>\n        </field>\n        <field>\n          <id>LOGDATE</id>\n          <enabled>Y</enabled>\n          <name>LOGDATE</name>\n        </field>\n        <field>\n          <id>TRANSNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          
<id>STEPNAME</id>\n          <enabled>Y</enabled>\n          <name>STEPNAME</name>\n        </field>\n        <field>\n          <id>STEP_COPY</id>\n          <enabled>Y</enabled>\n          <name>STEP_COPY</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n        </field>\n        <field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>INPUT_BUFFER_ROWS</id>\n          <enabled>Y</enabled>\n          <name>INPUT_BUFFER_ROWS</name>\n        </field>\n        <field>\n          <id>OUTPUT_BUFFER_ROWS</id>\n          <enabled>Y</enabled>\n          <name>OUTPUT_BUFFER_ROWS</name>\n        </field>\n      </perf-log-table>\n      <channel-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        
</field>\n        <field>\n          <id>LOGGING_OBJECT_TYPE</id>\n          <enabled>Y</enabled>\n          <name>LOGGING_OBJECT_TYPE</name>\n        </field>\n        <field>\n          <id>OBJECT_NAME</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_NAME</name>\n        </field>\n        <field>\n          <id>OBJECT_COPY</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_COPY</name>\n        </field>\n        <field>\n          <id>REPOSITORY_DIRECTORY</id>\n          <enabled>Y</enabled>\n          <name>REPOSITORY_DIRECTORY</name>\n        </field>\n        <field>\n          <id>FILENAME</id>\n          <enabled>Y</enabled>\n          <name>FILENAME</name>\n        </field>\n        <field>\n          <id>OBJECT_ID</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_ID</name>\n        </field>\n        <field>\n          <id>OBJECT_REVISION</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_REVISION</name>\n        </field>\n        <field>\n          <id>PARENT_CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>PARENT_CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>ROOT_CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>ROOT_CHANNEL_ID</name>\n        </field>\n      </channel-log-table>\n      <step-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        </field>\n        <field>\n          <id>TRANSNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          <id>STEPNAME</id>\n          
<enabled>Y</enabled>\n          <name>STEPNAME</name>\n        </field>\n        <field>\n          <id>STEP_COPY</id>\n          <enabled>Y</enabled>\n          <name>STEP_COPY</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n        </field>\n        <field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>LOG_FIELD</id>\n          <enabled>N</enabled>\n          <name>LOG_FIELD</name>\n        </field>\n      </step-log-table>\n      <metrics-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        </field>\n        <field>\n          <id>METRICS_DATE</id>\n          <enabled>Y</enabled>\n          <name>METRICS_DATE</name>\n        </field>\n        <field>\n          <id>METRICS_CODE</id>\n 
         <enabled>Y</enabled>\n          <name>METRICS_CODE</name>\n        </field>\n        <field>\n          <id>METRICS_DESCRIPTION</id>\n          <enabled>Y</enabled>\n          <name>METRICS_DESCRIPTION</name>\n        </field>\n        <field>\n          <id>METRICS_SUBJECT</id>\n          <enabled>Y</enabled>\n          <name>METRICS_SUBJECT</name>\n        </field>\n        <field>\n          <id>METRICS_TYPE</id>\n          <enabled>Y</enabled>\n          <name>METRICS_TYPE</name>\n        </field>\n        <field>\n          <id>METRICS_VALUE</id>\n          <enabled>Y</enabled>\n          <name>METRICS_VALUE</name>\n        </field>\n      </metrics-log-table>\n    </log>\n    <maxdate>\n      <connection/>\n      <table/>\n      <field/>\n      <offset>0.0</offset>\n      <maxdiff>0.0</maxdiff>\n    </maxdate>\n    <size_rowset>10000</size_rowset>\n    <sleep_time_empty>50</sleep_time_empty>\n    <sleep_time_full>50</sleep_time_full>\n    <unique_connections>N</unique_connections>\n    <feedback_shown>Y</feedback_shown>\n    <feedback_size>50000</feedback_size>\n    <using_thread_priorities>Y</using_thread_priorities>\n    <shared_objects_file/>\n    <capture_step_performance>N</capture_step_performance>\n    <step_performance_capturing_delay>1000</step_performance_capturing_delay>\n    <step_performance_capturing_size_limit>100</step_performance_capturing_size_limit>\n    <dependencies>\n    </dependencies>\n    <partitionschemas>\n    </partitionschemas>\n    <slaveservers>\n    </slaveservers>\n    <clusterschemas>\n      <clusterschema>\n        <name>clusterTest</name>\n        <base_port>40000</base_port>\n        <sockets_buffer_size>2000</sockets_buffer_size>\n        <sockets_flush_interval>5000</sockets_flush_interval>\n        <sockets_compressed>Y</sockets_compressed>\n        <dynamic>Y</dynamic>\n        <slaveservers>\n        </slaveservers>\n      </clusterschema>\n    </clusterschemas>\n    <created_user/>\n    
<created_date>2010&#x2f;10&#x2f;26 15&#x3a;38&#x3a;33.201</created_date>\n    <modified_user>-</modified_user>\n    <modified_date>2010&#x2f;07&#x2f;15 10&#x3a;12&#x3a;26.133</modified_date>\n    <key_for_session_key>H4sIAAAAAAAAAAMAAAAAAAAAAAA&#x3d;</key_for_session_key>\n    <is_key_private>N</is_key_private>\n  </info>\n  <notepads>\n    <notepad>\n      <note>Hadoop will pass data into this step based on the &#xa;format defined in the Pentaho MapReduce entry&#xa;used to run this mapper.  For this example we&#xa;assume the input will be&#x3a;&#xa;&#xa;&#x28;hadoop-generated key, line from log file&#x29;</note>\n      <xloc>11</xloc>\n      <yloc>36</yloc>\n      <width>302</width>\n      <heigth>98</heigth>\n      <fontname>Ariel</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>Parse the log line into identifiable fields&#xa;&#x28;client ip, http request, day, month, etc&#x29;.</note>\n      <xloc>196</xloc>\n      <yloc>378</yloc>\n      <width>242</width>\n      <heigth>39</heigth>\n      <fontname>Ariel</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      
<bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>Combine the month and year of the &#xa;log line to create the output key&#xa;&#x28;how the output will be ultimately grouped&#x29;.</note>\n      <xloc>540</xloc>\n      <yloc>376</yloc>\n      <width>257</width>\n      <heigth>54</heigth>\n      <fontname>Ariel</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>Define the output of this Transformation as the output key&#xa;previously generated and the client_ip that made the request.&#xa;These key-value pairs are passed to the reducer where they will be&#xa;grouped by the output key and tallied up.</note>\n      <xloc>658</xloc>\n      <yloc>56</yloc>\n      <width>401</width>\n      <heigth>69</heigth>\n      <fontname>Ariel</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      
<drawshadow>Y</drawshadow>\n    </notepad>\n  </notepads>\n  <order>\n    <hop>\n      <from>Parse Log</from>\n      <to>Combine Year and Month into output key</to>\n      <enabled>Y</enabled>\n    </hop>\n    <hop>\n      <from>Hadoop Input</from>\n      <to>Parse Log</to>\n      <enabled>Y</enabled>\n    </hop>\n    <hop>\n      <from>Combine Year and Month into output key</from>\n      <to>Hadoop Output</to>\n      <enabled>Y</enabled>\n    </hop>\n  </order>\n  <step>\n    <name>Combine Year and Month into output key</name>\n    <type>Calculator</type>\n    <description/>\n    <distribute>Y</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n      <method>none</method>\n      <schema_name/>\n    </partitioning>\n    <calculation>\n      <field_name>outKey</field_name>\n      <calc_type>ADD</calc_type>\n      <field_a>year</field_a>\n      <field_b>month</field_b>\n      <field_c/>\n      <value_type>String</value_type>\n      <value_length>-1</value_length>\n      <value_precision>-1</value_precision>\n      <remove>N</remove>\n      <conversion_mask/>\n      <decimal_symbol/>\n      <grouping_symbol/>\n      <currency_symbol/>\n    </calculation>\n    <cluster_schema/>\n    <remotesteps>\n      <input>\n      </input>\n      <output>\n      </output>\n    </remotesteps>\n    <GUI>\n      <xloc>645</xloc>\n      <yloc>297</yloc>\n      <draw>Y</draw>\n    </GUI>\n    </step>\n\n  <step>\n    <name>Hadoop Input</name>\n    <type>HadoopEnterPlugin</type>\n    <description/>\n    <distribute>Y</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n      <method>none</method>\n      <schema_name/>\n    </partitioning>\n    <fields>      <field>        <name>key</name>\n        <type>String</type>\n        <length>0</length>\n        <precision>0</precision>\n      </field>      <field>        <name>value</name>\n        <type>String</type>\n        <length>0</length>\n        
<precision>0</precision>\n      </field>    </fields>    <cluster_schema/>\n    <remotesteps>\n      <input>\n      </input>\n      <output>\n      </output>\n    </remotesteps>\n    <GUI>\n      <xloc>120</xloc>\n      <yloc>155</yloc>\n      <draw>Y</draw>\n    </GUI>\n    </step>\n\n  <step>\n    <name>Hadoop Output</name>\n    <type>HadoopExitPlugin</type>\n    <description/>\n    <distribute>Y</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n      <method>none</method>\n      <schema_name/>\n    </partitioning>\n    <outkeyfieldname>outKey</outkeyfieldname>\n    <outvaluefieldname>client_ip</outvaluefieldname>\n    <cluster_schema/>\n    <remotesteps>\n      <input>\n      </input>\n      <output>\n      </output>\n    </remotesteps>\n    <GUI>\n      <xloc>824</xloc>\n      <yloc>155</yloc>\n      <draw>Y</draw>\n    </GUI>\n    </step>\n\n  <step>\n    <name>Parse Log</name>\n    <type>RegexEval</type>\n    <description/>\n    <distribute>Y</distribute>\n    <custom_distribution/>\n    <copies>2</copies>\n    <partitioning>\n      <method>none</method>\n      <schema_name/>\n    </partitioning>\n    <script><![CDATA[^([^\\s]{7,15})\\s            # client_ip\n-\\s                         # unused IDENT field\n-\\s                         # unused USER field\n\\[((\\d{2})/(\\w{3})/(\\d{4})  # request date dd/MMM/yyyy\n:(\\d{2}):(\\d{2}):(\\d{2})\\s([-+ ]\\d{4}))\\]\n                            # request time :HH:mm:ss -0800\n\\s\"(GET|POST)\\s             # HTTP verb\n([^\\s]*)                     # HTTP URI\n\\sHTTP/1\\.[01]\"\\s           # HTTP version\n\n(\\d{3})\\s                   # HTTP status code\n(\\d+)\\s                     # bytes returned\n\"([^\"]+)\"\\s                 # referrer field\n\n\"                           # User agent parsing, always quoted.\n\"?                          
# Sometimes if the user spoofs the user_agent, they incorrectly quote it.\n(                           # The UA string\n  [^\"]*?                    # Uninteresting bits\n  (?:\n    (?:\n     rv:                    # Beginning of the gecko engine version token\n     (?=[^;)]{3,15}[;)])    # ensure version string size\n     (                      # Whole gecko version\n       (\\d{1,2})                   # version_component_major\n       \\.(\\d{1,2}[^.;)]{0,8})      # version_component_minor\n       (?:\\.(\\d{1,2}[^.;)]{0,8}))? # version_component_a\n       (?:\\.(\\d{1,2}[^.;)]{0,8}))? # version_component_b\n     )\n     [^\"]*                  # More uninteresting bits\n    )\n   |\n    [^\"]*                   # More uninteresting bits\n  )\n)                           # End of UA string\n\"?\n\"]]></script>    <matcher>value</matcher>\n    <resultfieldname>is_match</resultfieldname>\n    <usevar>Y</usevar>\n    <allowcapturegroups>Y</allowcapturegroups>\n    <replacefields>N</replacefields>\n    <canoneq>N</canoneq>\n    <caseinsensitive>N</caseinsensitive>\n    <comment>Y</comment>\n    <dotall>N</dotall>\n    <multiline>N</multiline>\n    <unicode>N</unicode>\n    <unix>N</unix>\n    <fields>\n      <field>\n        <name>client_ip</name>\n        <type>String</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>full_request_date</name>\n        <type>Date</type>\n        <format>dd&#x2f;MMM&#x2f;yyyy&#x3a;HH&#x3a;mm&#x3a;ss Z</format>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>day</name>\n        <type>Integer</type>\n        <format/>\n        
<group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>month</name>\n        <type>String</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>year</name>\n        <type>String</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>hour</name>\n        <type>Integer</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>minute</name>\n        <type>Integer</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>second</name>\n        <type>Integer</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>tz</name>\n        <type>String</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        
<currency/>\n      </field>\n      <field>\n        <name>http_verb</name>\n        <type>String</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>URI</name>\n        <type>String</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>http_status_code</name>\n        <type>Integer</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>bytes_returned</name>\n        <type>Integer</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>referrer</name>\n        <type>String</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif>-</nullif>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>user_agent</name>\n        <type>String</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif>-</nullif>\n        <ifnull>Unknown</ifnull>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>firefox_gecko_version</name>\n        
<type>String</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>firefox_gecko_version_major</name>\n        <type>String</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>firefox_gecko_version_minor</name>\n        <type>String</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>firefox_gecko_version_a</name>\n        <type>String</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n      <field>\n        <name>firefox_gecko_version_b</name>\n        <type>String</type>\n        <format/>\n        <group/>\n        <decimal/>\n        <length>-1</length>\n        <precision>-1</precision>\n        <nullif/>\n        <ifnull/>\n        <trimtype>none</trimtype>\n        <currency/>\n      </field>\n    </fields>\n    <cluster_schema/>\n    <remotesteps>\n      <input>\n      </input>\n      <output>\n      </output>\n    </remotesteps>\n    <GUI>\n      <xloc>299</xloc>\n      <yloc>297</yloc>\n      <draw>Y</draw>\n    </GUI>\n    </step>\n\n  <step_error_handling>\n  </step_error_handling>\n  <slave-step-copy-partition-distribution>\n  </slave-step-copy-partition-distribution>\n  
<slave_transformation>N</slave_transformation>\n</transformation>\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/weblogs-reducer.ktr",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<transformation>\n  <info>\n    <name>weblogs-reducer</name>\n    <description/>\n    <extended_description/>\n    <trans_version/>\n    <trans_type>Normal</trans_type>\n    <trans_status>0</trans_status>\n    <directory>&#x2f;</directory>\n    <parameters>\n    </parameters>\n    <log>\n      <trans-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <size_limit_lines/>\n        <interval/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>TRANSNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          <id>STATUS</id>\n          <enabled>Y</enabled>\n          <name>STATUS</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n          <subject/>\n        </field>\n        
<field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>STARTDATE</id>\n          <enabled>Y</enabled>\n          <name>STARTDATE</name>\n        </field>\n        <field>\n          <id>ENDDATE</id>\n          <enabled>Y</enabled>\n          <name>ENDDATE</name>\n        </field>\n        <field>\n          <id>LOGDATE</id>\n          <enabled>Y</enabled>\n          <name>LOGDATE</name>\n        </field>\n        <field>\n          <id>DEPDATE</id>\n          <enabled>Y</enabled>\n          <name>DEPDATE</name>\n        </field>\n        <field>\n          <id>REPLAYDATE</id>\n          <enabled>Y</enabled>\n          <name>REPLAYDATE</name>\n        </field>\n        <field>\n          <id>LOG_FIELD</id>\n          <enabled>Y</enabled>\n          <name>LOG_FIELD</name>\n        </field>\n        <field>\n          <id>EXECUTING_SERVER</id>\n          <enabled>N</enabled>\n          <name>EXECUTING_SERVER</name>\n        </field>\n        <field>\n          <id>EXECUTING_USER</id>\n          <enabled>N</enabled>\n          <name>EXECUTING_USER</name>\n        </field>\n        <field>\n          <id>CLIENT</id>\n          <enabled>N</enabled>\n          <name>CLIENT</name>\n        </field>\n      </trans-log-table>\n      <perf-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <interval/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>SEQ_NR</id>\n          <enabled>Y</enabled>\n          <name>SEQ_NR</name>\n        </field>\n        <field>\n          <id>LOGDATE</id>\n          <enabled>Y</enabled>\n          <name>LOGDATE</name>\n        </field>\n        <field>\n          <id>TRANSNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          
<id>STEPNAME</id>\n          <enabled>Y</enabled>\n          <name>STEPNAME</name>\n        </field>\n        <field>\n          <id>STEP_COPY</id>\n          <enabled>Y</enabled>\n          <name>STEP_COPY</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n        </field>\n        <field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>INPUT_BUFFER_ROWS</id>\n          <enabled>Y</enabled>\n          <name>INPUT_BUFFER_ROWS</name>\n        </field>\n        <field>\n          <id>OUTPUT_BUFFER_ROWS</id>\n          <enabled>Y</enabled>\n          <name>OUTPUT_BUFFER_ROWS</name>\n        </field>\n      </perf-log-table>\n      <channel-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        
</field>\n        <field>\n          <id>LOGGING_OBJECT_TYPE</id>\n          <enabled>Y</enabled>\n          <name>LOGGING_OBJECT_TYPE</name>\n        </field>\n        <field>\n          <id>OBJECT_NAME</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_NAME</name>\n        </field>\n        <field>\n          <id>OBJECT_COPY</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_COPY</name>\n        </field>\n        <field>\n          <id>REPOSITORY_DIRECTORY</id>\n          <enabled>Y</enabled>\n          <name>REPOSITORY_DIRECTORY</name>\n        </field>\n        <field>\n          <id>FILENAME</id>\n          <enabled>Y</enabled>\n          <name>FILENAME</name>\n        </field>\n        <field>\n          <id>OBJECT_ID</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_ID</name>\n        </field>\n        <field>\n          <id>OBJECT_REVISION</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_REVISION</name>\n        </field>\n        <field>\n          <id>PARENT_CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>PARENT_CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>ROOT_CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>ROOT_CHANNEL_ID</name>\n        </field>\n      </channel-log-table>\n      <step-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        </field>\n        <field>\n          <id>TRANSNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          <id>STEPNAME</id>\n          
<enabled>Y</enabled>\n          <name>STEPNAME</name>\n        </field>\n        <field>\n          <id>STEP_COPY</id>\n          <enabled>Y</enabled>\n          <name>STEP_COPY</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n        </field>\n        <field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>LOG_FIELD</id>\n          <enabled>N</enabled>\n          <name>LOG_FIELD</name>\n        </field>\n      </step-log-table>\n      <metrics-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        </field>\n        <field>\n          <id>METRICS_DATE</id>\n          <enabled>Y</enabled>\n          <name>METRICS_DATE</name>\n        </field>\n        <field>\n          <id>METRICS_CODE</id>\n 
         <enabled>Y</enabled>\n          <name>METRICS_CODE</name>\n        </field>\n        <field>\n          <id>METRICS_DESCRIPTION</id>\n          <enabled>Y</enabled>\n          <name>METRICS_DESCRIPTION</name>\n        </field>\n        <field>\n          <id>METRICS_SUBJECT</id>\n          <enabled>Y</enabled>\n          <name>METRICS_SUBJECT</name>\n        </field>\n        <field>\n          <id>METRICS_TYPE</id>\n          <enabled>Y</enabled>\n          <name>METRICS_TYPE</name>\n        </field>\n        <field>\n          <id>METRICS_VALUE</id>\n          <enabled>Y</enabled>\n          <name>METRICS_VALUE</name>\n        </field>\n      </metrics-log-table>\n    </log>\n    <maxdate>\n      <connection/>\n      <table/>\n      <field/>\n      <offset>0.0</offset>\n      <maxdiff>0.0</maxdiff>\n    </maxdate>\n    <size_rowset>10000</size_rowset>\n    <sleep_time_empty>50</sleep_time_empty>\n    <sleep_time_full>50</sleep_time_full>\n    <unique_connections>N</unique_connections>\n    <feedback_shown>Y</feedback_shown>\n    <feedback_size>50000</feedback_size>\n    <using_thread_priorities>Y</using_thread_priorities>\n    <shared_objects_file/>\n    <capture_step_performance>N</capture_step_performance>\n    <step_performance_capturing_delay>1000</step_performance_capturing_delay>\n    <step_performance_capturing_size_limit>100</step_performance_capturing_size_limit>\n    <dependencies>\n    </dependencies>\n    <partitionschemas>\n    </partitionschemas>\n    <slaveservers>\n    </slaveservers>\n    <clusterschemas>\n      <clusterschema>\n        <name>clusterTest</name>\n        <base_port>40000</base_port>\n        <sockets_buffer_size>2000</sockets_buffer_size>\n        <sockets_flush_interval>5000</sockets_flush_interval>\n        <sockets_compressed>Y</sockets_compressed>\n        <dynamic>Y</dynamic>\n        <slaveservers>\n        </slaveservers>\n      </clusterschema>\n    </clusterschemas>\n    <created_user/>\n    
<created_date>2010&#x2f;08&#x2f;16 14&#x3a;33&#x3a;33.360</created_date>\n    <modified_user>-</modified_user>\n    <modified_date>2010&#x2f;07&#x2f;16 09&#x3a;23&#x3a;42.406</modified_date>\n    <key_for_session_key>H4sIAAAAAAAAAAMAAAAAAAAAAAA&#x3d;</key_for_session_key>\n    <is_key_private>N</is_key_private>\n  </info>\n  <notepads>\n    <notepad>\n      <note>Hadoop will pass data into this step based on the &#xa;input format defined in the Pentaho MapReduce entry&#xa;used to run this reducer.  We are expecting&#x3a;&#xa;&#xa;&#x28;monthYear, client_ip&#x29;&#xa;&#xa;from the previously parsed log line.</note>\n      <xloc>10</xloc>\n      <yloc>45</yloc>\n      <width>332</width>\n      <heigth>113</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>This step performs the &#xa;counting of HTTP methods. 
&#xa;</note>\n      <xloc>341</xloc>\n      <yloc>266</yloc>\n      <width>178</width>\n      <heigth>49</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>Data that flows out of this step will become the output of &#xa;the reducer running this transformation. The output here&#xa;is considered reduced &#x28;every occurrence of a key &#x28;yearMonth&#x29; &#xa;will be tallied up and passed on&#x29;.</note>\n      <xloc>475</xloc>\n      <yloc>80</yloc>\n      <width>377</width>\n      <heigth>69</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n  </notepads>\n  <order>\n    <hop>\n      <from>Hadoop Input</from>\n      <to>Group by</to>\n      <enabled>Y</enabled>\n    </hop>\n    <hop>\n      <from>Group by</from>\n      <to>Hadoop Output</to>\n      <enabled>Y</enabled>\n    </hop>\n  </order>\n  <step>\n    
<name>Group by</name>\n    <type>GroupBy</type>\n    <description/>\n    <distribute>N</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n      <method>none</method>\n      <schema_name/>\n    </partitioning>\n      <all_rows>N</all_rows>\n      <ignore_aggregate>N</ignore_aggregate>\n      <field_ignore/>\n      <directory>&#x25;&#x25;java.io.tmpdir&#x25;&#x25;</directory>\n      <prefix>grp</prefix>\n      <add_linenr>N</add_linenr>\n      <linenr_fieldname/>\n      <give_back_row>N</give_back_row>\n      <group>\n        <field>\n          <name>key</name>\n        </field>\n      </group>\n      <fields>\n        <field>\n          <aggregate>sum</aggregate>\n          <subject>value</subject>\n          <type>COUNT_ALL</type>\n          <valuefield/>\n        </field>\n      </fields>\n    <cluster_schema/>\n    <remotesteps>\n      <input>\n      </input>\n      <output>\n      </output>\n    </remotesteps>\n    <GUI>\n      <xloc>409</xloc>\n      <yloc>181</yloc>\n      <draw>Y</draw>\n    </GUI>\n    </step>\n\n  <step>\n    <name>Hadoop Input</name>\n    <type>HadoopEnterPlugin</type>\n    <description/>\n    <distribute>Y</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n      <method>none</method>\n      <schema_name/>\n    </partitioning>\n    <fields>      <field>        <name>key</name>\n        <type>String</type>\n        <length>0</length>\n        <precision>0</precision>\n      </field>      <field>        <name>value</name>\n        <type>String</type>\n        <length>0</length>\n        <precision>0</precision>\n      </field>    </fields>    <cluster_schema/>\n    <remotesteps>\n      <input>\n      </input>\n      <output>\n      </output>\n    </remotesteps>\n    <GUI>\n      <xloc>169</xloc>\n      <yloc>181</yloc>\n      <draw>Y</draw>\n    </GUI>\n    </step>\n\n  <step>\n    <name>Hadoop Output</name>\n    <type>HadoopExitPlugin</type>\n    <description/>\n    
<distribute>Y</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n      <method>none</method>\n      <schema_name/>\n    </partitioning>\n    <outkeyfieldname>key</outkeyfieldname>\n    <outvaluefieldname>sum</outvaluefieldname>\n    <cluster_schema/>\n    <remotesteps>\n      <input>\n      </input>\n      <output>\n      </output>\n    </remotesteps>\n    <GUI>\n      <xloc>649</xloc>\n      <yloc>181</yloc>\n      <draw>Y</draw>\n    </GUI>\n    </step>\n\n  <step_error_handling>\n  </step_error_handling>\n  <slave-step-copy-partition-distribution>\n  </slave-step-copy-partition-distribution>\n  <slave_transformation>N</slave_transformation>\n</transformation>\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/wordcount-mapper.ktr",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<transformation>\n  <info>\n    <name>wordcount-mapper</name>\n    <description/>\n    <extended_description/>\n    <trans_version/>\n    <trans_type>Normal</trans_type>\n    <trans_status>0</trans_status>\n    <directory>&#x2f;</directory>\n    <parameters>\n    </parameters>\n    <log>\n      <trans-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <size_limit_lines/>\n        <interval/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>TRANSNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          <id>STATUS</id>\n          <enabled>Y</enabled>\n          <name>STATUS</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n          <subject/>\n        </field>\n        
<field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>STARTDATE</id>\n          <enabled>Y</enabled>\n          <name>STARTDATE</name>\n        </field>\n        <field>\n          <id>ENDDATE</id>\n          <enabled>Y</enabled>\n          <name>ENDDATE</name>\n        </field>\n        <field>\n          <id>LOGDATE</id>\n          <enabled>Y</enabled>\n          <name>LOGDATE</name>\n        </field>\n        <field>\n          <id>DEPDATE</id>\n          <enabled>Y</enabled>\n          <name>DEPDATE</name>\n        </field>\n        <field>\n          <id>REPLAYDATE</id>\n          <enabled>Y</enabled>\n          <name>REPLAYDATE</name>\n        </field>\n        <field>\n          <id>LOG_FIELD</id>\n          <enabled>Y</enabled>\n          <name>LOG_FIELD</name>\n        </field>\n        <field>\n          <id>EXECUTING_SERVER</id>\n          <enabled>N</enabled>\n          <name>EXECUTING_SERVER</name>\n        </field>\n        <field>\n          <id>EXECUTING_USER</id>\n          <enabled>N</enabled>\n          <name>EXECUTING_USER</name>\n        </field>\n        <field>\n          <id>CLIENT</id>\n          <enabled>N</enabled>\n          <name>CLIENT</name>\n        </field>\n      </trans-log-table>\n      <perf-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <interval/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>SEQ_NR</id>\n          <enabled>Y</enabled>\n          <name>SEQ_NR</name>\n        </field>\n        <field>\n          <id>LOGDATE</id>\n          <enabled>Y</enabled>\n          <name>LOGDATE</name>\n        </field>\n        <field>\n          <id>TRANSNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          
<id>STEPNAME</id>\n          <enabled>Y</enabled>\n          <name>STEPNAME</name>\n        </field>\n        <field>\n          <id>STEP_COPY</id>\n          <enabled>Y</enabled>\n          <name>STEP_COPY</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n        </field>\n        <field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>INPUT_BUFFER_ROWS</id>\n          <enabled>Y</enabled>\n          <name>INPUT_BUFFER_ROWS</name>\n        </field>\n        <field>\n          <id>OUTPUT_BUFFER_ROWS</id>\n          <enabled>Y</enabled>\n          <name>OUTPUT_BUFFER_ROWS</name>\n        </field>\n      </perf-log-table>\n      <channel-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        
</field>\n        <field>\n          <id>LOGGING_OBJECT_TYPE</id>\n          <enabled>Y</enabled>\n          <name>LOGGING_OBJECT_TYPE</name>\n        </field>\n        <field>\n          <id>OBJECT_NAME</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_NAME</name>\n        </field>\n        <field>\n          <id>OBJECT_COPY</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_COPY</name>\n        </field>\n        <field>\n          <id>REPOSITORY_DIRECTORY</id>\n          <enabled>Y</enabled>\n          <name>REPOSITORY_DIRECTORY</name>\n        </field>\n        <field>\n          <id>FILENAME</id>\n          <enabled>Y</enabled>\n          <name>FILENAME</name>\n        </field>\n        <field>\n          <id>OBJECT_ID</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_ID</name>\n        </field>\n        <field>\n          <id>OBJECT_REVISION</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_REVISION</name>\n        </field>\n        <field>\n          <id>PARENT_CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>PARENT_CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>ROOT_CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>ROOT_CHANNEL_ID</name>\n        </field>\n      </channel-log-table>\n      <step-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        </field>\n        <field>\n          <id>TRANSNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          <id>STEPNAME</id>\n          
<enabled>Y</enabled>\n          <name>STEPNAME</name>\n        </field>\n        <field>\n          <id>STEP_COPY</id>\n          <enabled>Y</enabled>\n          <name>STEP_COPY</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n        </field>\n        <field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>LOG_FIELD</id>\n          <enabled>N</enabled>\n          <name>LOG_FIELD</name>\n        </field>\n      </step-log-table>\n      <metrics-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        </field>\n        <field>\n          <id>METRICS_DATE</id>\n          <enabled>Y</enabled>\n          <name>METRICS_DATE</name>\n        </field>\n        <field>\n          <id>METRICS_CODE</id>\n 
         <enabled>Y</enabled>\n          <name>METRICS_CODE</name>\n        </field>\n        <field>\n          <id>METRICS_DESCRIPTION</id>\n          <enabled>Y</enabled>\n          <name>METRICS_DESCRIPTION</name>\n        </field>\n        <field>\n          <id>METRICS_SUBJECT</id>\n          <enabled>Y</enabled>\n          <name>METRICS_SUBJECT</name>\n        </field>\n        <field>\n          <id>METRICS_TYPE</id>\n          <enabled>Y</enabled>\n          <name>METRICS_TYPE</name>\n        </field>\n        <field>\n          <id>METRICS_VALUE</id>\n          <enabled>Y</enabled>\n          <name>METRICS_VALUE</name>\n        </field>\n      </metrics-log-table>\n    </log>\n    <maxdate>\n      <connection/>\n      <table/>\n      <field/>\n      <offset>0.0</offset>\n      <maxdiff>0.0</maxdiff>\n    </maxdate>\n    <size_rowset>10000</size_rowset>\n    <sleep_time_empty>50</sleep_time_empty>\n    <sleep_time_full>50</sleep_time_full>\n    <unique_connections>N</unique_connections>\n    <feedback_shown>Y</feedback_shown>\n    <feedback_size>50000</feedback_size>\n    <using_thread_priorities>Y</using_thread_priorities>\n    <shared_objects_file/>\n    <capture_step_performance>N</capture_step_performance>\n    <step_performance_capturing_delay>1000</step_performance_capturing_delay>\n    <step_performance_capturing_size_limit>100</step_performance_capturing_size_limit>\n    <dependencies>\n    </dependencies>\n    <partitionschemas>\n    </partitionschemas>\n    <slaveservers>\n    </slaveservers>\n    <clusterschemas>\n      <clusterschema>\n        <name>clusterTest</name>\n        <base_port>40000</base_port>\n        <sockets_buffer_size>2000</sockets_buffer_size>\n        <sockets_flush_interval>5000</sockets_flush_interval>\n        <sockets_compressed>Y</sockets_compressed>\n        <dynamic>Y</dynamic>\n        <slaveservers>\n        </slaveservers>\n      </clusterschema>\n    </clusterschemas>\n    <created_user/>\n    
<created_date>2010&#x2f;10&#x2f;25 08&#x3a;45&#x3a;28.015</created_date>\n    <modified_user>-</modified_user>\n    <modified_date>2010&#x2f;07&#x2f;15 10&#x3a;12&#x3a;26.133</modified_date>\n    <key_for_session_key>H4sIAAAAAAAAAAMAAAAAAAAAAAA&#x3d;</key_for_session_key>\n    <is_key_private>N</is_key_private>\n  </info>\n  <notepads>\n    <notepad>\n      <note>Hadoop will pass data into this step based on the &#xa;format defined in the Pentaho MapReduce entry&#xa;used to run this mapper.  For this example we&#xa;assume the input will be&#x3a;&#xa;&#xa;&#x28;hadoop-generated key, line from text file&#x29;</note>\n      <xloc>10</xloc>\n      <yloc>50</yloc>\n      <width>302</width>\n      <heigth>98</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>In this particular example, each &#x22;row&#x22; contains &#xa;several words which we want to split so they &#xa;may be individually counted.</note>\n      <xloc>37</xloc>\n      <yloc>335</yloc>\n      <width>286</width>\n      <heigth>54</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      
<backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>Each word that we split will have &#xa;a default count value of 1.</note>\n      <xloc>481</xloc>\n      <yloc>342</yloc>\n      <width>207</width>\n      <heigth>39</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>Define the output of this Transformation as the word we&#x27;ve split&#xa;from the row and the count of 1 &#x28;it&#x27;s only 1 word after all&#x29;.&#xa;These key-value pairs are passed to the reducer where they are&#xa;grouped by the output key and tallied up.</note>\n      <xloc>562</xloc>\n      <yloc>67</yloc>\n      <width>384</width>\n      <heigth>69</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      
<bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n  </notepads>\n  <order>\n    <hop>\n      <from>Split words to rows</from>\n      <to>Add value</to>\n      <enabled>Y</enabled>\n    </hop>\n    <hop>\n      <from>Hadoop Input</from>\n      <to>Split words to rows</to>\n      <enabled>Y</enabled>\n    </hop>\n    <hop>\n      <from>Add value</from>\n      <to>Hadoop Output</to>\n      <enabled>Y</enabled>\n    </hop>\n  </order>\n  <step>\n    <name>Add value</name>\n    <type>Constant</type>\n    <description/>\n    <distribute>Y</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n      <method>none</method>\n      <schema_name/>\n    </partitioning>\n    <fields>\n      <field>\n        <name>count</name>\n        <type>Integer</type>\n        <format/>\n        <currency/>\n        <decimal/>\n        <group/>\n        <nullif>1</nullif>\n        <length>-1</length>\n        <precision>-1</precision>\n        <set_empty_string>N</set_empty_string>\n      </field>\n    </fields>\n    <cluster_schema/>\n    <remotesteps>\n      <input>\n      </input>\n      <output>\n      </output>\n    </remotesteps>\n    <GUI>\n      <xloc>490</xloc>\n      <yloc>266</yloc>\n      <draw>Y</draw>\n    </GUI>\n    </step>\n\n  <step>\n    <name>Hadoop Input</name>\n    <type>HadoopEnterPlugin</type>\n    <description/>\n    <distribute>Y</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n      <method>none</method>\n      <schema_name/>\n    </partitioning>\n    <fields>      <field>        <name>key</name>\n        <type>String</type>\n        <length>0</length>\n        <precision>0</precision>\n      </field>      <field>        <name>value</name>\n        <type>String</type>\n        <length>0</length>\n        <precision>0</precision>\n      </field>    </fields>    <cluster_schema/>\n    <remotesteps>\n      <input>\n      </input>\n      <output>\n      
</output>\n    </remotesteps>\n    <GUI>\n      <xloc>100</xloc>\n      <yloc>170</yloc>\n      <draw>Y</draw>\n    </GUI>\n    </step>\n\n  <step>\n    <name>Hadoop Output</name>\n    <type>HadoopExitPlugin</type>\n    <description/>\n    <distribute>Y</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n      <method>none</method>\n      <schema_name/>\n    </partitioning>\n    <outkeyfieldname>word</outkeyfieldname>\n    <outvaluefieldname>count</outvaluefieldname>\n    <cluster_schema/>\n    <remotesteps>\n      <input>\n      </input>\n      <output>\n      </output>\n    </remotesteps>\n    <GUI>\n      <xloc>685</xloc>\n      <yloc>170</yloc>\n      <draw>Y</draw>\n    </GUI>\n    </step>\n\n  <step>\n    <name>Split words to rows</name>\n    <type>SplitFieldToRows3</type>\n    <description/>\n    <distribute>Y</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n      <method>none</method>\n      <schema_name/>\n    </partitioning>\n   <splitfield>value</splitfield>\n   <delimiter> </delimiter>\n   <newfield>word</newfield>\n   <rownum>N</rownum>\n   <rownum_field/>\n   <resetrownumber>Y</resetrownumber>\n   <delimiter_is_regex>N</delimiter_is_regex>\n    <cluster_schema/>\n    <remotesteps>\n      <input>\n      </input>\n      <output>\n      </output>\n    </remotesteps>\n    <GUI>\n      <xloc>295</xloc>\n      <yloc>266</yloc>\n      <draw>Y</draw>\n    </GUI>\n    </step>\n\n  <step_error_handling>\n  </step_error_handling>\n  <slave-step-copy-partition-distribution>\n  </slave-step-copy-partition-distribution>\n  <slave_transformation>N</slave_transformation>\n</transformation>\n"
  },
  {
    "path": "assemblies/samples/src/main/resources/jobs/hadoop/wordcount-reducer.ktr",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<transformation>\n  <info>\n    <name>wordcount-reducer</name>\n    <description/>\n    <extended_description/>\n    <trans_version/>\n    <trans_type>Normal</trans_type>\n    <trans_status>0</trans_status>\n    <directory>&#x2f;</directory>\n    <parameters>\n    </parameters>\n    <log>\n      <trans-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <size_limit_lines/>\n        <interval/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>TRANSNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          <id>STATUS</id>\n          <enabled>Y</enabled>\n          <name>STATUS</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n          <subject/>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n          <subject/>\n        </field>\n        
<field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>STARTDATE</id>\n          <enabled>Y</enabled>\n          <name>STARTDATE</name>\n        </field>\n        <field>\n          <id>ENDDATE</id>\n          <enabled>Y</enabled>\n          <name>ENDDATE</name>\n        </field>\n        <field>\n          <id>LOGDATE</id>\n          <enabled>Y</enabled>\n          <name>LOGDATE</name>\n        </field>\n        <field>\n          <id>DEPDATE</id>\n          <enabled>Y</enabled>\n          <name>DEPDATE</name>\n        </field>\n        <field>\n          <id>REPLAYDATE</id>\n          <enabled>Y</enabled>\n          <name>REPLAYDATE</name>\n        </field>\n        <field>\n          <id>LOG_FIELD</id>\n          <enabled>Y</enabled>\n          <name>LOG_FIELD</name>\n        </field>\n        <field>\n          <id>EXECUTING_SERVER</id>\n          <enabled>N</enabled>\n          <name>EXECUTING_SERVER</name>\n        </field>\n        <field>\n          <id>EXECUTING_USER</id>\n          <enabled>N</enabled>\n          <name>EXECUTING_USER</name>\n        </field>\n        <field>\n          <id>CLIENT</id>\n          <enabled>N</enabled>\n          <name>CLIENT</name>\n        </field>\n      </trans-log-table>\n      <perf-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <interval/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>SEQ_NR</id>\n          <enabled>Y</enabled>\n          <name>SEQ_NR</name>\n        </field>\n        <field>\n          <id>LOGDATE</id>\n          <enabled>Y</enabled>\n          <name>LOGDATE</name>\n        </field>\n        <field>\n          <id>TRANSNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          
<id>STEPNAME</id>\n          <enabled>Y</enabled>\n          <name>STEPNAME</name>\n        </field>\n        <field>\n          <id>STEP_COPY</id>\n          <enabled>Y</enabled>\n          <name>STEP_COPY</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n        </field>\n        <field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>INPUT_BUFFER_ROWS</id>\n          <enabled>Y</enabled>\n          <name>INPUT_BUFFER_ROWS</name>\n        </field>\n        <field>\n          <id>OUTPUT_BUFFER_ROWS</id>\n          <enabled>Y</enabled>\n          <name>OUTPUT_BUFFER_ROWS</name>\n        </field>\n      </perf-log-table>\n      <channel-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        
</field>\n        <field>\n          <id>LOGGING_OBJECT_TYPE</id>\n          <enabled>Y</enabled>\n          <name>LOGGING_OBJECT_TYPE</name>\n        </field>\n        <field>\n          <id>OBJECT_NAME</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_NAME</name>\n        </field>\n        <field>\n          <id>OBJECT_COPY</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_COPY</name>\n        </field>\n        <field>\n          <id>REPOSITORY_DIRECTORY</id>\n          <enabled>Y</enabled>\n          <name>REPOSITORY_DIRECTORY</name>\n        </field>\n        <field>\n          <id>FILENAME</id>\n          <enabled>Y</enabled>\n          <name>FILENAME</name>\n        </field>\n        <field>\n          <id>OBJECT_ID</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_ID</name>\n        </field>\n        <field>\n          <id>OBJECT_REVISION</id>\n          <enabled>Y</enabled>\n          <name>OBJECT_REVISION</name>\n        </field>\n        <field>\n          <id>PARENT_CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>PARENT_CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>ROOT_CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>ROOT_CHANNEL_ID</name>\n        </field>\n      </channel-log-table>\n      <step-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        </field>\n        <field>\n          <id>TRANSNAME</id>\n          <enabled>Y</enabled>\n          <name>TRANSNAME</name>\n        </field>\n        <field>\n          <id>STEPNAME</id>\n          
<enabled>Y</enabled>\n          <name>STEPNAME</name>\n        </field>\n        <field>\n          <id>STEP_COPY</id>\n          <enabled>Y</enabled>\n          <name>STEP_COPY</name>\n        </field>\n        <field>\n          <id>LINES_READ</id>\n          <enabled>Y</enabled>\n          <name>LINES_READ</name>\n        </field>\n        <field>\n          <id>LINES_WRITTEN</id>\n          <enabled>Y</enabled>\n          <name>LINES_WRITTEN</name>\n        </field>\n        <field>\n          <id>LINES_UPDATED</id>\n          <enabled>Y</enabled>\n          <name>LINES_UPDATED</name>\n        </field>\n        <field>\n          <id>LINES_INPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_INPUT</name>\n        </field>\n        <field>\n          <id>LINES_OUTPUT</id>\n          <enabled>Y</enabled>\n          <name>LINES_OUTPUT</name>\n        </field>\n        <field>\n          <id>LINES_REJECTED</id>\n          <enabled>Y</enabled>\n          <name>LINES_REJECTED</name>\n        </field>\n        <field>\n          <id>ERRORS</id>\n          <enabled>Y</enabled>\n          <name>ERRORS</name>\n        </field>\n        <field>\n          <id>LOG_FIELD</id>\n          <enabled>N</enabled>\n          <name>LOG_FIELD</name>\n        </field>\n      </step-log-table>\n      <metrics-log-table>\n        <connection/>\n        <schema/>\n        <table/>\n        <timeout_days/>\n        <field>\n          <id>ID_BATCH</id>\n          <enabled>Y</enabled>\n          <name>ID_BATCH</name>\n        </field>\n        <field>\n          <id>CHANNEL_ID</id>\n          <enabled>Y</enabled>\n          <name>CHANNEL_ID</name>\n        </field>\n        <field>\n          <id>LOG_DATE</id>\n          <enabled>Y</enabled>\n          <name>LOG_DATE</name>\n        </field>\n        <field>\n          <id>METRICS_DATE</id>\n          <enabled>Y</enabled>\n          <name>METRICS_DATE</name>\n        </field>\n        <field>\n          <id>METRICS_CODE</id>\n 
         <enabled>Y</enabled>\n          <name>METRICS_CODE</name>\n        </field>\n        <field>\n          <id>METRICS_DESCRIPTION</id>\n          <enabled>Y</enabled>\n          <name>METRICS_DESCRIPTION</name>\n        </field>\n        <field>\n          <id>METRICS_SUBJECT</id>\n          <enabled>Y</enabled>\n          <name>METRICS_SUBJECT</name>\n        </field>\n        <field>\n          <id>METRICS_TYPE</id>\n          <enabled>Y</enabled>\n          <name>METRICS_TYPE</name>\n        </field>\n        <field>\n          <id>METRICS_VALUE</id>\n          <enabled>Y</enabled>\n          <name>METRICS_VALUE</name>\n        </field>\n      </metrics-log-table>\n    </log>\n    <maxdate>\n      <connection/>\n      <table/>\n      <field/>\n      <offset>0.0</offset>\n      <maxdiff>0.0</maxdiff>\n    </maxdate>\n    <size_rowset>10000</size_rowset>\n    <sleep_time_empty>50</sleep_time_empty>\n    <sleep_time_full>50</sleep_time_full>\n    <unique_connections>N</unique_connections>\n    <feedback_shown>Y</feedback_shown>\n    <feedback_size>50000</feedback_size>\n    <using_thread_priorities>Y</using_thread_priorities>\n    <shared_objects_file/>\n    <capture_step_performance>N</capture_step_performance>\n    <step_performance_capturing_delay>1000</step_performance_capturing_delay>\n    <step_performance_capturing_size_limit>100</step_performance_capturing_size_limit>\n    <dependencies>\n    </dependencies>\n    <partitionschemas>\n    </partitionschemas>\n    <slaveservers>\n    </slaveservers>\n    <clusterschemas>\n    </clusterschemas>\n    <created_user/>\n    <created_date>2010&#x2f;08&#x2f;12 12&#x3a;40&#x3a;09.759</created_date>\n    <modified_user>-</modified_user>\n    <modified_date>2010&#x2f;07&#x2f;16 09&#x3a;23&#x3a;42.406</modified_date>\n    <key_for_session_key>H4sIAAAAAAAAAAMAAAAAAAAAAAA&#x3d;</key_for_session_key>\n    <is_key_private>N</is_key_private>\n  </info>\n  <notepads>\n    <notepad>\n      <note>This step serves as an 
injection point for &#xa;our GenericTransReducer to add rows of &#xa;data to the transformation.</note>\n      <xloc>30</xloc>\n      <yloc>58</yloc>\n      <width>255</width>\n      <heigth>54</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>This step performs the actual &#xa;counting for WordCount.</note>\n      <xloc>218</xloc>\n      <yloc>222</yloc>\n      <width>188</width>\n      <heigth>39</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n    <notepad>\n      <note>A simple &#x22;Dummy&#x22; step used to listen to rows &#xa;as they are generated by the transformation for the &#xa;GenericTransReducer.  
Output here is considered&#xa;reduced, as every occurrence of a word will be&#xa;tallied up and passed on.</note>\n      <xloc>518</xloc>\n      <yloc>26</yloc>\n      <width>312</width>\n      <heigth>84</heigth>\n      <fontname>Microsoft Sans Serif</fontname>\n      <fontsize>8</fontsize>\n      <fontbold>N</fontbold>\n      <fontitalic>N</fontitalic>\n      <fontcolorred>0</fontcolorred>\n      <fontcolorgreen>0</fontcolorgreen>\n      <fontcolorblue>0</fontcolorblue>\n      <backgroundcolorred>255</backgroundcolorred>\n      <backgroundcolorgreen>255</backgroundcolorgreen>\n      <backgroundcolorblue>0</backgroundcolorblue>\n      <bordercolorred>100</bordercolorred>\n      <bordercolorgreen>100</bordercolorgreen>\n      <bordercolorblue>100</bordercolorblue>\n      <drawshadow>Y</drawshadow>\n    </notepad>\n  </notepads>\n  <order>\n    <hop>\n      <from>Hadoop Input</from>\n      <to>Group by</to>\n      <enabled>Y</enabled>\n    </hop>\n    <hop>\n      <from>Group by</from>\n      <to>Hadoop Output</to>\n      <enabled>Y</enabled>\n    </hop>\n  </order>\n  <step>\n    <name>Group by</name>\n    <type>GroupBy</type>\n    <description/>\n    <distribute>N</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n      <method>none</method>\n      <schema_name/>\n    </partitioning>\n      <all_rows>N</all_rows>\n      <ignore_aggregate>N</ignore_aggregate>\n      <field_ignore/>\n      <directory>&#x25;&#x25;java.io.tmpdir&#x25;&#x25;</directory>\n      <prefix>grp</prefix>\n      <add_linenr>N</add_linenr>\n      <linenr_fieldname/>\n      <give_back_row>N</give_back_row>\n      <group>\n        <field>\n          <name>key</name>\n        </field>\n      </group>\n      <fields>\n        <field>\n          <aggregate>sum</aggregate>\n          <subject>value</subject>\n          <type>SUM</type>\n          <valuefield/>\n        </field>\n      </fields>\n    <cluster_schema/>\n    <remotesteps>\n      <input>\n      
</input>\n      <output>\n      </output>\n    </remotesteps>\n    <GUI>\n      <xloc>302</xloc>\n      <yloc>145</yloc>\n      <draw>Y</draw>\n    </GUI>\n    </step>\n\n  <step>\n    <name>Hadoop Input</name>\n    <type>HadoopEnterPlugin</type>\n    <description/>\n    <distribute>Y</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n      <method>none</method>\n      <schema_name/>\n    </partitioning>\n    <fields>      <field>        <name>key</name>\n        <type>String</type>\n        <length>0</length>\n        <precision>2</precision>\n      </field>      <field>        <name>value</name>\n        <type>Integer</type>\n        <length>0</length>\n        <precision>5</precision>\n      </field>    </fields>    <cluster_schema/>\n    <remotesteps>\n      <input>\n      </input>\n      <output>\n      </output>\n    </remotesteps>\n    <GUI>\n      <xloc>155</xloc>\n      <yloc>143</yloc>\n      <draw>Y</draw>\n    </GUI>\n    </step>\n\n  <step>\n    <name>Hadoop Output</name>\n    <type>HadoopExitPlugin</type>\n    <description/>\n    <distribute>Y</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n      <method>none</method>\n      <schema_name/>\n    </partitioning>\n    <outkeyfieldname>key</outkeyfieldname>\n    <outvaluefieldname>sum</outvaluefieldname>\n    <cluster_schema/>\n    <remotesteps>\n      <input>\n      </input>\n      <output>\n      </output>\n    </remotesteps>\n    <GUI>\n      <xloc>655</xloc>\n      <yloc>146</yloc>\n      <draw>Y</draw>\n    </GUI>\n    </step>\n\n  <step_error_handling>\n  </step_error_handling>\n  <slave-step-copy-partition-distribution>\n  </slave-step-copy-partition-distribution>\n  <slave_transformation>N</slave_transformation>\n</transformation>\n"
  },
  {
    "path": "authentication-mapper/api/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-authentication-mapper-parent</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-authentication-mapper-api</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>jar</packaging>\n  <build>\n    <plugins>\n      <plugin>\n        <artifactId>maven-jar-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>test-jar</id>\n            <phase>package</phase>\n            <goals>\n              <goal>test-jar</goal>\n            </goals>\n          </execution>\n        </executions>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "authentication-mapper/api/src/main/java/org/pentaho/authentication/mapper/api/AuthenticationMappingManager.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.authentication.mapper.api;\n\n/**\n * @author bryan\n */\npublic interface AuthenticationMappingManager {\n  String RANKING_CONFIG = \"service.ranking\";\n\n  <InputType, OutputType> OutputType getMapping( Class<InputType> inputType, InputType input,\n                                                 Class<OutputType> outputType ) throws MappingException;\n}\n"
  },
  {
    "path": "authentication-mapper/api/src/main/java/org/pentaho/authentication/mapper/api/AuthenticationMappingService.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.authentication.mapper.api;\n\nimport java.util.Map;\n\n/**\n * @author bryan\n */\npublic interface AuthenticationMappingService<InputType, OutputType> {\n  String getId();\n\n  Class<? extends InputType> getInputType();\n\n  Class<? extends OutputType> getOutputType();\n\n  boolean accepts( Object input );\n\n  OutputType getMapping( InputType input, Map<String, ?> config ) throws MappingException;\n}\n"
  },
  {
    "path": "authentication-mapper/api/src/main/java/org/pentaho/authentication/mapper/api/MappingException.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.authentication.mapper.api;\n\n/**\n * Created by bryan on 3/18/16.\n */\npublic class MappingException extends Exception {\n  public MappingException() {\n  }\n\n  public MappingException( String message ) {\n    super( message );\n  }\n\n  public MappingException( String message, Throwable cause ) {\n    super( message, cause );\n  }\n\n  public MappingException( Throwable cause ) {\n    super( cause );\n  }\n\n  @FunctionalInterface\n  public interface Function<T, R> {\n    R apply( T t ) throws MappingException;\n  }\n\n  @FunctionalInterface\n  public interface Supplier<R> {\n    R get() throws MappingException;\n  }\n}\n"
  },
  {
    "path": "authentication-mapper/impl/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-authentication-mapper-parent</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-authentication-mapper-impl</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>jar</packaging>\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-authentication-mapper-api</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-api</artifactId>\n    </dependency>\n  </dependencies>\n  <build>\n    <finalName>${project.artifactId}</finalName>\n    <plugins>\n      <plugin>\n        <artifactId>maven-jar-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>test-jar</id>\n            <phase>package</phase>\n            <goals>\n              <goal>test-jar</goal>\n            </goals>\n          </execution>\n        </executions>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "authentication-mapper/impl/src/main/java/org/pentaho/authentication/mapper/impl/AuthenticationMappingManagerImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.authentication.mapper.impl;\n\nimport java.io.IOException;\nimport java.util.Comparator;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Objects;\nimport java.util.Optional;\nimport java.util.TreeSet;\n\nimport org.pentaho.authentication.mapper.api.AuthenticationMappingManager;\nimport org.pentaho.authentication.mapper.api.AuthenticationMappingService;\nimport org.pentaho.authentication.mapper.api.MappingException;\n\nimport com.google.common.collect.Multimaps;\nimport com.google.common.collect.SortedSetMultimap;\n\n/**\n * @author bryan\n */\npublic class AuthenticationMappingManagerImpl implements AuthenticationMappingManager {\n\n  private final SortedSetMultimap<TypePair, RankedAuthService> serviceMap = Multimaps.synchronizedSortedSetMultimap(\n      Multimaps.newSortedSetMultimap( new HashMap<>(), TreeSet::new )\n  );\n\n  public AuthenticationMappingManagerImpl() throws IOException {\n  }\n\n  public AuthenticationMappingManagerImpl( AuthenticationMappingService service ) throws IOException {\n    serviceMap.put( new TypePair( service ), new RankedAuthService( 50, service ) );\n  }\n\n  @Override\n  @SuppressWarnings( \"unchecked\" )\n  public <InputType, OutputType> OutputType getMapping( Class<InputType> inputType, InputType input,\n                                                        Class<OutputType> outputType ) throws MappingException {\n    AuthenticationMappingService<InputType, OutputType> service;\n    synchronized ( serviceMap ) {\n      service = serviceMap.get( new TypePair( inputType, 
outputType ) ).stream()\n        .filter( ( rankedService ) -> rankedService.getService().accepts( input ) )\n        .findFirst()\n        .map( RankedAuthService::getService )\n        .orElse( null );\n    }\n\n    return service != null ? service.getMapping( input, null ) : null;\n  }\n\n  public void onMappingServiceAdded( AuthenticationMappingService service, Map config ) {\n    if ( service == null ) {\n      return;\n    }\n\n    int ranking = Optional.ofNullable( config.get( RANKING_CONFIG ) )\n        .map( String::valueOf ).map( Integer::parseInt ).orElse( 50 );\n\n    serviceMap.put( new TypePair( service ), new RankedAuthService( ranking, service ) );\n  }\n\n  public void onMappingServiceRemoved( AuthenticationMappingService service ) {\n    if ( service == null ) {\n      return;\n    }\n\n    synchronized ( serviceMap ) {\n      serviceMap.get( new TypePair( service ) )\n        .removeIf( rankedAuthService -> rankedAuthService.service.equals( service ) );\n    }\n  }\n\n  private static class TypePair {\n    final Class input, output;\n\n    TypePair( AuthenticationMappingService service ) {\n      this( service.getInputType(), service.getOutputType() );\n    }\n\n    TypePair( Class input, Class output ) {\n      this.input = Objects.requireNonNull( input );\n      this.output = Objects.requireNonNull( output );\n    }\n\n    @Override public boolean equals( Object o ) {\n      if ( this == o ) {\n        return true;\n      }\n      if ( !( o instanceof TypePair ) ) {\n        return false;\n      }\n      TypePair typePair = (TypePair) o;\n      return Objects.equals( input, typePair.input ) && Objects.equals( output, typePair.output );\n    }\n\n    @Override public int hashCode() {\n      return Objects.hash( input, output );\n    }\n\n    @Override public String toString() {\n      return input + \" -> \" + output;\n    }\n  }\n\n  private static class RankedAuthService implements Comparable<RankedAuthService> {\n    final int rank;\n    
final AuthenticationMappingService service;\n\n    RankedAuthService( int rank, AuthenticationMappingService service ) {\n      this.rank = rank;\n      this.service = service;\n    }\n\n    private String getId() {\n      return getService().getId();\n    }\n\n    int getRank() {\n      return rank;\n    }\n\n    AuthenticationMappingService getService() {\n      return service;\n    }\n\n    @Override public String toString() {\n      return \"(\" + rank + \") \" + service;\n    }\n\n    @Override public int compareTo( RankedAuthService o ) {\n      return Comparator\n        .comparingInt( RankedAuthService::getRank ).reversed()\n        .thenComparing( RankedAuthService::getId )\n        .compare( this, o );\n    }\n\n  }\n}\n"
  },
  {
    "path": "authentication-mapper/impl/src/test/java/org/pentaho/authentication/mapper/impl/AuthenticationMappingManagerImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.authentication.mapper.impl;\n\nimport com.google.common.collect.ImmutableMap;\nimport org.junit.Before;\nimport org.junit.Rule;\nimport org.junit.Test;\nimport org.junit.rules.ExpectedException;\nimport org.junit.rules.TemporaryFolder;\nimport org.junit.runner.RunWith;\nimport org.mockito.ArgumentCaptor;\nimport org.mockito.Captor;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.authentication.mapper.api.AuthenticationMappingManager;\nimport org.pentaho.authentication.mapper.api.AuthenticationMappingService;\nimport org.pentaho.authentication.mapper.api.MappingException;\n\nimport java.util.List;\nimport java.util.Map;\n\nimport static org.hamcrest.Matchers.allOf;\nimport static org.hamcrest.Matchers.hasEntry;\nimport static org.hamcrest.Matchers.nullValue;\nimport static org.junit.Assert.assertThat;\n\n/**\n * @author nhudak\n */\n\n@RunWith( MockitoJUnitRunner.class )\npublic class AuthenticationMappingManagerImplTest {\n  @Rule public TemporaryFolder etc = new TemporaryFolder();\n  @Rule public ExpectedException exception = ExpectedException.none();\n  @Captor ArgumentCaptor<Map<String, Object>> mapArgumentCaptor;\n  private AuthenticationMappingManagerImpl manager;\n\n  @Before\n  public void setUp() throws Exception {\n    manager = new AuthenticationMappingManagerImpl();\n  }\n\n  @Test\n  @SuppressWarnings( \"unchecked\" )\n  public void mappingService() throws Exception {\n    // Add mock service\n    TestService service = new TestService( \"cluster_security_mapping_configuration\" );\n    
manager.onMappingServiceAdded( service, ImmutableMap.of( AuthenticationMappingManager.RANKING_CONFIG, 200 ) );\n\n    // Also add decoy services: one with lower (default) priority, one that rejects the input\n    TestService defaultService = new TestService( \"default\" );\n    manager.onMappingServiceAdded( defaultService, ImmutableMap.of() );\n    TestService unused = new TestService( \"unused\" ) {\n      @Override public boolean accepts( Object input ) {\n        return false;\n      }\n    };\n    manager.onMappingServiceAdded( unused, ImmutableMap.of( AuthenticationMappingManager.RANKING_CONFIG, 1000 ) );\n\n    // Service called if input/output match\n    Map<String, ?> result = manager.getMapping( String.class, \"map this\", Map.class );\n\n    assertThat( result, allOf(\n        hasEntry( \"id\", \"cluster_security_mapping_configuration\" ),\n        hasEntry( \"input\", \"map this\" ) )\n    );\n\n    // Remove service, default will be used\n    manager.onMappingServiceRemoved( service );\n    result = manager.getMapping( String.class, \"use the default\", Map.class );\n    assertThat( result, hasEntry( \"id\", \"default\" ) );\n  }\n\n  @Test\n  public void noMappingAvailable() throws Exception {\n    assertThat( manager.getMapping( String.class, \"some value\", List.class ), nullValue() );\n  }\n\n  class TestService implements AuthenticationMappingService<String, Map> {\n\n    final String id;\n\n    TestService( String id ) {\n      this.id = id;\n    }\n\n    @Override public String getId() {\n      return id;\n    }\n\n    @Override public Class<String> getInputType() {\n      return String.class;\n    }\n\n    @Override public Class<Map> getOutputType() {\n      return Map.class;\n    }\n\n    @Override public boolean accepts( Object input ) {\n      return true;\n    }\n\n    @Override public Map getMapping( String input, Map<String, ?> config ) throws MappingException {\n      return ImmutableMap.of( \"id\", id, \"input\", input );\n    }\n  }\n}\n"
  },
  {
    "path": "authentication-mapper/impl/src/test/resources/invalid_mapping.json",
    "content": "{\n  invalid : #\n}"
  },
  {
    "path": "authentication-mapper/impl/src/test/resources/mapping.json",
    "content": "{\n  \"cluster_security_mapping_configuration\": {\n    \"default\": {\n      \"pentaho_server_credentials\": {\n        \"kerberos\": {\n          \"principal\": \"oozie@PENTAHOQA.COM\",\n          \"keytab\": \"/home/bryan/platform-codebase/kerb-integration-2/7.0-SNAPSHOT-218/data-integration-server/oozie.keytab\"\n        }\n      },\n      \"user_impersonation_mapping\": {\n        \"type\": \"simple_mapping\"\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "authentication-mapper/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-parent</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-authentication-mapper-parent</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <developers>\n    <developer>\n      <name>Matt Campbell</name>\n      <email>mcampbell@pentaho.com</email>\n      <roles>\n        <role>developer</role>\n      </roles>\n    </developer>\n    <developer>\n      <name>Nick Hudak</name>\n      <email>nhudak@pentaho.com</email>\n      <roles>\n        <role>developer</role>\n      </roles>\n    </developer>\n    <developer>\n      <name>Bryan Rosander</name>\n      <email>brosander@pentaho.com</email>\n      <roles>\n        <role>developer</role>\n      </roles>\n    </developer>\n  </developers>\n  <modules>\n    <module>api</module>\n    <module>impl</module>\n  </modules>\n  <dependencies>\n    <dependency>\n      <groupId>com.google.guava</groupId>\n      <artifactId>guava</artifactId>\n      <version>${guava.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.hamcrest</groupId>\n      <artifactId>hamcrest-all</artifactId>\n      <version>${hamcrest.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${junit.version}</version>\n      <scope>test</scope>\n    </dependency>\n  
</dependencies>\n</project>\n"
  },
  {
    "path": "dev-doc/multishim/MultiShimHBase.sd",
    "content": "kettlePlugin:KP \"Kettle Plugin\"\nnamedClusterServiceLocator:NCSL \"Named Cluster Service\"\nclusterInitializer:CI \"Cluster Initializer\"\nhadoopConfigurationBootstrap:HCB \"Hadoop Configuration Bootstrap\"\nhbaseServiceFactory:HSF \"HBase Service Factory\"\n\nkettlePlugin:namedClusterServiceLocator.getService(cdh55unsec, HBaseService.class)\nnamedClusterServiceLocator:clusterInitializer.initialize(cdh55unsec)\nclusterInitializer:hadoopConfigurationBootstrap.getProvider(cdh55)\nnamedClusterServiceLocator:hbaseServiceFactory.canHandle(cdh55unsec)\nnamedClusterServiceLocator:hbaseServiceFactory.create(cdh55unsec)\n"
  },
  {
    "path": "dev-doc/multishim/README.md",
    "content": "This outlines the anticipated changes needed to support multiple shims.\n\nStandard services (accessed via steps/job entries)\n--------------------------------------------------\n\nCurrently, the service location has unused hooks for the parts we think will be needed.\n\nThe current flow is as follows:\n![Single shim hbase sequence diagram](SingleShimHBase.png)\n\nThe adjusted flow is mostly the same:\n![Multi shim hbase sequence diagram](MultiShimHBase.png)\n\nFirst we need to add an attribute to the named cluster specifying the shim.\n\nThe getProvider() call to HadoopConfigurationBootstrap is what initializes a shim.  We need to change it to account for the shim selected in the named cluster so that it can initialize the right shim (or no-op if the shim has already been initialized).\n\nThe other major change isn't visible in the sequence diagram.  The HBaseServiceFactory's canHandle method needs to account for the selected shim in the named cluster (falling back to the default or \"active\" configuration if it is null) so\n\n```java\n  @Override public boolean canHandle( NamedCluster namedCluster ) {\n    return isActiveConfiguration;\n  }\n```\n\nbecomes\n\n```java\n  @Override public boolean canHandle( NamedCluster namedCluster ) {\n    String shim = namedCluster.getShim();\n    if ( shim == null ) {\n      return isActiveConfiguration;\n    }\n    return shim.equals( hadoopConfiguration.getIdentifier() );\n  }\n```\n\nLimited context services (vfs, jdbc)\n------------------------------------\n\nWe need a reference to the active MetaStore and the named cluster name in order to load a NamedCluster.\n\nAs far as locating the active metastore, [metastore locator plugin](https://github.com/pentaho/pentaho-kettle/tree/master/plugins/metastore-locator) will allow us to determine the right one.  
We will probably need to flesh out several more scenarios and implementations, though.\n\nWe will somehow need to embed the named cluster name in the URL, both to determine which shim to use and to associate other named cluster settings with VFS and JDBC connections once we add them.\n\nTools\n-----\n[sdedit4.0.1](https://sourceforge.net/projects/sdedit/files/sdedit/4.0/) was used to transform the .sd files into .pngs:\n\n```\njava -jar ~/Downloads/sdedit-4.01.jar -o SingleShimHBase.png -t png SingleShimHBase.sd\njava -jar ~/Downloads/sdedit-4.01.jar -o MultiShimHBase.png -t png MultiShimHBase.sd\n```\n"
  },
  {
    "path": "dev-doc/multishim/SingleShimHBase.sd",
    "content": "kettlePlugin:KP \"Kettle Plugin\"\nnamedClusterServiceLocator:NCSL \"Named Cluster Service\"\nclusterInitializer:CI \"Cluster Initializer\"\nhadoopConfigurationBootstrap:HCB \"Hadoop Configuration Bootstrap\"\nhbaseServiceFactory:HSF \"HBase Service Factory\"\n\nkettlePlugin:namedClusterServiceLocator.getService(cdh55unsec, HBaseService.class)\nnamedClusterServiceLocator:clusterInitializer.initialize(cdh55unsec)\nclusterInitializer:hadoopConfigurationBootstrap.getProvider()\nnamedClusterServiceLocator:hbaseServiceFactory.canHandle(cdh55unsec)\nnamedClusterServiceLocator:hbaseServiceFactory.create(cdh55unsec)"
  },
  {
    "path": "dev-doc/shim-bridge-classloading.graphml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>\n<graphml xmlns=\"http://graphml.graphdrawing.org/xmlns\" xmlns:java=\"http://www.yworks.com/xml/yfiles-common/1.0/java\" xmlns:sys=\"http://www.yworks.com/xml/yfiles-common/markup/primitives/2.0\" xmlns:x=\"http://www.yworks.com/xml/yfiles-common/markup/2.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:y=\"http://www.yworks.com/xml/graphml\" xmlns:yed=\"http://www.yworks.com/xml/yed/3\" xsi:schemaLocation=\"http://graphml.graphdrawing.org/xmlns http://www.yworks.com/xml/schema/graphml/1.1/ygraphml.xsd\">\n  <!--Created by yEd 3.14.4-->\n  <key attr.name=\"Description\" attr.type=\"string\" for=\"graph\" id=\"d0\"/>\n  <key for=\"port\" id=\"d1\" yfiles.type=\"portgraphics\"/>\n  <key for=\"port\" id=\"d2\" yfiles.type=\"portgeometry\"/>\n  <key for=\"port\" id=\"d3\" yfiles.type=\"portuserdata\"/>\n  <key attr.name=\"url\" attr.type=\"string\" for=\"node\" id=\"d4\"/>\n  <key attr.name=\"description\" attr.type=\"string\" for=\"node\" id=\"d5\"/>\n  <key for=\"node\" id=\"d6\" yfiles.type=\"nodegraphics\"/>\n  <key for=\"graphml\" id=\"d7\" yfiles.type=\"resources\"/>\n  <key attr.name=\"url\" attr.type=\"string\" for=\"edge\" id=\"d8\"/>\n  <key attr.name=\"description\" attr.type=\"string\" for=\"edge\" id=\"d9\"/>\n  <key for=\"edge\" id=\"d10\" yfiles.type=\"edgegraphics\"/>\n  <graph edgedefault=\"directed\" id=\"G\">\n    <data key=\"d0\"/>\n    <node id=\"n0\">\n      <data key=\"d6\">\n        <y:GenericNode configuration=\"com.yworks.flowchart.terminator\">\n          <y:Geometry height=\"39.0\" width=\"189.0\" x=\"10.0\" y=\"0.0\"/>\n          <y:Fill color=\"#E8EEF7\" color2=\"#B7C9E3\" transparent=\"false\"/>\n          <y:BorderStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:NodeLabel alignment=\"center\" autoSizePolicy=\"content\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" 
hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" textColor=\"#000000\" visible=\"true\" width=\"68.3828125\" x=\"60.30859375\" y=\"10.515625\">Load Class<y:LabelModel>\n              <y:SmartNodeLabelModel distance=\"4.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartNodeLabelModelParameter labelRatioX=\"0.0\" labelRatioY=\"0.0\" nodeRatioX=\"0.0\" nodeRatioY=\"0.0\" offsetX=\"0.0\" offsetY=\"0.0\" upX=\"0.0\" upY=\"-1.0\"/>\n            </y:ModelParameter>\n          </y:NodeLabel>\n        </y:GenericNode>\n      </data>\n    </node>\n    <node id=\"n1\">\n      <data key=\"d6\">\n        <y:GenericNode configuration=\"com.yworks.flowchart.terminator\">\n          <y:Geometry height=\"40.0\" width=\"228.0\" x=\"276.9690476190476\" y=\"89.0\"/>\n          <y:Fill color=\"#E8EEF7\" color2=\"#B7C9E3\" transparent=\"false\"/>\n          <y:BorderStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:NodeLabel alignment=\"center\" autoSizePolicy=\"content\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"31.9375\" modelName=\"custom\" textColor=\"#000000\" visible=\"true\" width=\"117.619140625\" x=\"55.1904296875\" y=\"4.03125\">Attempt to load\nfrom bundle wiring<y:LabelModel>\n              <y:SmartNodeLabelModel distance=\"4.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartNodeLabelModelParameter labelRatioX=\"0.0\" labelRatioY=\"0.0\" nodeRatioX=\"0.0\" nodeRatioY=\"0.0\" offsetX=\"0.0\" offsetY=\"0.0\" upX=\"0.0\" upY=\"-1.0\"/>\n            </y:ModelParameter>\n          </y:NodeLabel>\n        </y:GenericNode>\n      </data>\n    </node>\n    <node id=\"n2\">\n      <data key=\"d6\">\n        <y:GenericNode configuration=\"com.yworks.flowchart.decision\">\n          <y:Geometry height=\"80.0\" width=\"189.0\" x=\"10.0\" y=\"69.0\"/>\n          <y:Fill color=\"#E8EEF7\" 
color2=\"#B7C9E3\" transparent=\"false\"/>\n          <y:BorderStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:NodeLabel alignment=\"center\" autoSizePolicy=\"content\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" textColor=\"#000000\" visible=\"true\" width=\"135.291015625\" x=\"26.8544921875\" y=\"31.015625\">Class already loaded?<y:LabelModel>\n              <y:SmartNodeLabelModel distance=\"4.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartNodeLabelModelParameter labelRatioX=\"0.0\" labelRatioY=\"0.0\" nodeRatioX=\"0.0\" nodeRatioY=\"0.0\" offsetX=\"0.0\" offsetY=\"0.0\" upX=\"0.0\" upY=\"-1.0\"/>\n            </y:ModelParameter>\n          </y:NodeLabel>\n        </y:GenericNode>\n      </data>\n    </node>\n    <node id=\"n3\">\n      <data key=\"d6\">\n        <y:GenericNode configuration=\"com.yworks.flowchart.decision\">\n          <y:Geometry height=\"60.0\" width=\"120.0\" x=\"330.9690476190476\" y=\"179.0\"/>\n          <y:Fill color=\"#E8EEF7\" color2=\"#B7C9E3\" transparent=\"false\"/>\n          <y:BorderStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:NodeLabel alignment=\"center\" autoSizePolicy=\"content\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" textColor=\"#000000\" visible=\"true\" width=\"58.673828125\" x=\"30.6630859375\" y=\"21.015625\">Success?<y:LabelModel>\n              <y:SmartNodeLabelModel distance=\"4.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartNodeLabelModelParameter labelRatioX=\"0.0\" labelRatioY=\"0.0\" nodeRatioX=\"0.0\" nodeRatioY=\"0.0\" offsetX=\"0.0\" offsetY=\"0.0\" upX=\"0.0\" upY=\"-1.0\"/>\n            </y:ModelParameter>\n          </y:NodeLabel>\n        </y:GenericNode>\n      
</data>\n    </node>\n    <node id=\"n4\">\n      <data key=\"d6\">\n        <y:GenericNode configuration=\"com.yworks.flowchart.terminator\">\n          <y:Geometry height=\"40.0\" width=\"80.0\" x=\"350.9690476190476\" y=\"766.96875\"/>\n          <y:Fill color=\"#E8EEF7\" color2=\"#B7C9E3\" transparent=\"false\"/>\n          <y:BorderStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:NodeLabel alignment=\"center\" autoSizePolicy=\"content\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" textColor=\"#000000\" visible=\"true\" width=\"40.5859375\" x=\"19.70703125\" y=\"11.015625\">Profit!<y:LabelModel>\n              <y:SmartNodeLabelModel distance=\"4.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartNodeLabelModelParameter labelRatioX=\"0.0\" labelRatioY=\"0.0\" nodeRatioX=\"0.0\" nodeRatioY=\"0.0\" offsetX=\"0.0\" offsetY=\"0.0\" upX=\"0.0\" upY=\"-1.0\"/>\n            </y:ModelParameter>\n          </y:NodeLabel>\n        </y:GenericNode>\n      </data>\n    </node>\n    <node id=\"n5\">\n      <data key=\"d6\">\n        <y:GenericNode configuration=\"com.yworks.flowchart.terminator\">\n          <y:Geometry height=\"40.0\" width=\"189.0\" x=\"296.4690476190476\" y=\"411.96875\"/>\n          <y:Fill color=\"#E8EEF7\" color2=\"#B7C9E3\" transparent=\"false\"/>\n          <y:BorderStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:NodeLabel alignment=\"center\" autoSizePolicy=\"content\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"31.9375\" modelName=\"custom\" textColor=\"#000000\" visible=\"true\" width=\"136.451171875\" x=\"26.2744140625\" y=\"4.03125\">Attempt to load\nfrom shim classloader<y:LabelModel>\n              <y:SmartNodeLabelModel distance=\"4.0\"/>\n            </y:LabelModel>\n            
<y:ModelParameter>\n              <y:SmartNodeLabelModelParameter labelRatioX=\"0.0\" labelRatioY=\"0.0\" nodeRatioX=\"0.0\" nodeRatioY=\"0.0\" offsetX=\"0.0\" offsetY=\"0.0\" upX=\"0.0\" upY=\"-1.0\"/>\n            </y:ModelParameter>\n          </y:NodeLabel>\n        </y:GenericNode>\n      </data>\n    </node>\n    <node id=\"n6\">\n      <data key=\"d6\">\n        <y:GenericNode configuration=\"com.yworks.flowchart.decision\">\n          <y:Geometry height=\"60.0\" width=\"120.0\" x=\"330.9690476190476\" y=\"481.96875\"/>\n          <y:Fill color=\"#E8EEF7\" color2=\"#B7C9E3\" transparent=\"false\"/>\n          <y:BorderStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:NodeLabel alignment=\"center\" autoSizePolicy=\"content\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" textColor=\"#000000\" visible=\"true\" width=\"58.673828125\" x=\"30.6630859375\" y=\"21.015625\">Success?<y:LabelModel>\n              <y:SmartNodeLabelModel distance=\"4.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartNodeLabelModelParameter labelRatioX=\"0.0\" labelRatioY=\"0.0\" nodeRatioX=\"0.0\" nodeRatioY=\"0.0\" offsetX=\"0.0\" offsetY=\"0.0\" upX=\"0.0\" upY=\"-1.0\"/>\n            </y:ModelParameter>\n          </y:NodeLabel>\n        </y:GenericNode>\n      </data>\n    </node>\n    <node id=\"n7\">\n      <data key=\"d6\">\n        <y:GenericNode configuration=\"com.yworks.flowchart.terminator\">\n          <y:Geometry height=\"40.0\" width=\"120.0\" x=\"979.5999999999999\" y=\"671.96875\"/>\n          <y:Fill color=\"#E8EEF7\" color2=\"#B7C9E3\" transparent=\"false\"/>\n          <y:BorderStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:NodeLabel alignment=\"center\" autoSizePolicy=\"content\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" 
hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" textColor=\"#000000\" visible=\"true\" width=\"72.162109375\" x=\"23.9189453125\" y=\"11.015625\">Don't profit<y:LabelModel>\n              <y:SmartNodeLabelModel distance=\"4.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartNodeLabelModelParameter labelRatioX=\"0.0\" labelRatioY=\"0.0\" nodeRatioX=\"0.0\" nodeRatioY=\"0.0\" offsetX=\"0.0\" offsetY=\"0.0\" upX=\"0.0\" upY=\"-1.0\"/>\n            </y:ModelParameter>\n          </y:NodeLabel>\n        </y:GenericNode>\n      </data>\n    </node>\n    <node id=\"n8\">\n      <data key=\"d6\">\n        <y:GenericNode configuration=\"com.yworks.flowchart.decision\">\n          <y:Geometry height=\"80.0\" width=\"189.0\" x=\"296.4690476190476\" y=\"286.96875\"/>\n          <y:Fill color=\"#E8EEF7\" color2=\"#B7C9E3\" transparent=\"false\"/>\n          <y:BorderStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:NodeLabel alignment=\"center\" autoSizePolicy=\"content\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"31.9375\" modelName=\"custom\" textColor=\"#000000\" visible=\"true\" width=\"126.68359375\" x=\"31.158203125\" y=\"24.03125\">Class was from\nSystem classloader?<y:LabelModel>\n              <y:SmartNodeLabelModel distance=\"4.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartNodeLabelModelParameter labelRatioX=\"0.0\" labelRatioY=\"0.0\" nodeRatioX=\"0.0\" nodeRatioY=\"0.0\" offsetX=\"0.0\" offsetY=\"0.0\" upX=\"0.0\" upY=\"-1.0\"/>\n            </y:ModelParameter>\n          </y:NodeLabel>\n        </y:GenericNode>\n      </data>\n    </node>\n    <node id=\"n9\">\n      <data key=\"d6\">\n        <y:GenericNode configuration=\"com.yworks.flowchart.terminator\">\n          <y:Geometry height=\"53.531964809384164\" width=\"174.72375366568917\" x=\"528.9381231671554\" 
y=\"485.20276759530793\"/>\n          <y:Fill color=\"#E8EEF7\" color2=\"#B7C9E3\" transparent=\"false\"/>\n          <y:BorderStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:NodeLabel alignment=\"center\" autoSizePolicy=\"content\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"45.90625\" modelName=\"custom\" textColor=\"#000000\" visible=\"true\" width=\"126.34375\" x=\"24.190001832844587\" y=\"3.8128574046920676\">Attempt to load\nfrom big data plugin\nclassloader<y:LabelModel>\n              <y:SmartNodeLabelModel distance=\"4.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartNodeLabelModelParameter labelRatioX=\"0.0\" labelRatioY=\"0.0\" nodeRatioX=\"0.0\" nodeRatioY=\"-2.7755575615628914E-16\" offsetX=\"0.0\" offsetY=\"0.0\" upX=\"0.0\" upY=\"-1.0\"/>\n            </y:ModelParameter>\n          </y:NodeLabel>\n        </y:GenericNode>\n      </data>\n    </node>\n    <node id=\"n10\">\n      <data key=\"d6\">\n        <y:GenericNode configuration=\"com.yworks.flowchart.decision\">\n          <y:Geometry height=\"60.0\" width=\"120.0\" x=\"556.3\" y=\"571.96875\"/>\n          <y:Fill color=\"#E8EEF7\" color2=\"#B7C9E3\" transparent=\"false\"/>\n          <y:BorderStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:NodeLabel alignment=\"center\" autoSizePolicy=\"content\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" textColor=\"#000000\" visible=\"true\" width=\"58.673828125\" x=\"30.6630859375\" y=\"21.015625\">Success?<y:LabelModel>\n              <y:SmartNodeLabelModel distance=\"4.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartNodeLabelModelParameter labelRatioX=\"0.0\" labelRatioY=\"0.0\" nodeRatioX=\"0.0\" nodeRatioY=\"0.0\" offsetX=\"0.0\" offsetY=\"0.0\" 
upX=\"0.0\" upY=\"-1.0\"/>\n            </y:ModelParameter>\n          </y:NodeLabel>\n        </y:GenericNode>\n      </data>\n    </node>\n    <node id=\"n11\">\n      <data key=\"d6\">\n        <y:GenericNode configuration=\"com.yworks.flowchart.terminator\">\n          <y:Geometry height=\"40.0\" width=\"174.72375366568917\" x=\"754.2690755481078\" y=\"581.96875\"/>\n          <y:Fill color=\"#E8EEF7\" color2=\"#B7C9E3\" transparent=\"false\"/>\n          <y:BorderStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:NodeLabel alignment=\"center\" autoSizePolicy=\"content\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"31.9375\" modelName=\"custom\" textColor=\"#000000\" visible=\"true\" width=\"152.318359375\" x=\"11.202697145344587\" y=\"4.03125\">Attempt to load\nfrom System classloader<y:LabelModel>\n              <y:SmartNodeLabelModel distance=\"4.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartNodeLabelModelParameter labelRatioX=\"0.0\" labelRatioY=\"0.0\" nodeRatioX=\"0.0\" nodeRatioY=\"0.0\" offsetX=\"0.0\" offsetY=\"0.0\" upX=\"0.0\" upY=\"-1.0\"/>\n            </y:ModelParameter>\n          </y:NodeLabel>\n        </y:GenericNode>\n      </data>\n    </node>\n    <node id=\"n12\">\n      <data key=\"d6\">\n        <y:GenericNode configuration=\"com.yworks.flowchart.decision\">\n          <y:Geometry height=\"60.0\" width=\"120.0\" x=\"781.6309523809524\" y=\"661.96875\"/>\n          <y:Fill color=\"#E8EEF7\" color2=\"#B7C9E3\" transparent=\"false\"/>\n          <y:BorderStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:NodeLabel alignment=\"center\" autoSizePolicy=\"content\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" textColor=\"#000000\" visible=\"true\" width=\"58.673828125\" x=\"30.6630859375\" 
y=\"21.015625\">Success?<y:LabelModel>\n              <y:SmartNodeLabelModel distance=\"4.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartNodeLabelModelParameter labelRatioX=\"0.0\" labelRatioY=\"0.0\" nodeRatioX=\"0.0\" nodeRatioY=\"0.0\" offsetX=\"0.0\" offsetY=\"0.0\" upX=\"0.0\" upY=\"-1.0\"/>\n            </y:ModelParameter>\n          </y:NodeLabel>\n        </y:GenericNode>\n      </data>\n    </node>\n    <edge id=\"e0\" source=\"n0\" target=\"n2\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"0.0\" sy=\"19.5\" tx=\"0.0\" ty=\"-40.0\"/>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e1\" source=\"n1\" target=\"n3\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"0.0\" sy=\"20.0\" tx=\"0.0\" ty=\"-30.0\"/>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e2\" source=\"n3\" target=\"n5\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"60.0\" sy=\"0.0\" tx=\"0.0\" ty=\"-20.0\">\n            <y:Point x=\"616.3\" y=\"209.0\"/>\n            <y:Point x=\"616.3\" y=\"381.96875\"/>\n            <y:Point x=\"390.9690476190476\" y=\"381.96875\"/>\n          </y:Path>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:EdgeLabel alignment=\"center\" configuration=\"AutoFlippingLabel\" distance=\"2.0\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" 
preferredPlacement=\"anywhere\" ratio=\"0.5\" textColor=\"#000000\" visible=\"true\" width=\"20.318359375\" x=\"10.084648204985115\" y=\"-19.968749999999943\">No<y:LabelModel>\n              <y:SmartEdgeLabelModel autoRotationEnabled=\"false\" defaultAngle=\"0.0\" defaultDistance=\"10.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartEdgeLabelModelParameter angle=\"6.283185307179586\" distance=\"1.9999999999999716\" distanceToCenter=\"false\" position=\"left\" ratio=\"0.03412758541821126\" segment=\"0\"/>\n            </y:ModelParameter>\n            <y:PreferredPlacementDescriptor angle=\"0.0\" angleOffsetOnRightSide=\"0\" angleReference=\"absolute\" angleRotationOnRightSide=\"co\" distance=\"-1.0\" frozen=\"true\" placement=\"anywhere\" side=\"anywhere\" sideReference=\"relative_to_edge_flow\"/>\n          </y:EdgeLabel>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e3\" source=\"n6\" target=\"n4\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"0.0\" sy=\"30.0\" tx=\"0.0\" ty=\"-20.0\"/>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:EdgeLabel alignment=\"center\" configuration=\"AutoFlippingLabel\" distance=\"2.0\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" preferredPlacement=\"anywhere\" ratio=\"0.5\" textColor=\"#000000\" visible=\"true\" width=\"24.96484375\" x=\"1.9999924432663647\" y=\"10.125\">Yes<y:LabelModel>\n              <y:SmartEdgeLabelModel autoRotationEnabled=\"false\" defaultAngle=\"0.0\" defaultDistance=\"10.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartEdgeLabelModelParameter angle=\"6.283185307179586\" distance=\"2.0\" distanceToCenter=\"false\" 
position=\"left\" ratio=\"0.026011102299762095\" segment=\"-1\"/>\n            </y:ModelParameter>\n            <y:PreferredPlacementDescriptor angle=\"0.0\" angleOffsetOnRightSide=\"0\" angleReference=\"absolute\" angleRotationOnRightSide=\"co\" distance=\"-1.0\" frozen=\"true\" placement=\"anywhere\" side=\"anywhere\" sideReference=\"relative_to_edge_flow\"/>\n          </y:EdgeLabel>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e4\" source=\"n2\" target=\"n4\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"0.0\" sy=\"40.0\" tx=\"0.0\" ty=\"-20.0\">\n            <y:Point x=\"104.5\" y=\"556.96875\"/>\n            <y:Point x=\"390.9690476190476\" y=\"556.96875\"/>\n          </y:Path>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:EdgeLabel alignment=\"center\" configuration=\"AutoFlippingLabel\" distance=\"2.0\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" preferredPlacement=\"anywhere\" ratio=\"0.5\" textColor=\"#000000\" visible=\"true\" width=\"24.96484375\" x=\"-26.96484375\" y=\"10.125\">Yes<y:LabelModel>\n              <y:SmartEdgeLabelModel autoRotationEnabled=\"false\" defaultAngle=\"0.0\" defaultDistance=\"10.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartEdgeLabelModelParameter angle=\"6.283185307179586\" distance=\"2.0\" distanceToCenter=\"false\" position=\"right\" ratio=\"0.013008130081300813\" segment=\"0\"/>\n            </y:ModelParameter>\n            <y:PreferredPlacementDescriptor angle=\"0.0\" angleOffsetOnRightSide=\"0\" angleReference=\"absolute\" angleRotationOnRightSide=\"co\" distance=\"-1.0\" frozen=\"true\" placement=\"anywhere\" side=\"anywhere\" sideReference=\"relative_to_edge_flow\"/>\n      
    </y:EdgeLabel>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e5\" source=\"n3\" target=\"n8\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"0.0\" sy=\"30.0\" tx=\"0.0\" ty=\"-40.0\"/>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:EdgeLabel alignment=\"center\" configuration=\"AutoFlippingLabel\" distance=\"2.0\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" preferredPlacement=\"anywhere\" ratio=\"0.5\" textColor=\"#000000\" visible=\"true\" width=\"24.96484375\" x=\"-26.964851306733635\" y=\"10.125\">Yes<y:LabelModel>\n              <y:SmartEdgeLabelModel autoRotationEnabled=\"false\" defaultAngle=\"0.0\" defaultDistance=\"10.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartEdgeLabelModelParameter angle=\"6.283185307179586\" distance=\"2.0\" distanceToCenter=\"false\" position=\"right\" ratio=\"0.25625\" segment=\"-1\"/>\n            </y:ModelParameter>\n            <y:PreferredPlacementDescriptor angle=\"0.0\" angleOffsetOnRightSide=\"0\" angleReference=\"absolute\" angleRotationOnRightSide=\"co\" distance=\"-1.0\" frozen=\"true\" placement=\"anywhere\" side=\"anywhere\" sideReference=\"relative_to_edge_flow\"/>\n          </y:EdgeLabel>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e6\" source=\"n8\" target=\"n4\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"-94.5\" sy=\"0.0\" tx=\"0.0\" ty=\"-20.0\">\n            <y:Point x=\"104.5\" y=\"326.96875\"/>\n            <y:Point x=\"104.5\" y=\"556.96875\"/>\n            <y:Point x=\"390.9690476190476\" y=\"556.96875\"/>\n          </y:Path>\n          <y:LineStyle 
color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:EdgeLabel alignment=\"center\" configuration=\"AutoFlippingLabel\" distance=\"2.0\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" preferredPlacement=\"anywhere\" ratio=\"0.5\" textColor=\"#000000\" visible=\"true\" width=\"20.318359375\" x=\"-30.443366931733635\" y=\"2.0\">No<y:LabelModel>\n              <y:SmartEdgeLabelModel autoRotationEnabled=\"false\" defaultAngle=\"0.0\" defaultDistance=\"10.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartEdgeLabelModelParameter angle=\"6.283185307179586\" distance=\"2.0\" distanceToCenter=\"false\" position=\"left\" ratio=\"0.02898593873722114\" segment=\"0\"/>\n            </y:ModelParameter>\n            <y:PreferredPlacementDescriptor angle=\"0.0\" angleOffsetOnRightSide=\"0\" angleReference=\"absolute\" angleRotationOnRightSide=\"co\" distance=\"-1.0\" frozen=\"true\" placement=\"anywhere\" side=\"anywhere\" sideReference=\"relative_to_edge_flow\"/>\n          </y:EdgeLabel>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e7\" source=\"n8\" target=\"n5\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"0.0\" sy=\"40.0\" tx=\"0.0\" ty=\"-20.0\"/>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:EdgeLabel alignment=\"center\" configuration=\"AutoFlippingLabel\" distance=\"2.0\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" preferredPlacement=\"anywhere\" ratio=\"0.5\" textColor=\"#000000\" visible=\"true\" width=\"24.96484375\" x=\"-26.964851306733692\" 
y=\"10.125\">Yes<y:LabelModel>\n              <y:SmartEdgeLabelModel autoRotationEnabled=\"false\" defaultAngle=\"0.0\" defaultDistance=\"10.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartEdgeLabelModelParameter angle=\"6.283185307179586\" distance=\"2.0\" distanceToCenter=\"false\" position=\"right\" ratio=\"0.30091743119266057\" segment=\"-1\"/>\n            </y:ModelParameter>\n            <y:PreferredPlacementDescriptor angle=\"0.0\" angleOffsetOnRightSide=\"0\" angleReference=\"absolute\" angleRotationOnRightSide=\"co\" distance=\"-1.0\" frozen=\"true\" placement=\"anywhere\" side=\"anywhere\" sideReference=\"relative_to_edge_flow\"/>\n          </y:EdgeLabel>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e8\" source=\"n5\" target=\"n6\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"0.0\" sy=\"20.0\" tx=\"0.0\" ty=\"-30.0\"/>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e9\" source=\"n6\" target=\"n9\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"60.0\" sy=\"0.0\" tx=\"-87.36187683284459\" ty=\"0.0\"/>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:EdgeLabel alignment=\"center\" configuration=\"AutoFlippingLabel\" distance=\"2.0\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" preferredPlacement=\"anywhere\" ratio=\"0.5\" textColor=\"#000000\" visible=\"true\" width=\"20.318359375\" x=\"10.08693702334449\" y=\"-19.968749999999886\">No<y:LabelModel>\n              <y:SmartEdgeLabelModel 
autoRotationEnabled=\"false\" defaultAngle=\"0.0\" defaultDistance=\"10.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartEdgeLabelModelParameter angle=\"6.283185307179586\" distance=\"1.9999999999999432\" distanceToCenter=\"false\" position=\"left\" ratio=\"0.10755347267775914\" segment=\"-1\"/>\n            </y:ModelParameter>\n            <y:PreferredPlacementDescriptor angle=\"0.0\" angleOffsetOnRightSide=\"0\" angleReference=\"absolute\" angleRotationOnRightSide=\"co\" distance=\"-1.0\" frozen=\"true\" placement=\"anywhere\" side=\"anywhere\" sideReference=\"relative_to_edge_flow\"/>\n          </y:EdgeLabel>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e10\" source=\"n9\" target=\"n10\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"0.0\" sy=\"26.765982404692068\" tx=\"0.0\" ty=\"-30.0\"/>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e11\" source=\"n10\" target=\"n4\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"0.0\" sy=\"30.0\" tx=\"0.0\" ty=\"-20.0\">\n            <y:Point x=\"616.3\" y=\"646.96875\"/>\n            <y:Point x=\"390.9690476190476\" y=\"646.96875\"/>\n          </y:Path>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:EdgeLabel alignment=\"center\" configuration=\"AutoFlippingLabel\" distance=\"2.0\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" preferredPlacement=\"anywhere\" ratio=\"0.5\" textColor=\"#000000\" visible=\"true\" width=\"24.96484375\" x=\"2.0000122070312045\" 
y=\"3.078125\">Yes<y:LabelModel>\n              <y:SmartEdgeLabelModel autoRotationEnabled=\"false\" defaultAngle=\"0.0\" defaultDistance=\"10.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartEdgeLabelModelParameter angle=\"6.283185307179586\" distance=\"2.0\" distanceToCenter=\"false\" position=\"left\" ratio=\"0.6083333333333333\" segment=\"0\"/>\n            </y:ModelParameter>\n            <y:PreferredPlacementDescriptor angle=\"0.0\" angleOffsetOnRightSide=\"0\" angleReference=\"absolute\" angleRotationOnRightSide=\"co\" distance=\"-1.0\" frozen=\"true\" placement=\"anywhere\" side=\"anywhere\" sideReference=\"relative_to_edge_flow\"/>\n          </y:EdgeLabel>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e12\" source=\"n10\" target=\"n11\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"60.0\" sy=\"0.0\" tx=\"-87.36187683284459\" ty=\"0.0\"/>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:EdgeLabel alignment=\"center\" configuration=\"AutoFlippingLabel\" distance=\"2.0\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" preferredPlacement=\"anywhere\" ratio=\"0.5\" textColor=\"#000000\" visible=\"true\" width=\"20.318359375\" x=\"10.125012207031318\" y=\"-19.96875\">No<y:LabelModel>\n              <y:SmartEdgeLabelModel autoRotationEnabled=\"false\" defaultAngle=\"0.0\" defaultDistance=\"10.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartEdgeLabelModelParameter angle=\"6.283185307179586\" distance=\"2.0\" distanceToCenter=\"false\" position=\"left\" ratio=\"0.10755347267775887\" segment=\"-1\"/>\n            </y:ModelParameter>\n            <y:PreferredPlacementDescriptor 
angle=\"0.0\" angleOffsetOnRightSide=\"0\" angleReference=\"absolute\" angleRotationOnRightSide=\"co\" distance=\"-1.0\" frozen=\"true\" placement=\"anywhere\" side=\"anywhere\" sideReference=\"relative_to_edge_flow\"/>\n          </y:EdgeLabel>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e13\" source=\"n11\" target=\"n12\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"0.0\" sy=\"20.0\" tx=\"0.0\" ty=\"-30.0\"/>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e14\" source=\"n12\" target=\"n4\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"0.0\" sy=\"30.0\" tx=\"0.0\" ty=\"-20.0\">\n            <y:Point x=\"841.6309523809524\" y=\"736.96875\"/>\n            <y:Point x=\"390.9690476190476\" y=\"736.96875\"/>\n          </y:Path>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:EdgeLabel alignment=\"center\" configuration=\"AutoFlippingLabel\" distance=\"2.0\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" preferredPlacement=\"anywhere\" ratio=\"0.5\" textColor=\"#000000\" visible=\"true\" width=\"24.96484375\" x=\"1.9999709356397943\" y=\"3.078125\">Yes<y:LabelModel>\n              <y:SmartEdgeLabelModel autoRotationEnabled=\"false\" defaultAngle=\"0.0\" defaultDistance=\"10.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartEdgeLabelModelParameter angle=\"6.283185307179586\" distance=\"2.0\" distanceToCenter=\"false\" position=\"left\" ratio=\"0.6083333333333333\" segment=\"0\"/>\n            
</y:ModelParameter>\n            <y:PreferredPlacementDescriptor angle=\"0.0\" angleOffsetOnRightSide=\"0\" angleReference=\"absolute\" angleRotationOnRightSide=\"co\" distance=\"-1.0\" frozen=\"true\" placement=\"anywhere\" side=\"anywhere\" sideReference=\"relative_to_edge_flow\"/>\n          </y:EdgeLabel>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e15\" source=\"n12\" target=\"n7\">\n      <data key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"60.0\" sy=\"0.0\" tx=\"-60.0\" ty=\"0.0\"/>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:EdgeLabel alignment=\"center\" configuration=\"AutoFlippingLabel\" distance=\"2.0\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" preferredPlacement=\"anywhere\" ratio=\"0.5\" textColor=\"#000000\" visible=\"true\" width=\"20.318359375\" x=\"10.086946033296158\" y=\"-19.96875\">No<y:LabelModel>\n              <y:SmartEdgeLabelModel autoRotationEnabled=\"false\" defaultAngle=\"0.0\" defaultDistance=\"10.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartEdgeLabelModelParameter angle=\"6.283185307179586\" distance=\"1.9999999999999716\" distanceToCenter=\"false\" position=\"left\" ratio=\"0.10755353571708824\" segment=\"-1\"/>\n            </y:ModelParameter>\n            <y:PreferredPlacementDescriptor angle=\"0.0\" angleOffsetOnRightSide=\"0\" angleReference=\"absolute\" angleRotationOnRightSide=\"co\" distance=\"-1.0\" frozen=\"true\" placement=\"anywhere\" side=\"anywhere\" sideReference=\"relative_to_edge_flow\"/>\n          </y:EdgeLabel>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n    <edge id=\"e16\" source=\"n2\" target=\"n1\">\n      <data 
key=\"d10\">\n        <y:PolyLineEdge>\n          <y:Path sx=\"94.5\" sy=\"0.0\" tx=\"-114.0\" ty=\"0.0\"/>\n          <y:LineStyle color=\"#000000\" type=\"line\" width=\"1.0\"/>\n          <y:Arrows source=\"none\" target=\"standard\"/>\n          <y:EdgeLabel alignment=\"center\" configuration=\"AutoFlippingLabel\" distance=\"2.0\" fontFamily=\"Dialog\" fontSize=\"12\" fontStyle=\"plain\" hasBackgroundColor=\"false\" hasLineColor=\"false\" height=\"17.96875\" modelName=\"custom\" preferredPlacement=\"anywhere\" ratio=\"0.5\" textColor=\"#000000\" visible=\"true\" width=\"20.318359375\" x=\"10.124999999999972\" y=\"-19.968750000000014\">No<y:LabelModel>\n              <y:SmartEdgeLabelModel autoRotationEnabled=\"false\" defaultAngle=\"0.0\" defaultDistance=\"10.0\"/>\n            </y:LabelModel>\n            <y:ModelParameter>\n              <y:SmartEdgeLabelModelParameter angle=\"6.283185307179586\" distance=\"1.9999999999999964\" distanceToCenter=\"false\" position=\"left\" ratio=\"0.10755353571708799\" segment=\"-1\"/>\n            </y:ModelParameter>\n            <y:PreferredPlacementDescriptor angle=\"0.0\" angleOffsetOnRightSide=\"0\" angleReference=\"absolute\" angleRotationOnRightSide=\"co\" distance=\"-1.0\" frozen=\"true\" placement=\"anywhere\" side=\"anywhere\" sideReference=\"relative_to_edge_flow\"/>\n          </y:EdgeLabel>\n          <y:BendStyle smoothed=\"false\"/>\n        </y:PolyLineEdge>\n      </data>\n    </edge>\n  </graph>\n  <data key=\"d7\">\n    <y:Resources/>\n  </data>\n</graphml>\n"
  },
  {
    "path": "dev-doc/shim-bridging-classloading.md",
    "content": "As a step towards a more flexible architecture less constrained by the shims, we have been refactoring the steps and job entries to use higher level services.  They follow the following pattern:\n\napi\n---\nA higher level api that exposes big data capabilities as services with locators that take a NamedCluster as their argument.\n\nThese should only rely on kettle-core, the metastore, and other api bundles.\n\nThey will typically be made up of a [a service interface](https://github.com/pentaho/big-data-plugin/blob/master/api/pig/src/main/java/org/pentaho/bigdata/api/pig/PigService.java).\n\nThe Service (and any supporting classes for arguments and/or return types) is responsible for performing operations against the cluster.\n\nimpl/shim \n---------\nAn initial implementation of the api that delegates to the shim.\n\nThese are OSGi bundles that bridge over to the legacy plugin as well as the shim.  A [factory loader](https://github.com/pentaho/big-data-plugin/blob/master/impl/shim/pig/src/main/java/org/pentaho/big/data/impl/shim/pig/PigServiceFactoryLoader.java), a [factory](https://github.com/pentaho/big-data-plugin/blob/master/impl/shim/pig/src/main/java/org/pentaho/big/data/impl/shim/pig/PigServiceFactoryImpl.java), and a [service](https://github.com/pentaho/big-data-plugin/blob/master/impl/shim/pig/src/main/java/org/pentaho/big/data/impl/shim/pig/PigServiceImpl.java) need to be implemented.\n\nThe factory loader implements the [HadoopConfigurationListener](https://github.com/pentaho/big-data-plugin/blob/master/legacy/src/main/java/org/pentaho/di/core/hadoop/HadoopConfigurationListener.java) interface. 
Its job is [to instantiate a new factory and register it with the service locator](https://github.com/pentaho/big-data-plugin/blob/master/impl/shim/pig/src/main/java/org/pentaho/big/data/impl/shim/pig/PigServiceFactoryLoader.java) for each HadoopConfiguration that opens, and to unregister the factory when the HadoopConfiguration is closed.\n\nThe factory has two parent classloaders: the OSGi Bundle Context Classloader and the Shim's classloader.  This way it can implement the Factory interface, and the Service it instantiates can use the shim classes to do the work.\n\n![Logic flow chart](shim-bridge-classloading.png)\n\nThe Service interface is able to reference anything in the shim to do its job, but sticking with the hadoop shim api classes is preferable, as they are less likely to change from shim to shim.\n\n[Example blueprint](https://github.com/pentaho/big-data-plugin/blob/master/impl/shim/pig/src/main/resources/OSGI-INF/blueprint/blueprint.xml)\n\nkettle-plugins\n--------------\nThe step and job entry logic and dialog code.\n\nThese may depend on Kettle artifacts as well as the api artifacts (above), but should NOT depend on the legacy plugin, the hadoop api, or any shim artifacts to do their job. They are OSGi bundles that provide Kettle plugins via blueprint.\n\nThey can use the [NamedClusterServiceLocator](https://github.com/pentaho/big-data-plugin/blob/master/api/clusterServiceLocator/src/main/java/org/pentaho/big/data/api/cluster/service/locator/NamedClusterServiceLocator.java) interface to get services for a given NamedCluster.\n\n[Example blueprint](https://github.com/pentaho/big-data-plugin/blob/master/kettle-plugins/pig/src/main/resources/OSGI-INF/blueprint/blueprint.xml)\n"
  },
  {
    "path": "dev-doc/shimprovements.md",
    "content": "Big Data Plugin in 6.1\n======================\nOSGi\n----\nAs of 6.1, all the main Hadoop functionality (HDFS, MapReduce, PMR, HBase, Pig, Sqoop, Oozie, YARN) is accessible via OSGi services.\n\nHDFS, Pig, and YARN were moved to OSGi services in the 6.0 version of the software.  For 6.1, MapReduce, PMR, HBase, Oozie, and Sqoop were moved to OSGi services. This doesn't introduce a new paradigm, it just completes the migration of the functionality to OSGi.\n\nThis won't impact the user experience.  Shims, configuration files are in the same place and all saved jobs and transformations will continue to work.  However, the steps and services themselves are now in OSGi.\n\nThis change allows any OSGi plugin to leverage OSGi services in the future; these services are no longer limited to the Big Data Plugin.\n\nIt also paves the way for the eventual addition of more advanced authentication/authorization as well as multi-shim support.\n\nJDBC\n----\nHive and Impala Drivers have not been migrated to OSGi, and are still part of the Big Data Plugin.\n\nFiles Being Moved/Modified\n--------------------------\nThe individual Kettle Plugins from the old Big Data Plugin have been split into an API that exposes shim capability as a series of OSGi services, an implementation using the shim, and the Kettle Plugin that consumes the API.  The shims themselves and the configuration of the Big Data Plugin have not changed for this release.\n\nAffected Products\n-----------------\nThis affects all parts of the stack capable of using the Kettle Big Data Plugin steps and job entries.\n\nLicense Impact\n--------------\nThere should be no change in licensing driven by these changes.  Kerberos support and the YARN service and job entries remain EE features while the rest is still open.\n\nDeployment Impact\n-----------------\nUpdates to the legacy Big Data Plugin should be the same as before. 
Either drop a new big-data-plugin folder into the plugins directory and configure it, or unzip a new shim in the hadoop-configurations directory.\n\nUpdates to the OSGi bundles can currently be accomplished most easily by building the same version as the release and overwriting the bundle in the Karaf system repository.  After this, stop the tool, remove the Karaf cache, and restart the tool. The bundle-updating process will be improved after 6.1, and we will be aiming for a much easier deployment scenario.\n"
  },
  {
    "path": "impl/cluster/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-impl</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-impl-cluster</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>jar</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n    <impl-cluster-mockito.version>3.12.4</impl-cluster-mockito.version>\n    <mockito-core.version>5.17.0</mockito-core.version>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>metastore</artifactId>\n      <version>${metastore.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>commons-beanutils</groupId>\n      <artifactId>commons-beanutils</artifactId>\n      <version>${commons-beanutils.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      
<artifactId>mockito-core</artifactId>\n      <version>${mockito-core.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-inline</artifactId>\n      <version>${mockito-inline.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>com.google.code.bean-matchers</groupId>\n      <artifactId>bean-matchers</artifactId>\n      <version>${dependency.bean-matchers.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-api</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.osgi</groupId>\n      <artifactId>osgi.core</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.osgi</groupId>\n      <artifactId>osgi.cmpn</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "impl/cluster/src/it/resources/core-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--Autogenerated by Cloudera Manager-->\n<configuration>\n  <property>\n    <name>fs.defaultFS</name>\n    <value>hdfs://CDH61Secure</value>\n  </property>\n  <property>\n    <name>fs.trash.interval</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>io.compression.codecs</name>\n    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,org.apache.hadoop.io.compress.SnappyCodec,org.apache.hadoop.io.compress.Lz4Codec</value>\n  </property>\n  <property>\n    <name>hadoop.security.authentication</name>\n    <value>kerberos</value>\n  </property>\n  <property>\n    <name>hadoop.security.authorization</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hadoop.rpc.protection</name>\n    <value>privacy</value>\n  </property>\n  <!--'hadoop.security.key.provider.path', originally set to 'kms://https@svqxobcdh61secn1.pentaho.net:16000/kms' (non-final), is overridden below by a safety valve-->\n  <property>\n    <name>hadoop.security.auth_to_local</name>\n    <value>DEFAULT</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.oozie.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.oozie.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.flume.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.flume.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.HTTP.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.HTTP.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hive.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hive.groups</name>\n    <value>*</value>\n  
</property>\n  <property>\n    <name>hadoop.proxyuser.hue.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hue.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.httpfs.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.httpfs.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hdfs.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hdfs.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.yarn.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.yarn.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.security.group.mapping</name>\n    <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>\n  </property>\n  <property>\n    <name>hadoop.security.instrumentation.requires.admin</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>net.topology.script.file.name</name>\n    <value>/etc/hadoop/conf.cloudera.yarn/topology.py</value>\n  </property>\n  <property>\n    <name>io.file.buffer.size</name>\n    <value>65536</value>\n  </property>\n  <property>\n    <name>hadoop.ssl.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hadoop.ssl.require.client.cert</name>\n    <value>false</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.keystores.factory.class</name>\n    <value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.server.conf</name>\n    <value>ssl-server.xml</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.client.conf</name>\n    <value>ssl-client.xml</value>\n    <final>true</final>\n  </property>\n  <property>\n    
<name>hadoop.security.key.provider.path</name>\n    <value>kms://https@svqxobcdh61secn1.pentaho.net:16000/kms</value>\n  </property>\n</configuration>\n"
  },
  {
    "path": "impl/cluster/src/main/java/org/pentaho/big/data/impl/cluster/NamedClusterImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster;\n\nimport java.io.ByteArrayInputStream;\nimport java.io.InputStream;\nimport java.io.StringWriter;\nimport java.lang.reflect.InvocationTargetException;\nimport java.util.ArrayList;\nimport java.util.Iterator;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.regex.Matcher;\nimport java.util.regex.Pattern;\n\nimport javax.xml.XMLConstants;\nimport javax.xml.parsers.DocumentBuilder;\nimport javax.xml.parsers.DocumentBuilderFactory;\nimport javax.xml.parsers.ParserConfigurationException;\nimport javax.xml.transform.Transformer;\nimport javax.xml.transform.TransformerException;\nimport javax.xml.transform.TransformerFactory;\nimport javax.xml.transform.dom.DOMSource;\nimport javax.xml.transform.stream.StreamResult;\n\nimport org.apache.commons.beanutils.BeanMap;\nimport org.apache.commons.beanutils.BeanUtils;\nimport org.apache.commons.vfs2.FileName;\nimport org.apache.commons.vfs2.provider.url.UrlFileName;\nimport org.apache.commons.vfs2.provider.url.UrlFileNameParser;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.exception.KettleValueException;\nimport org.pentaho.di.core.osgi.api.NamedClusterOsgi;\nimport org.pentaho.di.core.osgi.api.NamedClusterSiteFile;\nimport org.pentaho.di.core.osgi.impl.NamedClusterSiteFileImpl;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaBase;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport 
org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.security.Base64TwoWayPasswordEncoder;\nimport org.pentaho.metastore.api.security.ITwoWayPasswordEncoder;\nimport org.pentaho.metastore.persist.MetaStoreAttribute;\nimport org.pentaho.metastore.persist.MetaStoreElementType;\nimport org.w3c.dom.Document;\nimport org.w3c.dom.Element;\nimport org.w3c.dom.Node;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\n\nimport com.google.common.annotations.VisibleForTesting;\n\n@MetaStoreElementType( name = \"NamedCluster\", description = \"A NamedCluster\" )\npublic class NamedClusterImpl implements NamedCluster, NamedClusterOsgi {\n\n  public static final String HDFS_SCHEME = \"hdfs\";\n  public static final String MAPRFS_SCHEME = \"maprfs\";\n  public static final String WASB_SCHEME = \"wasb\";\n  public static final String NC_SCHEME = \"hc\";\n  public static final String ID = \"id\";\n  public static final String CHILD = \"child\";\n  public static final String CHILDREN = \"children\";\n  public static final String STRING = \"string\";\n  public static final String VALUE = \"value\";\n  public static final String UPPER_STRING = \"String\";\n\n\n  private static final Logger LOGGER = LogManager.getLogger( NamedClusterImpl.class );\n\n  private VariableSpace variables = new Variables();\n\n  @MetaStoreAttribute\n  private String name;\n\n  @MetaStoreAttribute\n  private String shimIdentifier;\n\n  @MetaStoreAttribute\n  private String storageScheme;\n\n  @MetaStoreAttribute\n  private String hdfsHost;\n  @MetaStoreAttribute\n  private String hdfsPort;\n  @MetaStoreAttribute\n  private String hdfsUsername;\n  @MetaStoreAttribute\n  private String hdfsPassword; //encrypted\n  @MetaStoreAttribute\n  private String jobTrackerHost;\n  @MetaStoreAttribute\n  private String 
jobTrackerPort;\n\n  @MetaStoreAttribute\n  private String zooKeeperHost;\n  @MetaStoreAttribute\n  private String zooKeeperPort;\n\n  @MetaStoreAttribute\n  private String oozieUrl;\n\n  @MetaStoreAttribute\n  @Deprecated\n  private boolean mapr;\n\n  @MetaStoreAttribute\n  private String gatewayUrl;\n\n  @MetaStoreAttribute\n  private String gatewayUsername;\n\n  @MetaStoreAttribute\n  private String gatewayPassword;  //encrypted\n  @MetaStoreAttribute\n  private boolean useGateway;\n\n  @MetaStoreAttribute\n  private String kafkaBootstrapServers;\n\n  @MetaStoreAttribute\n  private long lastModifiedDate = System.currentTimeMillis();\n\n  @MetaStoreAttribute\n  private List<NamedClusterSiteFile> siteFiles;\n\n  private ITwoWayPasswordEncoder passwordEncoder = new Base64TwoWayPasswordEncoder();\n\n  private static String hadoopActiveConfiguration = null;\n\n  public NamedClusterImpl() {\n    siteFiles = new ArrayList<>();\n    initializeVariablesFrom( null );\n  }\n\n  public NamedClusterImpl( NamedCluster namedCluster ) {\n    this();\n    replaceMeta( namedCluster );\n  }\n\n  public void setName( String name ) {\n    this.name = name;\n  }\n\n  public String getName() {\n    return name;\n  }\n\n  public String getShimIdentifier() {\n    return this.shimIdentifier;\n  }\n\n  public void setShimIdentifier( String shimIdentifier ) {\n    this.shimIdentifier = shimIdentifier;\n  }\n\n  public String getStorageScheme() {\n    if ( storageScheme == null ) {\n      if ( isMapr() ) {\n        storageScheme = MAPRFS_SCHEME;\n      } else {\n        storageScheme = HDFS_SCHEME;\n      }\n    }\n    return storageScheme;\n  }\n\n  public void setStorageScheme( String storageScheme ) {\n    this.storageScheme = storageScheme;\n  }\n\n  public void copyVariablesFrom( VariableSpace space ) {\n    variables.copyVariablesFrom( space );\n  }\n\n  public String environmentSubstitute( String aString ) {\n    return variables.environmentSubstitute( aString );\n  }\n\n  public 
String[] environmentSubstitute( String[] aString ) {\n    return variables.environmentSubstitute( aString );\n  }\n\n  public String fieldSubstitute( String aString, RowMetaInterface rowMeta, Object[] rowData )\n    throws KettleValueException {\n    return variables.fieldSubstitute( aString, rowMeta, rowData );\n  }\n\n  public VariableSpace getParentVariableSpace() {\n    return variables.getParentVariableSpace();\n  }\n\n  public void setParentVariableSpace( VariableSpace parent ) {\n    variables.setParentVariableSpace( parent );\n  }\n\n  public String getVariable( String variableName, String defaultValue ) {\n    return variables.getVariable( variableName, defaultValue );\n  }\n\n  public String getVariable( String variableName ) {\n    return variables.getVariable( variableName );\n  }\n\n  public boolean getBooleanValueOfVariable( String variableName, boolean defaultValue ) {\n    if ( !Utils.isEmpty( variableName ) ) {\n      String value = environmentSubstitute( variableName );\n      if ( !Utils.isEmpty( value ) ) {\n        return ValueMetaBase.convertStringToBoolean( value );\n      }\n    }\n    return defaultValue;\n  }\n\n  public void initializeVariablesFrom( VariableSpace parent ) {\n    variables.initializeVariablesFrom( parent );\n  }\n\n  public String[] listVariables() {\n    return variables.listVariables();\n  }\n\n  public void setVariable( String variableName, String variableValue ) {\n    variables.setVariable( variableName, variableValue );\n  }\n\n  public void shareVariablesWith( VariableSpace space ) {\n    variables = space;\n  }\n\n  public void injectVariables( Map<String, String> prop ) {\n    variables.injectVariables( prop );\n  }\n\n  public void replaceMeta( NamedCluster nc ) {\n    this.setName( nc.getName() );\n    this.setShimIdentifier( nc.getShimIdentifier() );\n    this.setStorageScheme( nc.getStorageScheme() );\n    this.setHdfsHost( nc.getHdfsHost() );\n    this.setHdfsPort( nc.getHdfsPort() );\n    
this.setHdfsUsername( nc.getHdfsUsername() );\n    this.setHdfsPassword( nc.getHdfsPassword() );\n    this.setJobTrackerHost( nc.getJobTrackerHost() );\n    this.setJobTrackerPort( nc.getJobTrackerPort() );\n    this.setZooKeeperHost( nc.getZooKeeperHost() );\n    this.setZooKeeperPort( nc.getZooKeeperPort() );\n    this.setOozieUrl( nc.getOozieUrl() );\n    this.setMapr( nc.isMapr() );\n    this.setGatewayUrl( nc.getGatewayUrl() );\n    this.setGatewayUsername( nc.getGatewayUsername() );\n    this.setGatewayPassword( nc.getGatewayPassword() );\n    this.setUseGateway( nc.isUseGateway() );\n    this.setKafkaBootstrapServers( nc.getKafkaBootstrapServers() );\n    this.lastModifiedDate = System.currentTimeMillis();\n    for ( NamedClusterSiteFile ncsf : nc.getSiteFiles() ) {\n      this.siteFiles.add( ncsf.copy() );\n    }\n  }\n\n  public NamedClusterImpl clone() {\n    return new NamedClusterImpl( this );\n  }\n\n  @Override\n  public String processURLsubstitution( String incomingURL, IMetaStore metastore, VariableSpace variableSpace ) {\n    if ( isUseGateway() ) {\n      if ( incomingURL.startsWith( NC_SCHEME ) ) {\n        return incomingURL;\n      }\n      StringBuilder builder = new StringBuilder( NC_SCHEME + \"://\" );\n      builder.append( getName() );\n      builder.append( incomingURL.startsWith( \"/\" ) ? 
incomingURL : \"/\" + incomingURL );\n      return builder.toString();\n    } else if ( isMapr() ) {\n      String url = processURLsubstitution( incomingURL, MAPRFS_SCHEME, metastore, variableSpace );\n      if ( url != null && !url.startsWith( MAPRFS_SCHEME ) ) {\n        url = MAPRFS_SCHEME + \"://\" + url;\n      }\n      return url;\n    } else {\n      return processURLsubstitution( incomingURL, getStorageScheme(), metastore, variableSpace );\n    }\n  }\n\n  private String processURLsubstitution( String incomingURL, String hdfsScheme, IMetaStore metastore,\n                                         VariableSpace variableSpace ) {\n\n    String outgoingURL = null;\n    String clusterURL = null;\n    if ( !hdfsScheme.equals( MAPRFS_SCHEME ) ) {\n      clusterURL = generateURL( hdfsScheme, metastore, variableSpace );\n    }\n    try {\n      if ( clusterURL == null || isHdfsHostEmpty( variableSpace ) ) {\n        outgoingURL = incomingURL;\n      } else if ( incomingURL.equals( \"/\" ) ) {\n        outgoingURL = clusterURL;\n      } else if ( clusterURL != null ) {\n        String noVariablesURL = incomingURL.replaceAll( \"[${}]\", \"/\" );\n\n        String fullyQualifiedIncomingURL = incomingURL;\n        if ( !incomingURL.startsWith( hdfsScheme ) && !incomingURL.startsWith( NC_SCHEME ) ) {\n          fullyQualifiedIncomingURL = clusterURL + incomingURL;\n          noVariablesURL = clusterURL + incomingURL.replaceAll( \"[${}]\", \"/\" );\n        }\n\n        UrlFileNameParser parser = new UrlFileNameParser();\n        FileName fileName = parser.parseUri( null, null, noVariablesURL );\n        String root = fileName.getRootURI();\n        String path = fullyQualifiedIncomingURL.substring( root.length() - 1 );\n        StringBuilder buffer = new StringBuilder();\n        // Check for a special case where a fully qualified path (one that has the protocol in it).\n        // This can only happen through variable replacement. See BACKLOG-15849. 
When this scenario\n        // occurs we do not prepend the cluster uri to the url.\n        boolean prependCluster = true;\n        if ( variableSpace != null ) {\n          String filePath = variableSpace.environmentSubstitute( path );\n          StringBuilder pattern = new StringBuilder();\n          pattern.append( \"^(\" ).append( HDFS_SCHEME ).append( \"|\" ).append( WASB_SCHEME ).append( \"|\" ).append(\n              MAPRFS_SCHEME ).append( \"|\" ).append( NC_SCHEME ).append( \"):\\\\/\\\\/\" );\n          Pattern r = Pattern.compile( pattern.toString() );\n          Matcher m = r.matcher( filePath );\n          prependCluster = !m.find();\n        }\n        if ( prependCluster ) {\n          buffer.append( clusterURL );\n        }\n        buffer.append( path );\n        outgoingURL = buffer.toString();\n      }\n    } catch ( Exception e ) {\n      outgoingURL = null;\n    }\n    return outgoingURL;\n  }\n\n  @VisibleForTesting boolean isHdfsHostEmpty( VariableSpace variableSpace ) {\n    String hostNameParsed = getHostNameParsed( variableSpace );\n    return hostNameParsed == null || hostNameParsed.trim().isEmpty();\n  }\n\n  public String getHostNameParsed( VariableSpace variableSpace ) {\n    if ( StringUtil.isVariable( hdfsHost ) ) {\n      if ( variableSpace == null ) {\n        return null;\n      }\n      return variableSpace.getVariable( StringUtil.getVariableName( getHdfsHost() ) );\n    }\n    return hdfsHost != null ? 
hdfsHost.trim() : null;\n  }\n\n  /**\n   * This method generates the URL from the specific NamedCluster using the specified scheme.\n   *\n   * @param scheme the name of the scheme to use to create the URL\n   * @return the generated URL from the specific NamedCluster or null if an error occurs\n   */\n  @VisibleForTesting String generateURL( String scheme, IMetaStore metastore, VariableSpace variableSpace ) {\n    String clusterURL = null;\n    try {\n      if ( !Utils.isEmpty( scheme ) ) {\n        String ncHostname = getHdfsHost() != null ? getHdfsHost() : \"\";\n        String ncPort = getHdfsPort() != null ? getHdfsPort() : \"\";\n        String ncUsername = getHdfsUsername() != null ? getHdfsUsername() : \"\";\n        String ncPassword = getHdfsPassword() != null ? decodePassword( getHdfsPassword() ) : \"\";\n\n        if ( variableSpace != null ) {\n          variableSpace.initializeVariablesFrom( getParentVariableSpace() );\n          if ( StringUtil.isVariable( scheme ) ) {\n            scheme =\n              variableSpace.getVariable( StringUtil.getVariableName( scheme ) ) != null ? variableSpace\n                .environmentSubstitute( scheme ) : null;\n          }\n          if ( StringUtil.isVariable( ncHostname ) ) {\n            ncHostname =\n              variableSpace.getVariable( StringUtil.getVariableName( ncHostname ) ) != null ? variableSpace\n                .environmentSubstitute( ncHostname ) : null;\n          }\n          if ( StringUtil.isVariable( ncPort ) ) {\n            ncPort =\n              variableSpace.getVariable( StringUtil.getVariableName( ncPort ) ) != null ? variableSpace\n                .environmentSubstitute( ncPort ) : null;\n          }\n          if ( StringUtil.isVariable( ncUsername ) ) {\n            ncUsername =\n              variableSpace.getVariable( StringUtil.getVariableName( ncUsername ) ) != null ? 
variableSpace\n                .environmentSubstitute( ncUsername ) : null;\n          }\n          if ( StringUtil.isVariable( ncPassword ) ) {\n            ncPassword =\n              variableSpace.getVariable( StringUtil.getVariableName( ncPassword ) ) != null ? variableSpace\n                .environmentSubstitute( ncPassword ) : null;\n          }\n        }\n\n        ncHostname = ncHostname != null ? ncHostname.trim() : \"\";\n        if ( ncPort == null ) {\n          ncPort = \"-1\";\n        } else {\n          ncPort = ncPort.trim();\n          if ( Utils.isEmpty( ncPort ) ) {\n            ncPort = \"-1\";\n          }\n        }\n        ncUsername = ncUsername != null ? ncUsername.trim() : \"\";\n        ncPassword = ncPassword != null ? ncPassword.trim() : \"\";\n\n        UrlFileName file =\n          new UrlFileName( scheme, ncHostname, Integer.parseInt( ncPort ), -1, ncUsername, ncPassword, null, null,\n            null );\n        clusterURL = file.getURI();\n        if ( clusterURL.endsWith( \"/\" ) ) {\n          clusterURL = clusterURL.substring( 0, clusterURL.lastIndexOf( '/' ) );\n        }\n      }\n    } catch ( Exception e ) {\n      clusterURL = null;\n    }\n    return clusterURL;\n  }\n\n  /* (non-Javadoc)\n   * @see java.lang.Object#equals(java.lang.Object)\n   */\n  @Override\n  public boolean equals( Object obj ) {\n    if ( this == obj ) {\n      return true;\n    }\n    if ( obj == null ) {\n      return false;\n    }\n    if ( getClass() != obj.getClass() ) {\n      return false;\n    }\n    NamedCluster other = (NamedCluster) obj;\n    if ( name == null ) {\n      if ( other.getName() != null ) {\n        return false;\n      }\n    } else if ( !name.equals( other.getName() ) ) {\n      return false;\n    }\n    return true;\n  }\n\n  public String getHdfsHost() {\n    return hdfsHost;\n  }\n\n  public void setHdfsHost( String hdfsHost ) {\n    this.hdfsHost = hdfsHost;\n  }\n\n  public String getHdfsPort() {\n    return 
hdfsPort;\n  }\n\n  public void setHdfsPort( String hdfsPort ) {\n    this.hdfsPort = hdfsPort;\n  }\n\n  public String getHdfsUsername() {\n    return hdfsUsername;\n  }\n\n  public void setHdfsUsername( String hdfsUsername ) {\n    this.hdfsUsername = hdfsUsername;\n  }\n\n  public String getHdfsPassword() {\n    return hdfsPassword;\n  }\n\n  public void setHdfsPassword( String hdfsPassword ) {\n    this.hdfsPassword = hdfsPassword;\n  }\n\n  public String getJobTrackerHost() {\n    return jobTrackerHost;\n  }\n\n  public void setJobTrackerHost( String jobTrackerHost ) {\n    this.jobTrackerHost = jobTrackerHost;\n  }\n\n  public String getJobTrackerPort() {\n    return jobTrackerPort;\n  }\n\n  public void setJobTrackerPort( String jobTrackerPort ) {\n    this.jobTrackerPort = jobTrackerPort;\n  }\n\n  public String getZooKeeperHost() {\n    return zooKeeperHost;\n  }\n\n  public void setZooKeeperHost( String zooKeeperHost ) {\n    this.zooKeeperHost = zooKeeperHost;\n  }\n\n  public String getZooKeeperPort() {\n    return zooKeeperPort;\n  }\n\n  public void setZooKeeperPort( String zooKeeperPort ) {\n    this.zooKeeperPort = zooKeeperPort;\n  }\n\n  public String getOozieUrl() {\n    return oozieUrl;\n  }\n\n  public void setOozieUrl( String oozieUrl ) {\n    this.oozieUrl = oozieUrl;\n  }\n\n  public long getLastModifiedDate() {\n    return lastModifiedDate;\n  }\n\n  public void setLastModifiedDate( long lastModifiedDate ) {\n    this.lastModifiedDate = lastModifiedDate;\n  }\n\n  public void setMapr( boolean mapr ) {\n    if ( mapr ) {\n      setStorageScheme( MAPRFS_SCHEME );\n    }\n  }\n\n  @Deprecated\n  public boolean isMapr() {\n    if ( storageScheme == null ) {\n      return mapr;\n    } else {\n      return storageScheme.equals( MAPRFS_SCHEME );\n    }\n  }\n\n  @Override\n  public String toString() {\n    return \"Named cluster: \" + getName();\n  }\n\n  public String toXmlForEmbed( String rootTag ) {\n    BeanMap m = new BeanMap( this );\n    
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();\n    DocumentBuilder builder = null;\n    Document doc = null;\n    try {\n      builder = dbf.newDocumentBuilder();\n      doc = builder.newDocument();\n      Element rootNode = doc.createElement( rootTag );\n      doc.appendChild( rootNode );\n      Iterator<Map.Entry<Object, Object>> i = m.entryIterator();\n      while ( i.hasNext() ) {\n        Map.Entry<Object, Object> entry = i.next();\n        String elementName = (String) entry.getKey();\n        if ( !\"class\".equals( elementName ) && !\"parentVariableSpace\".equals( elementName ) ) {\n          String value = \"\";\n          String type = UPPER_STRING;\n          Element children = null;\n          Object o = entry.getValue();\n          if ( o != null ) {\n            if ( o instanceof ArrayList ) {\n              value = NamedClusterSiteFileImpl.class.getName();\n              children = createSiteFileChildren( doc, ( (ArrayList<NamedClusterSiteFile>) o ) );\n            } else if ( o instanceof Long ) {\n              value = Long.toString( (Long) o );\n            } else if ( o instanceof Boolean ) {\n              value = Boolean.toString( (Boolean) o );\n            } else {\n              try {\n                value = (String) entry.getValue();\n                if ( elementName.toLowerCase().contains( \"password\" ) ) {\n                  value = encodePassword( value );\n                }\n              } catch ( Exception e ) {\n                LOGGER.error( \"Error encoding password\", e );\n              }\n            }\n          }\n          rootNode.appendChild( createChildElement( doc, elementName, type, value, children ) );\n        }\n      }\n      DOMSource domSource = new DOMSource( doc );\n      StringWriter writer = new StringWriter();\n      StreamResult result = new StreamResult( writer );\n      TransformerFactory tf = TransformerFactory.newInstance();\n      tf.setFeature( 
XMLConstants.FEATURE_SECURE_PROCESSING, true );\n      tf.setAttribute( XMLConstants.ACCESS_EXTERNAL_DTD, \"\" );\n      tf.setAttribute( XMLConstants.ACCESS_EXTERNAL_STYLESHEET, \"\" );\n      Transformer transformer = tf.newTransformer();\n      transformer.transform( domSource, result );\n      String s = writer.toString();\n      // Remove header from the XML\n      s = s.substring( s.indexOf( '>' ) + 1 );\n      return s;\n    } catch ( ParserConfigurationException | TransformerException e1 ) {\n      LOGGER.error( \"Could not parse embedded cluster xml\", e1 );\n      return \"\";\n    }\n  }\n\n  private Element createSiteFileChildren( Document doc, ArrayList<NamedClusterSiteFile> siteFiles ) {\n    Element children = doc.createElement( CHILDREN );\n    int index = 0;\n    for ( NamedClusterSiteFile sitefile : siteFiles ) {\n      Element siteChildren = doc.createElement( CHILDREN );\n      siteChildren\n        .appendChild( createChildElement( doc, \"siteFileContents\", UPPER_STRING, sitefile.getSiteFileContents(), null ) );\n      siteChildren\n        .appendChild( createChildElement( doc, \"siteFileName\",  UPPER_STRING, sitefile.getSiteFileName(), null ) );\n      children.appendChild( createChildElement( doc, String.valueOf( index++ ),  UPPER_STRING, \"\", siteChildren ) );\n    }\n    return children;\n  }\n\n  public NamedCluster fromXmlForEmbed( Node node ) {\n    NamedClusterImpl returnCluster = this.clone();\n    List<Node> fields = XMLHandler.getNodes( node, CHILD );\n    for ( Node field: fields ) {\n      String fieldName = XMLHandler.getTagValue( field, ID );\n      Object fieldValue = null;\n\n      if ( \"siteFiles\".equals( fieldName ) ) {\n        fieldValue = unmarshallSiteFileNode( field );\n      } else {\n        String stringValue = XMLHandler.getTagValue( field, VALUE );\n        if ( fieldName.toLowerCase().contains( \"password\" ) ) {\n          stringValue = decodePassword( stringValue );\n        }\n        fieldValue = 
stringValue;\n      }\n      try {\n        BeanUtils.setProperty( returnCluster, fieldName, fieldValue );\n      } catch ( IllegalAccessException | InvocationTargetException e ) {\n        LOGGER.error( \"Could not set field \" + fieldName + \" in NamedCluster\", e );\n      }\n    }\n    return returnCluster;\n  }\n\n  private Object unmarshallSiteFileNode( Node field ) {\n    ArrayList<NamedClusterSiteFile> namedClusterSiteFiles = new ArrayList<>();\n    Node siteFileWrapper = XMLHandler.getSubNode( field, CHILDREN );\n    if ( siteFileWrapper != null ) {\n      unmarshallSiteFiles( namedClusterSiteFiles, XMLHandler.getNodes( siteFileWrapper, CHILD )  );\n    }\n    return namedClusterSiteFiles;\n  }\n\n  private void unmarshallSiteFiles( ArrayList<NamedClusterSiteFile> namedClusterSiteFiles, List<Node> siteFileNodes ) {\n    for ( Node siteFile : siteFileNodes ) {\n      namedClusterSiteFiles.add( unmarshallSiteFields( XMLHandler.getNodes( XMLHandler.getSubNode( siteFile, CHILDREN ), CHILD ) ) );\n    }\n  }\n\n  private NamedClusterSiteFileImpl unmarshallSiteFields( List<Node> siteFields ) {\n    NamedClusterSiteFileImpl namedClusterSiteFile = new NamedClusterSiteFileImpl();\n    for ( Node siteField : siteFields ) {\n      String id = XMLHandler.getTagValue( siteField, ID );\n      if ( id != null && !id.isEmpty() ) {\n        try {\n          BeanUtils.setProperty( namedClusterSiteFile, id, XMLHandler.getTagValue( siteField, VALUE ) );\n        } catch ( IllegalAccessException | InvocationTargetException e ) {\n          LOGGER.error( \"Could not set field \" + id + \" in NamedClusterSiteFile\", e );\n        }\n      }\n    }\n    return namedClusterSiteFile;\n  }\n\n  private Node createChildElement( Document doc, String elementName, String elementType, String elementValue, Element children ) {\n    Element childNode = doc.createElement( CHILD );\n    childNode.appendChild( createTextNode( doc, ID, elementName ) );\n    childNode.appendChild( 
createTextNode( doc, VALUE, elementValue ) );\n    childNode.appendChild( createTextNode( doc, \"type\", elementType ) );\n    if ( children != null ) {\n      childNode.appendChild( children );\n    }\n    return childNode;\n  }\n\n  private Node createTextNode( Document doc, String tagName, String value ) {\n    Node node = doc.createElement( tagName );\n    node.appendChild( doc.createTextNode( value ) );\n    return node;\n  }\n\n  @Override\n  public String getGatewayUrl() {\n    return gatewayUrl;\n  }\n\n  @Override\n  public void setGatewayUrl( String gatewayUrl ) {\n    this.gatewayUrl = gatewayUrl;\n  }\n\n  @Override\n  public String getGatewayUsername() {\n    return gatewayUsername;\n  }\n\n  @Override\n  public void setGatewayUsername( String gatewayUsername ) {\n    this.gatewayUsername = gatewayUsername;\n  }\n\n  @Override\n  public String getGatewayPassword() {\n    return decodePassword( gatewayPassword );\n  }\n\n  @Override\n  public void setGatewayPassword( String gatewayPassword ) {\n    this.gatewayPassword = encodePassword( gatewayPassword );\n  }\n\n  @Override\n  public boolean isUseGateway() {\n    return useGateway;\n  }\n\n  @Override\n  public void setUseGateway( boolean useGateway ) {\n    this.useGateway = useGateway;\n  }\n\n  @Override public String getKafkaBootstrapServers() {\n    return kafkaBootstrapServers;\n  }\n\n  @Override public void setKafkaBootstrapServers( String kafkaBootstrapServers ) {\n    this.kafkaBootstrapServers = kafkaBootstrapServers;\n  }\n\n  @Override public NamedClusterOsgi nonOsgiFromXmlForEmbed( Node node ) {\n    return (NamedClusterOsgi) fromXmlForEmbed( node );\n  }\n\n  public String decodePassword( String password ) {\n    if ( password == null || password.startsWith( Encr.PASSWORD_ENCRYPTED_PREFIX ) ) {\n      return Encr.decryptPasswordOptionallyEncrypted( password );\n    } else {\n      //Password is likely stored encrypted with legacy Base64TwoWayPasswordEncoder\n      if ( 
!StringUtil.isVariable( password ) ) {\n        return passwordEncoder.decode( password );\n      }\n    }\n    return password;\n  }\n\n  public String encodePassword( String password ) {\n    return Encr.encryptPasswordIfNotUsingVariables( password );\n  }\n\n  @Override\n  public List<NamedClusterSiteFile> getSiteFiles() {\n    return siteFiles;\n  }\n\n  @Override\n  public void setSiteFiles( List<NamedClusterSiteFile> siteFiles ) {\n    this.siteFiles = siteFiles;\n  }\n\n  @Override\n  public void addSiteFile( String fileName, String content ) {\n    siteFiles.add( new NamedClusterSiteFileImpl( fileName, content ) );\n  }\n\n  @Override\n  public void addSiteFile( NamedClusterSiteFile namedClusterSiteFile ) {\n    siteFiles.add( namedClusterSiteFile );\n  }\n\n  @Override\n  public InputStream getSiteFileInputStream( String siteFileName ) {\n    NamedClusterSiteFile n = siteFiles.stream().filter( sf -> sf.getSiteFileName().equals( siteFileName ) )\n      .findFirst().orElse( null );\n    return n == null ? null : new ByteArrayInputStream( n.getSiteFileContents().getBytes() );\n  }\n}\n"
  },
  {
    "path": "impl/cluster/src/main/java/org/pentaho/big/data/impl/cluster/NamedClusterManager.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.apache.commons.io.FileUtils;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileType;\nimport org.osgi.framework.BundleContext;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.KettleClientEnvironment;\nimport org.pentaho.di.core.attributes.metastore.EmbeddedMetaStore;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.logging.LogChannel;\nimport org.pentaho.di.core.osgi.api.NamedClusterSiteFile;\nimport org.pentaho.di.core.osgi.impl.NamedClusterSiteFileImpl;\nimport org.pentaho.di.core.plugins.LifecyclePluginType;\nimport org.pentaho.di.core.plugins.PluginInterface;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.steps.named.cluster.NamedClusterEmbedManager;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.metastore.persist.MetaStoreFactory;\nimport org.pentaho.metastore.stores.xml.XmlMetaStore;\nimport org.pentaho.metastore.stores.xml.XmlUtil;\nimport org.pentaho.metastore.util.PentahoDefaults;\n\nimport java.io.File;\nimport 
java.io.FileInputStream;\nimport java.io.FileNotFoundException;\nimport java.io.IOException;\nimport java.nio.charset.StandardCharsets;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.stream.Collectors;\n\npublic class NamedClusterManager implements NamedClusterService {\n\n  private static NamedClusterManager instance = new NamedClusterManager();\n  public static final String BIG_DATA_SLAVE_METASTORE_DIR = \"hadoop.configurations.path\";\n  private static final Class<?> PKG = NamedClusterManager.class;\n  private Map<IMetaStore, MetaStoreFactory<NamedClusterImpl>> factoryMap = new HashMap<>();\n  private NamedCluster clusterTemplate;\n\n  private LogChannel log = new LogChannel( this );\n\n  private Map<String, Object> properties = new HashMap<>();\n  private static final String LOCALHOST = \"localhost\";\n  private static final List<String> siteFileNames =\n    Arrays.asList( \"hdfs-site.xml\", \"core-site.xml\", \"mapred-site.xml\", \"yarn-site.xml\",\n      \"hbase-site.xml\", \"hive-site.xml\" );\n\n  public static NamedClusterManager getInstance() {\n    return instance;\n  }\n\n  /**\n   * Returns a MetaStoreFactory for a given MetaStore instance. NOTE:  This method caches and returns a\n   * factory for Embedded MetaStores.  For all other MetaStores, a new instance of MetaStoreFactory will always be\n   * returned.\n   *\n   * @param metastore - the MetaStore for which to get a MetaStoreFactory.\n   * @return a MetaStoreFactory for the given MetaStore.\n   */\n  @VisibleForTesting\n  MetaStoreFactory<NamedClusterImpl> getMetaStoreFactory( IMetaStore metastore ) {\n    MetaStoreFactory<NamedClusterImpl> namedClusterMetaStoreFactory = null;\n\n    // Only MetaStoreFactories for EmbeddedMetaStores are cached.  
For all other MetaStore types, create a new\n    // MetaStoreFactory\n    if ( !( metastore instanceof EmbeddedMetaStore ) ) {\n      return new MetaStoreFactory<>( NamedClusterImpl.class, metastore, PentahoDefaults.NAMESPACE );\n    }\n\n    // cache MetaStoreFactories for Embedded MetaStores\n    namedClusterMetaStoreFactory = factoryMap.computeIfAbsent( metastore,\n      m -> ( new MetaStoreFactory<>( NamedClusterImpl.class, m, NamedClusterEmbedManager.NAMESPACE ) ) );\n\n    return namedClusterMetaStoreFactory;\n  }\n\n  @VisibleForTesting\n  void putMetaStoreFactory( IMetaStore metastore, MetaStoreFactory<NamedClusterImpl> metaStoreFactory ) {\n    factoryMap.put( metastore, metaStoreFactory );\n  }\n\n  @Override public void close( IMetaStore metastore ) {\n    factoryMap.remove( metastore );\n  }\n\n  @Override\n  public NamedCluster getClusterTemplate() {\n    if ( clusterTemplate == null ) {\n      clusterTemplate = new NamedClusterImpl();\n      clusterTemplate.setName( \"\" );\n      clusterTemplate.setHdfsHost( LOCALHOST );\n      clusterTemplate.setHdfsPort( \"8020\" );\n      clusterTemplate.setHdfsUsername( \"user\" );\n      clusterTemplate.setHdfsPassword( clusterTemplate.encodePassword( \"password\" ) );\n      clusterTemplate.setJobTrackerHost( LOCALHOST );\n      clusterTemplate.setJobTrackerPort( \"8032\" );\n      clusterTemplate.setZooKeeperHost( LOCALHOST );\n      clusterTemplate.setZooKeeperPort( \"2181\" );\n      clusterTemplate.setOozieUrl( \"http://localhost:8080/oozie\" );\n    }\n    return clusterTemplate.clone();\n  }\n\n  @Override\n  public void setClusterTemplate( NamedCluster clusterTemplate ) {\n    this.clusterTemplate = clusterTemplate;\n  }\n\n  @Override\n  public void create( NamedCluster namedCluster, IMetaStore metastore ) throws MetaStoreException {\n    getMetaStoreFactory( metastore ).saveElement( new NamedClusterImpl( namedCluster ) );\n  }\n\n  @Override\n  public NamedCluster read( String clusterName, IMetaStore 
metastore ) throws MetaStoreException {\n    MetaStoreFactory<NamedClusterImpl> factory = getMetaStoreFactory( metastore );\n\n    if ( metastore == null || !listNames( metastore ).contains( clusterName ) ) {\n      // only try the slave metastore if the given one fails\n      IMetaStore slaveMetastore = getSlaveServerMetastore();\n      if ( slaveMetastore != null && listNames( slaveMetastore ).contains( clusterName ) ) {\n        factory = getMetaStoreFactory( slaveMetastore );\n      }\n    }\n\n    NamedCluster namedCluster = null;\n    try {\n      namedCluster = factory.loadElement( clusterName );\n    } catch ( MetaStoreException e ) {\n      // While executing Pentaho MapReduce on a secure cluster, the .lock file\n      // might not be able to be created due to permissions.\n      // In this case, try and read the MetaStore without locking.\n      namedCluster = factory.loadElement( clusterName, false );\n    }\n    return namedCluster;\n  }\n\n  @Override\n  public void update( NamedCluster namedCluster, IMetaStore metastore ) throws MetaStoreException {\n    MetaStoreFactory<NamedClusterImpl> factory = getMetaStoreFactory( metastore );\n    List<NamedCluster> namedClusters = list( metastore );\n    for ( NamedCluster nc : namedClusters ) {\n      if ( namedCluster.getName().equals( nc.getName() ) ) {\n        factory.deleteElement( nc.getName() );\n        factory.saveElement( new NamedClusterImpl( namedCluster ) );\n      }\n    }\n  }\n\n  @Override\n  public void delete( String clusterName, IMetaStore metastore ) throws MetaStoreException {\n    getMetaStoreFactory( metastore ).deleteElement( clusterName );\n  }\n\n  @Override\n  public List<NamedCluster> list( IMetaStore metastore ) throws MetaStoreException {\n    MetaStoreFactory<NamedClusterImpl> factory = getMetaStoreFactory( metastore );\n    List<NamedCluster> namedClusters;\n    List<MetaStoreException> exceptionList = new ArrayList<>();\n\n    try {\n      namedClusters = new ArrayList<>( 
factory.getElements( true, exceptionList ) );\n    } catch ( MetaStoreException ex ) {\n      // While executing Pentaho MapReduce on a secure cluster, the .lock file\n      // might not be able to be created due to permissions.\n      // In this case, try and read the MetaStore without locking.\n      namedClusters = new ArrayList<>( factory.getElements( false, exceptionList ) );\n    }\n\n    return namedClusters;\n  }\n\n  /**\n   * This method lists the NamedClusters in the given IMetaStore.  If an exception is thrown when parsing the data for a\n   * given NamedCluster, the exception will be added to the exceptionList, but list generation will continue.\n   *\n   * @param metastore     the IMetaStore to operate with\n   * @param exceptionList a list to hold any exceptions that occur\n   * @return the list of NamedClusters in the provided IMetaStore\n   * @throws MetaStoreException\n   */\n  @Override\n  public List<NamedCluster> list( IMetaStore metastore, List<MetaStoreException> exceptionList )\n    throws MetaStoreException {\n    MetaStoreFactory<NamedClusterImpl> factory = getMetaStoreFactory( metastore );\n    return new ArrayList<>( factory.getElements( false, exceptionList ) );\n  }\n\n  @Override\n  public List<String> listNames( IMetaStore metastore ) throws MetaStoreException {\n    return getMetaStoreFactory( metastore ).getElementNames( false );\n  }\n\n  @Override\n  public boolean contains( String clusterName, IMetaStore metastore ) throws MetaStoreException {\n    boolean found = false;\n    if ( metastore != null ) {\n      found = listNames( metastore ).contains( clusterName );\n    }\n    if ( !found ) {\n      IMetaStore slaveMetastore = getSlaveServerMetastore();\n      if ( slaveMetastore != null ) {\n        found = listNames( slaveMetastore ).contains( clusterName );\n      }\n    }\n    return found;\n  }\n\n  @Override\n  public NamedCluster getNamedClusterByName( String namedClusterName, IMetaStore metastore ) {\n    NamedCluster 
namedCluster = null;\n    if ( metastore != null ) {\n      namedCluster = searchMetastoreByName( namedClusterName, metastore );\n    }\n    if ( namedCluster == null ) {\n      IMetaStore slaveMetastore = getSlaveServerMetastore();\n      if ( slaveMetastore != null ) {\n        namedCluster = searchMetastoreByName( namedClusterName, slaveMetastore );\n      }\n      if ( namedCluster != null ) {\n        metastore = slaveMetastore;\n      }\n    }\n    loadSiteFilesIfNecessary( namedCluster, metastore );\n    return namedCluster;\n  }\n\n  private NamedCluster searchMetastoreByName( String namedCluster, IMetaStore metastore ) {\n    try {\n      List<NamedCluster> namedClusters = list( metastore );\n      for ( NamedCluster nc : namedClusters ) {\n        if ( nc.getName().equals( namedCluster ) ) {\n          return nc;\n        }\n      }\n    } catch ( MetaStoreException e ) {\n      return null;\n    }\n    return null;\n  }\n\n  public Map<String, Object> getProperties() {\n    return properties;\n  }\n\n  @Override\n  public NamedCluster getNamedClusterByHost( String hostName, IMetaStore metastore ) {\n    NamedCluster namedCluster = null;\n    if ( hostName == null ) {\n      return null;\n    }\n    if ( metastore != null ) {\n      namedCluster = searchMetastoreByHost( hostName, metastore );\n    }\n    if ( namedCluster == null ) {\n      IMetaStore slaveMetastore = getSlaveServerMetastore();\n      if ( slaveMetastore != null ) {\n        namedCluster = searchMetastoreByHost( hostName, slaveMetastore );\n      }\n    }\n    return namedCluster;\n  }\n\n  private NamedCluster searchMetastoreByHost( String hostName, IMetaStore metastore ) {\n    try {\n      List<NamedCluster> namedClusters = list( metastore );\n      for ( NamedCluster nc : namedClusters ) {\n        if ( hostName.equals( nc.getHdfsHost() ) ) {\n          loadSiteFilesIfNecessary( nc, metastore );\n          return nc;\n        }\n      }\n    } catch ( MetaStoreException e ) {\n      
return null;\n    }\n    return null;\n  }\n\n  @Override\n  public void updateNamedClusterTemplate( String hostName, int port, boolean isMapr ) {\n    if ( clusterTemplate == null ) {\n      getClusterTemplate();\n    }\n    clusterTemplate.setHdfsHost( hostName );\n    if ( port > 0 ) {\n      clusterTemplate.setHdfsPort( String.valueOf( port ) );\n    } else {\n      clusterTemplate.setHdfsPort( \"\" );\n    }\n    clusterTemplate.setMapr( isMapr );\n  }\n\n  private String getSlaveServerMetastoreDir() throws IOException {\n    PluginInterface pluginInterface =\n      PluginRegistry.getInstance().findPluginWithId( LifecyclePluginType.class, \"HadoopSpoonPlugin\" );\n    Properties legacyProperties;\n\n    try {\n      legacyProperties = loadProperties( pluginInterface, \"plugin.properties\" );\n      String slaveMetaStorePath = legacyProperties.getProperty( BIG_DATA_SLAVE_METASTORE_DIR );\n      FileObject slaveMetastoreDir;\n\n      // check for user-specified metastore directory\n      if ( useSlaveMetastorePathFromProperties( slaveMetaStorePath ) ) {\n        return slaveMetaStorePath;\n      }\n\n      // see if metastore was copied to the big data plugin folder (yarn kettle cluster job)\n      slaveMetaStorePath = pluginInterface.getPluginDirectory().getPath();\n      slaveMetastoreDir =\n        KettleVFS.getInstance( DefaultBowl.getInstance() )\n          .getFileObject( slaveMetaStorePath + File.separator + XmlUtil.META_FOLDER_NAME );\n      if ( null != slaveMetastoreDir && slaveMetastoreDir.exists()\n        && slaveMetastoreDir.getType().equals( FileType.FOLDER )\n        // last condition exists to ensure that this path doesn't get used if two jobs are running on a slave instance\n        // at once, and one of them is packaging up the install for a yarn carte job\n        && KettleClientEnvironment.getInstance().getClient().equals( KettleClientEnvironment.ClientType.CARTE ) ) {\n        return slaveMetaStorePath;\n      }\n\n      slaveMetaStorePath 
= System.getProperty( \"user.home\" ) + File.separator + \".pentaho\";\n      slaveMetastoreDir =\n        KettleVFS.getInstance( DefaultBowl.getInstance() ).getFileObject( slaveMetaStorePath );\n      if ( null != slaveMetastoreDir && slaveMetastoreDir.exists()\n        && slaveMetastoreDir.getType().equals( FileType.FOLDER ) ) {\n        return slaveMetaStorePath;\n      } else {\n        return null;\n      }\n    } catch ( KettleFileException | NullPointerException e ) {\n      log.logError( BaseMessages.getString( PKG, \"NamedClusterManager.ErrorFindingUserMetastore\" ), e );\n      throw new IOException( e );\n    }\n  }\n\n  private boolean useSlaveMetastorePathFromProperties( String slaveMetaStorePath ) throws FileSystemException {\n    FileObject slaveMetastoreDir;\n    try {\n      slaveMetastoreDir = KettleVFS.getInstance( DefaultBowl.getInstance() )\n        .getFileObject( slaveMetaStorePath + File.separator + XmlUtil.META_FOLDER_NAME );\n      return null != slaveMetaStorePath && !slaveMetaStorePath.equals( \"\" )\n        && null != slaveMetastoreDir && slaveMetastoreDir.exists();\n    } catch ( KettleFileException e ) {\n      log.logError( BaseMessages.getString( PKG, \"NamedClusterManager.ErrorFindingUserMetastore\" ), e );\n    }\n    return false;\n  }\n\n  @VisibleForTesting\n  IMetaStore getSlaveServerMetastore() {\n    try {\n      String metastoreDir = getSlaveServerMetastoreDir();\n      if ( null != metastoreDir ) {\n        return new XmlMetaStore( getSlaveServerMetastoreDir() );\n      } else {\n        // it is essential that this method returns a null value if no slave metastore directory exists\n        return null;\n      }\n    } catch ( IOException | MetaStoreException e ) {\n      log.logError( BaseMessages.getString( PKG, \"NamedClusterManager.ErrorReadingMetastore\" ), e );\n      return null;\n    }\n  }\n\n  /**\n   * Loads a properties file from the plugin directory for the plugin interface provided\n   *\n   * @param 
plugin the plugin whose directory contains the properties file\n   * @param relativeName the name of the properties file, relative to the plugin directory\n   * @return the loaded Properties\n   * @throws KettleFileException\n   * @throws IOException\n   */\n  private Properties loadProperties( PluginInterface plugin, String relativeName ) throws KettleFileException,\n    IOException {\n    if ( plugin == null ) {\n      throw new NullPointerException();\n    }\n    FileObject propFile =\n      KettleVFS.getInstance( DefaultBowl.getInstance() )\n        .getFileObject( plugin.getPluginDirectory().getPath() + Const.FILE_SEPARATOR + relativeName );\n    if ( !propFile.exists() ) {\n      throw new FileNotFoundException( propFile.toString() );\n    }\n    // try-with-resources ensures the stream is closed even on failure\n    try ( FileInputStream propStream = new FileInputStream( propFile.getName().getPath() ) ) {\n      Properties pluginProperties = new Properties();\n      pluginProperties.load( propStream );\n      return pluginProperties;\n    } catch ( Exception e ) {\n      // Do not catch ConfigurationException. Different shims will use different\n      // packages for this exception.\n      throw new IOException( e );\n    }\n  }\n\n  private void loadSiteFilesIfNecessary( NamedCluster namedCluster, IMetaStore metaStore ) {\n    if ( namedCluster == null ) {\n      return; //Can't do anything without a cluster\n    }\n    if ( namedCluster.getSiteFiles().isEmpty() ) {\n      // This seeds the site files once if not already present - standard behavior\n      unconditionalAddOfSiteFiles( namedCluster, metaStore );\n      return;\n    }\n    if ( Boolean.parseBoolean( System.getProperties().getProperty( Const.KETTLE_AUTO_UPDATE_SITE_FILE ) ) ) {\n      // Special mode that tries to update site files by checking modification time of the file against what\n      // is stored in the named cluster\n      semiIntelligentSiteFileUpdate( namedCluster, metaStore );\n    }\n  }\n\n  private void unconditionalAddOfSiteFiles( NamedCluster namedCluster, IMetaStore metaStore ) {\n    String rootDir = getNamedClusterConfigsRootDir( metaStore );\n    for ( String siteFileName : siteFileNames ) {\n      String path = rootDir + File.separator + 
namedCluster.getName() + File.separator + siteFileName;\n      File file = new File( path );\n      if ( file.exists() ) {\n        try {\n          namedCluster.addSiteFile( new NamedClusterSiteFileImpl( siteFileName, file.lastModified(),\n            FileUtils.readFileToString( file, StandardCharsets.UTF_8.toString() ) ) );\n        } catch ( IOException e ) {\n          log.logError( \"An error occurred importing \" + path + \" into HadoopCluster \" + namedCluster.getName(), e );\n        }\n      }\n    }\n    if ( !namedCluster.getSiteFiles().isEmpty() ) {\n      autoUpdateMetastoreWithSiteFiles( namedCluster, metaStore );\n    }\n  }\n\n  private void semiIntelligentSiteFileUpdate( NamedCluster namedCluster, IMetaStore metaStore ) {\n    String rootDir = getNamedClusterConfigsRootDir( metaStore );\n    Map<String, NamedClusterSiteFile> map = namedCluster.getSiteFiles().stream().collect(\n      Collectors.toMap( NamedClusterSiteFile::getSiteFileName, namedClusterSiteFile -> namedClusterSiteFile ) );\n    List<NamedClusterSiteFile> newSiteFiles = new ArrayList<>();\n    List<String> missingFiles = new ArrayList<>();\n    for ( String siteFileName : siteFileNames ) {\n      String path = rootDir + File.separator + namedCluster.getName() + File.separator + siteFileName;\n      File file = new File( path );\n      if ( file.exists() && ( map.get( siteFileName ) == null || file.lastModified() != map.get( siteFileName )\n        .getSourceFileModificationTime() ) ) {\n        try {\n          newSiteFiles.add( new NamedClusterSiteFileImpl( siteFileName, file.lastModified(),\n            FileUtils.readFileToString( file, StandardCharsets.UTF_8.toString() ) ) );\n        } catch ( IOException e ) {\n          log.logError( \"An error occurred importing \" + path + \" into HadoopCluster \" + namedCluster.getName(), e );\n        }\n      } else {\n        //List of files where we need to retain the old site file if it exists\n        missingFiles.add( siteFileName );\n 
     }\n    }\n    // If there is nothing new then we don't need to change anything\n    if ( !newSiteFiles.isEmpty() ) {\n      //Bring in the old files not present\n      for ( String siteFile : missingFiles ) {\n        if ( map.get( siteFile ) != null ) {\n          newSiteFiles.add( map.get( siteFile ) );\n        }\n      }\n      //newSiteFiles is complete, update the named cluster and write the metastore entry\n      namedCluster.setSiteFiles( newSiteFiles );\n      autoUpdateMetastoreWithSiteFiles( namedCluster, metaStore );\n    }\n  }\n\n  private void autoUpdateMetastoreWithSiteFiles( NamedCluster namedCluster, IMetaStore metaStore ) {\n    boolean recoverOriginal = false;\n    try {\n      update( namedCluster, metaStore );\n    } catch ( MetaStoreException e ) {\n      log.logError( \"An error occurred trying to save HadoopCluster \" + namedCluster.getName()\n        + \" with embedded site files in the metastore.  Recovering original HadoopCluster.\", e );\n      recoverOriginal = true;\n    }\n    //As a safeguard make sure we can read the metastore\n    if ( !recoverOriginal ) {\n      try {\n        getNamedClusterByName( namedCluster.getName(), metaStore );\n      } catch ( Exception e ) {\n        log.logError( \"Could not successfully read back Hadoop Cluster \" + namedCluster.getName()\n          + \" after embedding site files.  Recovering original HadoopCluster.\" );\n        recoverOriginal = true;\n      }\n    }\n    if ( recoverOriginal ) {\n      // We can't read the metastore or couldn't store the new one.  
Try to put the old hadoop cluster back\n      namedCluster.setSiteFiles( new ArrayList<NamedClusterSiteFile>() );\n      try {\n        update( namedCluster, metaStore );\n      } catch ( MetaStoreException e ) {\n        log.logError( \"An error occurred trying to recover the old HadoopCluster \" + namedCluster.getName(), e );\n      }\n    }\n  }\n\n  private String getNamedClusterConfigsRootDir( IMetaStore metaStore ) {\n    String rootDir = metaStore instanceof XmlMetaStore ? ( (XmlMetaStore) metaStore ).getRootFolder()\n      : System.getProperty( \"user.home\" ) + File.separator + \".pentaho\" + File.separator + \"metastore\";\n\n    return rootDir + File.separator + \"pentaho\" + File.separator + \"NamedCluster\" + File.separator + \"Configs\";\n  }\n}\n"
  },
  {
    "path": "impl/cluster/src/main/resources/org/pentaho/big/data/impl/cluster/messages/messages_en_US.properties",
    "content": "NamedClusterManager.ErrorFindingUserMetastore=No metastore found and exception encountered looking for user-specified or legacy metastore\nNamedClusterManager.ErrorReadingMetastore=Error loading user-specified metastore"
  },
  {
    "path": "impl/cluster/src/test/java/org/pentaho/big/data/impl/cluster/NamedClusterImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster;\n\nimport org.apache.commons.io.FileUtils;\nimport org.apache.commons.vfs2.VFS;\nimport org.apache.commons.vfs2.impl.StandardFileSystemManager;\nimport org.apache.commons.vfs2.provider.UriParser;\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Ignore;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.MockedStatic;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.mockito.stubbing.Answer;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.encryption.TwoWayPasswordEncoderPluginType;\nimport org.pentaho.di.core.exception.KettleValueException;\nimport org.pentaho.di.core.osgi.api.NamedClusterSiteFile;\nimport org.pentaho.di.core.osgi.impl.NamedClusterSiteFileImpl;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.security.Base64TwoWayPasswordEncoder;\nimport org.pentaho.metastore.api.security.ITwoWayPasswordEncoder;\nimport org.w3c.dom.Element;\nimport org.w3c.dom.Node;\n\nimport javax.xml.parsers.DocumentBuilderFactory;\nimport javax.xml.xpath.XPath;\nimport javax.xml.xpath.XPathConstants;\nimport javax.xml.xpath.XPathFactory;\nimport java.io.ByteArrayInputStream;\nimport java.io.File;\nimport java.util.Map;\n\nimport static 
com.google.code.beanmatchers.BeanMatchers.hasValidBeanConstructor;\nimport static com.google.code.beanmatchers.BeanMatchers.hasValidBeanEqualsFor;\nimport static com.google.code.beanmatchers.BeanMatchers.hasValidGettersAndSetters;\nimport static org.hamcrest.MatcherAssert.assertThat;\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.AdditionalMatchers.or;\nimport static org.mockito.ArgumentMatchers.isNull;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.validateMockitoUsage;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n\n/**\n * Created by bryan on 7/14/15.\n */\n@RunWith( MockitoJUnitRunner.class )\npublic class NamedClusterImplTest {\n  private static final String HDFS_PREFIX = \"hdfs\";\n\n  private VariableSpace variableSpace;\n  private NamedClusterImpl namedCluster;\n\n  private String namedClusterName;\n  private String namedClusterHdfsHost;\n  private String namedClusterHdfsPort;\n  private String namedClusterHdfsUsername;\n  private String namedClusterHdfsPassword;\n  private String namedClusterJobTrackerPort;\n  private String namedClusterJobTrackerHost;\n  private String namedClusterZookeeperHost;\n  private String namedClusterZookeeperPort;\n  private String namedClusterOozieUrl;\n  private String namedClusterStorageScheme;\n  private String namedClusterKafkaBootstrapServers;\n  private boolean isMapr;\n  private IMetaStore metaStore;\n  private StandardFileSystemManager fsm;\n  private String fileContents1;\n  private String fileContents2;\n  private MockedStatic<VFS> vfsMockedStatic;\n  private MockedStatic<UriParser> 
uriParserMockedStatic;\n\n  @Before\n  public void setup() throws Exception {\n    PluginRegistry.addPluginType( TwoWayPasswordEncoderPluginType.getInstance() );\n    PluginRegistry.init( false );\n    Encr.init( \"Kettle\" );\n    vfsMockedStatic = Mockito.mockStatic( VFS.class );\n    uriParserMockedStatic = Mockito.mockStatic( UriParser.class );\n    uriParserMockedStatic.when( () -> UriParser.encode( anyString(), any( char[].class ) ) ).thenCallRealMethod();\n    uriParserMockedStatic.when( () -> UriParser.decode( anyString() ) ).thenCallRealMethod();\n    uriParserMockedStatic.when( () -> UriParser.appendEncoded( any( StringBuilder.class ), anyString(), any( char[].class ) ) ).thenCallRealMethod();\n\n    metaStore = mock( IMetaStore.class );\n    variableSpace = mock( VariableSpace.class );\n    namedCluster = new NamedClusterImpl();\n    namedCluster.shareVariablesWith( variableSpace );\n    namedClusterName = \"namedClusterName\";\n    namedClusterHdfsHost = \"namedClusterHdfsHost\";\n    namedClusterHdfsPort = \"12345\";\n    namedClusterHdfsUsername = \"namedClusterHdfsUsername\";\n    namedClusterHdfsPassword = \"namedClusterHdfsPassword\";\n    namedClusterJobTrackerHost = \"namedClusterJobTrackerHost\";\n    namedClusterJobTrackerPort = \"namedClusterJobTrackerPort\";\n    namedClusterZookeeperHost = \"namedClusterZookeeperHost\";\n    namedClusterZookeeperPort = \"namedClusterZookeeperPort\";\n    namedClusterOozieUrl = \"namedClusterOozieUrl\";\n    namedClusterStorageScheme = \"hdfs\";\n    namedClusterKafkaBootstrapServers = \"kafkaBootstrapServers\";\n    isMapr = true;\n    fileContents1 =\n      FileUtils.readFileToString( new File( getClass().getResource( \"/core-site.xml\" ).getFile() ), \"UTF-8\" );\n    fileContents2 = \"some printable contents\";\n\n    namedCluster.setName( namedClusterName );\n    namedCluster.setHdfsHost( namedClusterHdfsHost );\n    namedCluster.setHdfsPort( namedClusterHdfsPort );\n    namedCluster.setHdfsUsername( 
namedClusterHdfsUsername );\n    namedCluster.setHdfsPassword( namedCluster.encodePassword( namedClusterHdfsPassword ) );\n    namedCluster.setJobTrackerHost( namedClusterJobTrackerHost );\n    namedCluster.setJobTrackerPort( namedClusterJobTrackerPort );\n    namedCluster.setZooKeeperHost( namedClusterZookeeperHost );\n    namedCluster.setZooKeeperPort( namedClusterZookeeperPort );\n    namedCluster.setOozieUrl( namedClusterOozieUrl );\n    namedCluster.setMapr( isMapr );\n    namedCluster.setStorageScheme( namedClusterStorageScheme );\n    namedCluster.setKafkaBootstrapServers( namedClusterKafkaBootstrapServers );\n    namedCluster.addSiteFile( \"core-site.xml\", fileContents1 );\n    namedCluster.addSiteFile( new NamedClusterSiteFileImpl( \"hbase-site.xml\", 11111L, fileContents2 ) );\n\n    fsm = mock( StandardFileSystemManager.class );\n    vfsMockedStatic.when( VFS::getManager ).thenReturn( fsm );\n  }\n\n  @After\n  public void cleanupMocks() {\n    vfsMockedStatic.close();\n    uriParserMockedStatic.close();\n    validateMockitoUsage();\n  }\n\n  @Test\n  public void testBean() {\n    assertThat( NamedClusterImpl.class, hasValidBeanConstructor() );\n    assertThat( NamedClusterImpl.class, hasValidGettersAndSetters() );\n    assertThat( NamedClusterImpl.class, hasValidBeanEqualsFor( \"name\" ) );\n  }\n\n  @Test\n  public void testClone() {\n    long before = System.currentTimeMillis();\n    NamedClusterImpl newNamedCluster = namedCluster.clone();\n    assertEquals( namedClusterStorageScheme, newNamedCluster.getStorageScheme() );\n    assertEquals( namedClusterName, newNamedCluster.getName() );\n    assertEquals( namedClusterHdfsHost, newNamedCluster.getHdfsHost() );\n    assertEquals( namedClusterHdfsPort, newNamedCluster.getHdfsPort() );\n    assertEquals( namedClusterHdfsUsername, newNamedCluster.getHdfsUsername() );\n    assertEquals( namedClusterHdfsPassword, newNamedCluster.decodePassword( newNamedCluster.getHdfsPassword() ) );\n    assertEquals( 
namedClusterJobTrackerHost, newNamedCluster.getJobTrackerHost() );\n    assertEquals( namedClusterJobTrackerPort, newNamedCluster.getJobTrackerPort() );\n    assertEquals( namedClusterZookeeperHost, newNamedCluster.getZooKeeperHost() );\n    assertEquals( namedClusterZookeeperPort, newNamedCluster.getZooKeeperPort() );\n    assertEquals( namedClusterOozieUrl, newNamedCluster.getOozieUrl() );\n    assertEquals( namedClusterKafkaBootstrapServers, newNamedCluster.getKafkaBootstrapServers() );\n    assertTrue( before <= newNamedCluster.getLastModifiedDate() );\n    assertTrue( newNamedCluster.getLastModifiedDate() <= System.currentTimeMillis() );\n  }\n\n  @Test\n  public void testCopyVariablesFrom() {\n    VariableSpace from = mock( VariableSpace.class );\n    namedCluster.copyVariablesFrom( from );\n    verify( variableSpace ).copyVariablesFrom( from );\n  }\n\n  @Test\n  public void testEnvironmentSubstitute() {\n    String testVar = \"testVar\";\n    String testVal = \"testVal\";\n    when( variableSpace.environmentSubstitute( testVar ) ).thenReturn( testVal );\n    assertEquals( testVal, namedCluster.environmentSubstitute( testVar ) );\n  }\n\n  @Test\n  public void testArrayEnvironmentSubstitute() {\n    String[] testVars = { \"testVar\" };\n    String[] testVals = { \"testVal\" };\n    Mockito.when( variableSpace.environmentSubstitute( testVars ) ).thenReturn( testVals );\n    assertArrayEquals( testVals, namedCluster.environmentSubstitute( testVars ) );\n  }\n\n  @Test\n  public void testFieldSubstitute() throws KettleValueException {\n    String testString = \"testString\";\n    RowMetaInterface rowMetaInterface = mock( RowMetaInterface.class );\n    Object[] rowData = new Object[] {};\n    String testVal = \"testVal\";\n    when( variableSpace.fieldSubstitute( testString, rowMetaInterface, rowData ) ).thenReturn( testVal );\n    assertEquals( testVal, namedCluster.fieldSubstitute( testString, rowMetaInterface, rowData ) );\n  }\n\n  @Test\n  public void 
testGetVariableDefault() {\n    String name = \"name\";\n    String defaultValue = \"default\";\n    String val = \"val\";\n    when( variableSpace.getVariable( name, defaultValue ) ).thenReturn( val );\n    assertEquals( val, namedCluster.getVariable( name, defaultValue ) );\n  }\n\n  @Test\n  public void testGetVariable() {\n    String name = \"name\";\n    String val = \"val\";\n    when( variableSpace.getVariable( name ) ).thenReturn( val );\n    assertEquals( val, namedCluster.getVariable( name ) );\n  }\n\n  @Test\n  public void testGetBooleanValueOfVariable() {\n    String var = \"var\";\n    String val1 = \"Y\";\n    String val2 = \"N\";\n\n    assertTrue( namedCluster.getBooleanValueOfVariable( null, true ) );\n    assertFalse( namedCluster.getBooleanValueOfVariable( null, false ) );\n\n    when( variableSpace.environmentSubstitute( var ) ).thenReturn( val1 ).thenReturn( val2 ).thenReturn( null );\n    assertTrue( namedCluster.getBooleanValueOfVariable( var, false ) );\n    assertFalse( namedCluster.getBooleanValueOfVariable( var, true ) );\n    assertTrue( namedCluster.getBooleanValueOfVariable( var, true ) );\n    assertFalse( namedCluster.getBooleanValueOfVariable( var, false ) );\n  }\n\n  @Test\n  public void testListVariables() {\n    String[] vars = new String[] { \"vars\" };\n    when( variableSpace.listVariables() ).thenReturn( vars );\n    assertArrayEquals( vars, namedCluster.listVariables() );\n  }\n\n  @Test\n  public void testSetVariable() {\n    String var = \"var\";\n    String val = \"val\";\n    namedCluster.setVariable( var, val );\n    verify( variableSpace ).setVariable( var, val );\n  }\n\n  @Test\n  @SuppressWarnings( \"unchecked\" )\n  public void testInjectVariables() {\n    Map<String, String> prop = mock( Map.class );\n    namedCluster.injectVariables( prop );\n    verify( variableSpace ).injectVariables( prop );\n  }\n\n  @Test\n  public void testComparator() {\n    NamedClusterImpl other = new NamedClusterImpl();\n    
other.setName( \"a\" );\n    assertTrue( NamedClusterImpl.comparator.compare( namedCluster, other ) > 0 );\n    other.setName( \"z\" );\n    assertTrue( NamedClusterImpl.comparator.compare( namedCluster, other ) < 0 );\n    other.setName( namedClusterName );\n    assertTrue( NamedClusterImpl.comparator.compare( namedCluster, other ) == 0 );\n  }\n\n  @Test\n  public void testToString() {\n    NamedClusterImpl other = new NamedClusterImpl();\n    assertEquals( \"Named cluster: null\", other.toString() );\n    other.setName( \"a\" );\n    assertEquals( \"Named cluster: a\", other.toString() );\n  }\n\n  @Ignore @Test\n  public void testGenerateURLNullParameters() {\n    namedCluster.setName( null );\n    String scheme = \"testScheme\";\n    buildAppendEncodedUserPassMocks( namedClusterHdfsUsername, namedClusterHdfsPassword );\n    assertEquals(\n      scheme + \"://\" + namedClusterHdfsUsername + \":\" + namedClusterHdfsPassword + \"@\" + namedClusterHdfsHost + \":\"\n        + namedClusterHdfsPort,\n      namedCluster.generateURL( \"testScheme\", metaStore, null ) );\n    assertNull( namedCluster.generateURL( null, metaStore, null ) );\n    assertEquals(\n      scheme + \"://\" + namedClusterHdfsUsername + \":\" + namedClusterHdfsPassword + \"@\" + namedClusterHdfsHost + \":\"\n        + namedClusterHdfsPort,\n      namedCluster.generateURL( \"testScheme\", null, null ) );\n  }\n\n  @Ignore @Test\n  public void testGenerateURLHDFS() {\n    String scheme = \"hdfs\";\n    String testHost = \"testHost\";\n    String testPort = \"9333\";\n    String testUsername = \"testUsername\";\n    String testPassword = \"testPassword\";\n    namedCluster.setHdfsHost( \" \" + testHost + \" \" );\n    namedCluster.setHdfsPort( \" \" + testPort + \" \" );\n    namedCluster.setHdfsUsername( \" \" + testUsername + \" \" );\n    namedCluster.setHdfsPassword( namedCluster.encodePassword( testPassword ) );\n    buildAppendEncodedUserPassMocks( testUsername, namedCluster.encodePassword( 
testPassword ) );\n    assertEquals( scheme + \"://\" + testUsername + \":\" + testPassword + \"@\" + testHost + \":\" + testPort,\n      namedCluster.generateURL( scheme, metaStore, null ) );\n  }\n\n  @Test\n  public void testGenerateURLHDFSPort() {\n    String scheme = \"hdfs\";\n    String testHost = \"testHost\";\n    String testPort = \"9333\";\n    namedCluster.setHdfsHost( \" \" + testHost + \" \" );\n    namedCluster.setHdfsPort( \" \" + testPort + \" \" );\n    namedCluster.setHdfsUsername( null );\n    namedCluster.setHdfsPassword( null );\n    assertEquals( scheme + \"://\" + testHost + \":\" + testPort,\n      namedCluster.generateURL( scheme, metaStore, null ) );\n  }\n\n  @Test\n  public void testCheckHdfsNameEmpty() {\n    String testHost = \"\";\n    namedCluster.setHdfsHost( \" \" + testHost + \" \" );\n    assertEquals( true, namedCluster.isHdfsHostEmpty( null ) );\n  }\n\n  @Test\n  public void testGetHdfsNameParsed() {\n    String testHost = \"test\";\n    namedCluster.setHdfsHost( \" \" + testHost + \" \" );\n    assertEquals( \"test\", namedCluster.getHostNameParsed( null ) );\n  }\n\n  @Test\n  public void testGetHdfsNameParsedFromVariable() {\n    String testHost = \"${hdfsHost}\";\n    namedCluster.setHdfsHost( \" \" + testHost + \" \" );\n    when( variableSpace.getVariable( \"hdfsHost\" ) ).thenReturn( \"test\" );\n    assertEquals( \"test\", namedCluster.getHostNameParsed( variableSpace ) );\n  }\n\n  @Test\n  public void testGetHdfsNameParsedFromVariableNoVariableInSpace() {\n    String testHost = \"${hdfsHost}\";\n    namedCluster.setHdfsHost( \" \" + testHost + \" \" );\n    assertEquals( null, namedCluster.getHostNameParsed( variableSpace ) );\n  }\n\n  @Test\n  public void testCheckHdfsNameNotEmpty() {\n    String testHost = \"test\";\n    namedCluster.setHdfsHost( \" \" + testHost + \" \" );\n    assertEquals( false, namedCluster.isHdfsHostEmpty( null ) );\n  }\n\n  @Test\n  public void testCheckHdfsNameNull() {\n    
namedCluster.setHdfsHost( null );\n    assertEquals( true, namedCluster.isHdfsHostEmpty( null ) );\n  }\n\n  @Test\n  public void testCheckHdfsNameVariableNull() {\n    namedCluster.setHdfsHost( \"${hdfsHost}\" );\n    assertEquals( true, namedCluster.isHdfsHostEmpty( null ) );\n  }\n\n  @Test\n  public void testCheckHdfsNameVariableNotNull() {\n    namedCluster.setHdfsHost( \"${hdfsHost}\" );\n    when( variableSpace.getVariable( \"hdfsHost\" ) ).thenReturn( \"test\" );\n    assertEquals( false, namedCluster.isHdfsHostEmpty( variableSpace ) );\n  }\n\n  @Test\n  public void testProcessURLHostEmpty() {\n    namedCluster.setHdfsHost( null );\n    namedCluster.setStorageScheme( \"hdfs\" );\n    String incomingURL = \"${hdfsUrl}/test\";\n    assertEquals( incomingURL, namedCluster.processURLsubstitution( incomingURL, metaStore, null ) );\n  }\n\n  @Ignore @Test\n  public void testProcessURLhdfsFullSubstitution() {\n    String pathBase = \"//namedClusterHdfsUsername:namedClusterHdfsPassword@hostname:12340\";\n    String filePathInFileSystem = \"/tmp/hdsfDemo.txt\";\n    namedCluster.setHdfsHost( \"hostname\" );\n    namedCluster.setHdfsPort( \"12340\" );\n    namedCluster.setStorageScheme( HDFS_PREFIX );\n    String incomingURL = HDFS_PREFIX + \":\" + pathBase + filePathInFileSystem;\n    buildExtractSchemeMocks( HDFS_PREFIX, incomingURL, pathBase + filePathInFileSystem );\n    assertEquals( incomingURL, namedCluster.processURLsubstitution( incomingURL, metaStore, null ) );\n  }\n\n  @Test\n  public void testProcessURLSubstitution_Gateway() {\n    namedCluster.setUseGateway( true );\n    String incomingURL = \"/path\";\n    String expected = \"hc://\" + namedCluster.getName() + incomingURL;\n    String actual = namedCluster.processURLsubstitution( incomingURL, metaStore, null );\n    assertTrue( \"Expected \" + expected + \" actual \" + actual, expected.equalsIgnoreCase( actual ) );\n  }\n\n  @Ignore @Test\n  public void testProcessURLWASBFullSubstitution() {\n    
String prefix = \"wasb\";\n    String pathBase = \"//namedClusterHdfsUsername:namedClusterHdfsPassword@hostname:12340\";\n    String filePathInFileSystem = \"/tmp/hdsfDemo.txt\";\n    namedCluster.setHdfsHost( \"hostname\" );\n    namedCluster.setHdfsPort( \"12340\" );\n    namedCluster.setStorageScheme( prefix );\n    String incomingURL = prefix + \":\" + pathBase + filePathInFileSystem;\n    buildAppendEncodedUserPassMocks( namedClusterHdfsUsername, namedClusterHdfsPassword );\n    buildExtractSchemeMocks( prefix, incomingURL, pathBase + filePathInFileSystem );\n    assertEquals( incomingURL, namedCluster.processURLsubstitution( incomingURL, metaStore, null ) );\n  }\n\n  @Test\n  public void testProcessURLHostVariableNull() {\n    namedCluster.setHdfsHost( \"${hostUrl}\" );\n    namedCluster.setStorageScheme( \"hdfs\" );\n    String incomingURL = \"${hdfsUrl}/test\";\n    assertEquals( incomingURL, namedCluster.processURLsubstitution( incomingURL, metaStore, null ) );\n  }\n\n  @Test\n  public void testProcessURLHostVariableNotNull() {\n    namedCluster.setHdfsHost( \"${hostUrl}\" );\n    namedCluster.setStorageScheme( HDFS_PREFIX );\n    String hostPort = \"1000\";\n    namedCluster.setHdfsPort( hostPort );\n    namedCluster.setHdfsUsername( \"\" );\n    namedCluster.setHdfsPassword( \"\" );\n    String incomingURL = \"${hdfsUrl}/test\";\n    String hostName = \"test\";\n    when( variableSpace.getVariable( \"hostUrl\" ) ).thenReturn( hostName );\n    when( variableSpace.environmentSubstitute( namedCluster.getHdfsHost() ) ).thenReturn( hostName );\n    when( variableSpace.environmentSubstitute( incomingURL ) ).thenReturn( hostName + \"/test\" );\n    String pathWithoutPrefix = \"//\" + hostName + \":\" + hostPort + \"//hdfsUrl//test\";\n    String pathWithPrefix = HDFS_PREFIX + \":\" + pathWithoutPrefix;\n    buildExtractSchemeMocks( HDFS_PREFIX, pathWithPrefix, pathWithoutPrefix );\n    assertEquals( \"hdfs://\" + hostName + \":\" + hostPort + incomingURL,\n   
   namedCluster.processURLsubstitution( incomingURL, metaStore, variableSpace ) );\n  }\n\n  @Test\n  public void testProcessCompleteClusterVariableReplacement() {\n    String hostname = \"hostname\";\n    String hostPort = \"1000\";\n    String variableName = \"hdfsUrl\";\n    // special case to allow legacy fully qualified urls to work\n    namedCluster.setHdfsHost( hostname );\n    namedCluster.setStorageScheme( HDFS_PREFIX );\n    namedCluster.setHdfsPort( hostPort );\n    namedCluster.setHdfsUsername( \"\" );\n    namedCluster.setHdfsPassword( \"\" );\n    String incomingURL = \"${\" + variableName + \"}/test\";\n    String pathWithoutPrefix = \"//\" + hostname + \":\" + hostPort + \"//\" + variableName + \"//test\";\n    String pathWithPrefix = HDFS_PREFIX + \":\" + pathWithoutPrefix;\n    when( variableSpace.environmentSubstitute( incomingURL ) ).thenReturn( \"hdfs://FullyQualifiedPath/test\" );\n    buildExtractSchemeMocks( HDFS_PREFIX, pathWithPrefix, pathWithoutPrefix );\n    assertEquals( incomingURL, namedCluster.processURLsubstitution( incomingURL, metaStore, variableSpace ) );\n  }\n\n  @Test\n  public void testProcessURLsubstitutionMaprFS_startsWithMaprfs() {\n    String incomingURL = \"maprfs\";\n    namedCluster.setMapr( true );\n    assertEquals( incomingURL, namedCluster.processURLsubstitution( incomingURL, metaStore, null ) );\n  }\n\n  @Test\n  public void testProcessURLsubstitutionMaprFS_startsWithNoMaprfs() {\n    String incomingURL = \"path\";\n    namedCluster.setMapr( true );\n    assertEquals( \"maprfs://\" + incomingURL, namedCluster.processURLsubstitution( incomingURL, metaStore, null ) );\n  }\n\n  @Ignore @Test\n  public void testProcessURLsubstitutionNC() {\n    String prefix = \"hc\";\n    String pathWithoutPrefix = \"//cluster/input/file.txt\";\n    String pathWithPrefix = prefix + \":\" + pathWithoutPrefix;\n    buildAppendEncodedUserPassMocks( namedClusterHdfsUsername, namedClusterHdfsPassword );\n    buildExtractSchemeMocks( 
prefix, pathWithPrefix, pathWithoutPrefix );\n    assertEquals( \"hdfs://namedClusterHdfsUsername:namedClusterHdfsPassword@namedClusterHdfsHost:12345/input/file.txt\",\n      namedCluster.processURLsubstitution( \"hc://cluster/input/file.txt\", metaStore, null ) );\n  }\n\n  @Ignore @Test\n  public void testProcessURLSubstitutionNC_variable() {\n    String pathWithoutPrefix = \"//\" + namedClusterHdfsUsername + \":\" + namedClusterHdfsPassword + \"@\"\n      + namedClusterHdfsHost + \":\" + namedClusterHdfsPort + \"//ncUrl//test\";\n    String pathWithPrefix = HDFS_PREFIX + \":\" + pathWithoutPrefix;\n    String incomingURL = \"${ncUrl}/test\";\n    when( variableSpace.environmentSubstitute( incomingURL ) ).thenReturn( \"hc://cluster/test\" );\n    buildAppendEncodedUserPassMocks( namedClusterHdfsUsername, namedClusterHdfsPassword );\n    buildExtractSchemeMocks( HDFS_PREFIX, pathWithPrefix, pathWithoutPrefix );\n    assertEquals( incomingURL, namedCluster.processURLsubstitution( incomingURL, metaStore, variableSpace ) );\n  }\n\n  @Test\n  public void testGenerateURLHDFSNoPort() {\n    String scheme = \"hdfs\";\n    String testHost = \"testHost\";\n    namedCluster.setHdfsHost( \" \" + testHost + \" \" );\n    namedCluster.setHdfsPort( null );\n    namedCluster.setHdfsUsername( null );\n    namedCluster.setHdfsPassword( null );\n    assertEquals( scheme + \"://\" + testHost, namedCluster.generateURL( scheme, metaStore, null ) );\n  }\n\n  @Ignore @Test\n  public void testGenerateURLHDFSVariableSpace() {\n    String schemeVar = \"schemeVar\";\n    String testScheme = \"hdfs\";\n    String hostVar = \"hostVar\";\n    String testHost = \"testHost\";\n    String portVar = \"portVar\";\n    String testPort = \"9333\";\n    String usernameVar = \"usernameVar\";\n    String testUsername = \"testUsername\";\n    String passwordVar = \"passwordVar\";\n    String testPassword = \"testPassword\";\n    namedCluster.setStorageScheme( \"${\" + schemeVar + \"}\" );\n    
namedCluster.setHdfsHost( \"${\" + hostVar + \"}\" );\n    namedCluster.setHdfsPort( \"${\" + portVar + \"}\" );\n    namedCluster.setHdfsUsername( \"${\" + usernameVar + \"}\" );\n    namedCluster.setHdfsPassword( \"${\" + passwordVar + \"}\" );\n    when( variableSpace.getVariable( schemeVar ) ).thenReturn( testScheme );\n    when( variableSpace.getVariable( hostVar ) ).thenReturn( testHost );\n    when( variableSpace.getVariable( portVar ) ).thenReturn( testPort );\n    when( variableSpace.getVariable( usernameVar ) ).thenReturn( testUsername );\n    when( variableSpace.getVariable( passwordVar ) ).thenReturn( testPassword );\n    when( variableSpace.environmentSubstitute( namedCluster.getStorageScheme() ) ).thenReturn( testScheme );\n    when( variableSpace.environmentSubstitute( namedCluster.getHdfsHost() ) ).thenReturn( testHost );\n    when( variableSpace.environmentSubstitute( namedCluster.getHdfsPort() ) ).thenReturn( testPort );\n    when( variableSpace.environmentSubstitute( namedCluster.getHdfsUsername() ) ).thenReturn( testUsername );\n    when( variableSpace.environmentSubstitute( namedCluster.getHdfsPassword() ) ).thenReturn( testPassword );\n    buildAppendEncodedUserPassMocks( testUsername, testPassword );\n    assertEquals( testScheme + \"://\" + testUsername + \":\" + testPassword + \"@\" + testHost + \":\" + testPort,\n      namedCluster.generateURL( \"${\" + schemeVar + \"}\", metaStore, variableSpace ) );\n  }\n\n  @Test\n  public void testGenerateURLHDFSVariableSpace_noVariable() {\n    String scheme = \"hdfs\";\n    String hostVar = \"hostVar\";\n    String portVar = \"portVar\";\n    String usernameVar = \"usernameVar\";\n    String passwordVar = \"passwordVar\";\n    namedCluster.setStorageScheme( \"${\" + scheme + \"}\" );\n    namedCluster.setHdfsHost( \"${\" + hostVar + \"}\" );\n    namedCluster.setHdfsPort( \"${\" + portVar + \"}\" );\n    namedCluster.setHdfsUsername( \"${\" + usernameVar + \"}\" );\n    namedCluster.setHdfsPassword( 
\"${\" + passwordVar + \"}\" );\n    assertEquals( scheme + \":\", namedCluster.generateURL( scheme, metaStore, variableSpace ) );\n  }\n\n  @Test\n  public void testXMLEmbedding() throws Exception {\n    Element node = createNodeFromNamedCluster();\n\n    NamedCluster nc = new NamedClusterImpl();\n    nc = nc.fromXmlForEmbed( node );\n\n    assertNamedClusterEquality( nc );\n  }\n\n  @Test\n  public void testLegacyXMLEmbedding() throws Exception {\n    Element node = createNodeFromNamedCluster();\n\n    XPath xPath = XPathFactory.newInstance().newXPath();\n    //Find the node containing the hdfsPassword\n    Node n = ( (Node) xPath.evaluate( \"/NamedCluster/child/id[text()='hdfsPassword']\", node, XPathConstants.NODE ) )\n      .getNextSibling();\n    //Set the password value to what it would be if we were still encoding the legacy way\n    ITwoWayPasswordEncoder passwordEncoder = new Base64TwoWayPasswordEncoder();\n    n.setTextContent( passwordEncoder.encode( namedCluster.getHdfsPassword() ) );\n\n    //Now check that we can still decode it\n    NamedCluster nc = new NamedClusterImpl();\n    nc = nc.fromXmlForEmbed( node );\n\n    assertNamedClusterEquality( nc );\n  }\n\n  private Element createNodeFromNamedCluster() throws Exception {\n    String clusterXml = namedCluster.toXmlForEmbed( \"NamedCluster\" );\n    System.out.println( clusterXml );\n\n    return DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(\n        new ByteArrayInputStream( clusterXml.getBytes() ) )\n        .getDocumentElement();\n  }\n\n  private void assertNamedClusterEquality( NamedCluster nc ) {\n\n    assertEquals( namedCluster.getHdfsHost(), nc.getHdfsHost() );\n    assertEquals( namedCluster.getHdfsPort(), nc.getHdfsPort() );\n    assertEquals( namedCluster.getHdfsUsername(), nc.getHdfsUsername() );\n    assertEquals( namedCluster.getHdfsPassword(), nc.getHdfsPassword() );\n    assertEquals( namedCluster.getName(), nc.getName() );\n    assertEquals( 
namedCluster.getShimIdentifier(), nc.getShimIdentifier() );\n    assertEquals( namedCluster.getStorageScheme(), nc.getStorageScheme() );\n    assertEquals( namedCluster.getJobTrackerHost(), nc.getJobTrackerHost() );\n    assertEquals( namedCluster.getJobTrackerPort(), nc.getJobTrackerPort() );\n    assertEquals( namedCluster.getZooKeeperHost(), nc.getZooKeeperHost() );\n    assertEquals( namedCluster.getZooKeeperPort(), nc.getZooKeeperPort() );\n    assertEquals( namedCluster.getOozieUrl(), nc.getOozieUrl() );\n    assertEquals( namedCluster.getKafkaBootstrapServers(), nc.getKafkaBootstrapServers() );\n    assertEquals( namedCluster.getLastModifiedDate(), nc.getLastModifiedDate() );\n    assertEquals( namedCluster.getSiteFiles().size(), nc.getSiteFiles().size() );\n    for ( NamedClusterSiteFile siteFile : namedCluster.getSiteFiles() ) {\n      String contents = getSiteFileContents( nc, siteFile.getSiteFileName() );\n      assertEquals( siteFile.getSiteFileContents(), contents );\n      if ( \"hbase-site.xml\".equals( siteFile.getSiteFileName() ) ) {\n        assertEquals( 11111L, siteFile.getSourceFileModificationTime() );\n      }\n    }\n  }\n\n  private Answer buildSchemeAnswer( String prefix, String buildPath ) {\n    return invocation -> {\n      Object[] args = invocation.getArguments();\n      ( (StringBuilder) args[2] ).append( buildPath );\n      return prefix;\n    };\n  }\n\n  private Answer buildUrlEncodeAnswer( String value ) {\n    return invocation -> {\n      Object[] args = invocation.getArguments();\n      ( (StringBuilder) args[0] ).append( (String) args[1] );\n      return null;\n    };\n  }\n\n  private void buildExtractSchemeMocks( String prefix, String fullPath, String pathWithoutPrefix ) {\n    String[] schemes = { \"hc\", \"hdfs\", \"maprfs\", \"wasb\" };\n    when( fsm.getSchemes() ).thenReturn( schemes );\n    uriParserMockedStatic.when( () -> UriParser.extractScheme( eq( schemes ), eq( fullPath ), or( isNull(), any( StringBuilder.class 
) ) ) )\n      .thenAnswer( buildSchemeAnswer( prefix, pathWithoutPrefix ) );\n  }\n\n  private void buildAppendEncodedUserPassMocks( String username, String password ) {\n    uriParserMockedStatic.when( () -> UriParser.appendEncoded( or( isNull(), any( StringBuilder.class ) ), eq( username ), any( char[].class ) ) )\n        .thenAnswer( buildUrlEncodeAnswer( username ) );\n    uriParserMockedStatic.when( () -> UriParser.appendEncoded( or( isNull(), any( StringBuilder.class ) ), eq( password ), any( char[].class ) ) )\n      .thenAnswer( buildUrlEncodeAnswer( password ) );\n  }\n\n  private String getSiteFileContents( NamedCluster nc, String siteFileName ) {\n    NamedClusterSiteFile n = nc.getSiteFiles().stream().filter( sf -> sf.getSiteFileName().equals( siteFileName ) )\n      .findFirst().orElse( null );\n    return n == null ? null : n.getSiteFileContents();\n  }\n}\n"
  },
  {
    "path": "impl/cluster/src/test/java/org/pentaho/big/data/impl/cluster/NamedClusterManagerTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.di.core.attributes.metastore.EmbeddedMetaStore;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.encryption.TwoWayPasswordEncoderPluginType;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.plugins.LifecyclePluginType;\nimport org.pentaho.di.core.plugins.PluginInterface;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.metastore.persist.MetaStoreFactory;\nimport org.pentaho.metastore.stores.delegate.DelegatingMetaStore;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.net.MalformedURLException;\nimport java.net.URL;\nimport java.nio.file.Path;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNotEquals;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.*;\nimport static org.mockito.Mockito.doReturn;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.never;\nimport static org.mockito.Mockito.spy;\nimport static 
org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 7/14/15.\n */\npublic class NamedClusterManagerTest {\n  private IMetaStore metaStore;\n  private MetaStoreFactory<NamedClusterImpl> metaStoreFactory;\n  private NamedClusterManager namedClusterManager;\n  private PluginInterface mockBigDataPlugin;\n  private Path tempDirectoryName;\n\n  @Before\n  @SuppressWarnings( \"unchecked\" )\n  public void setup() throws KettleException, IOException {\n    PluginRegistry.addPluginType( TwoWayPasswordEncoderPluginType.getInstance() );\n    PluginRegistry.init( false );\n    Encr.init( \"Kettle\" );\n    KettleLogStore.init();\n    metaStore = mock( IMetaStore.class );\n    metaStoreFactory = mock( MetaStoreFactory.class );\n    namedClusterManager = new NamedClusterManager();\n\n    // the protected method NamedClusterManager.getMetaStoreFactory() will always create a new Factory\n    // by reading xml from local store.  
For these tests, create a Mockito spy that will always return the mock\n    // MetaStore factory\n    namedClusterManager = spy( namedClusterManager );\n    doReturn( metaStoreFactory ).when( namedClusterManager ).getMetaStoreFactory( metaStore );\n\n    namedClusterManager.putMetaStoreFactory( metaStore, metaStoreFactory );\n\n    mockBigDataPlugin = mock( PluginInterface.class );\n    when( mockBigDataPlugin.getIds() ).thenReturn( new String[] { \"HadoopSpoonPlugin\" } );\n    when( mockBigDataPlugin.matches( \"HadoopSpoonPlugin\" ) ).thenReturn( true );\n    PluginRegistry.getInstance().registerPlugin( LifecyclePluginType.class, mockBigDataPlugin );\n  }\n\n  private boolean deleteDirectory( File directoryToBeDeleted ) {\n    File[] allContents = directoryToBeDeleted.listFiles();\n    if ( allContents != null ) {\n      for ( File file : allContents ) {\n        deleteDirectory( file );\n      }\n    }\n    return directoryToBeDeleted.delete();\n  }\n\n  @Test\n  public void testGetClusterTemplate() {\n    NamedCluster clusterTemplate = namedClusterManager.getClusterTemplate();\n    assertFalse( clusterTemplate == namedClusterManager.getClusterTemplate() );\n    assertTrue( clusterTemplate.equals( namedClusterManager.getClusterTemplate() ) );\n    NamedCluster template = mock( NamedCluster.class );\n    NamedCluster clone = mock( NamedCluster.class );\n    when( template.clone() ).thenReturn( clone );\n    namedClusterManager.setClusterTemplate( template );\n    assertEquals( clone, namedClusterManager.getClusterTemplate() );\n  }\n\n  @Test\n  public void testCreate() throws MetaStoreException {\n    NamedClusterImpl namedCluster = new NamedClusterImpl();\n    String testName = \"testName\";\n    namedCluster.setName( testName );\n    namedClusterManager.create( namedCluster, metaStore );\n    verify( metaStoreFactory ).saveElement( eq( namedCluster ) );\n  }\n\n  @Test\n  public void testRead() throws MetaStoreException {\n    String testName = \"testName\";\n 
   NamedClusterImpl namedCluster = new NamedClusterImpl();\n    when( metaStoreFactory.loadElement( testName ) ).thenReturn( namedCluster );\n    assertTrue( namedCluster == namedClusterManager.read( testName, metaStore ) );\n  }\n\n  @Test\n  public void testUpdate() throws MetaStoreException {\n    NamedClusterImpl namedCluster = new NamedClusterImpl();\n    String testName = \"testName\";\n    namedCluster.setName( testName );\n    List namedClusters = new ArrayList<>( Arrays.asList( namedCluster ) );\n    when( metaStoreFactory.getElements( true ) ).thenReturn( namedClusters ).thenReturn( namedClusters ).thenThrow(\n      new MetaStoreException() );\n    NamedClusterImpl updatedNamedCluster = new NamedClusterImpl();\n    updatedNamedCluster.setName( testName + \"updated\" );\n    namedClusterManager.update( updatedNamedCluster, metaStore );\n  }\n\n  @Test\n  public void testDeleteElement() throws MetaStoreException {\n    String testName = \"testName\";\n    namedClusterManager.delete( testName, metaStore );\n    verify( metaStoreFactory ).deleteElement( testName );\n  }\n\n  @Test\n  public void testList() throws MetaStoreException {\n    NamedClusterImpl namedCluster = new NamedClusterImpl();\n    namedCluster.setName( \"testName\" );\n    List<NamedClusterImpl> value = new ArrayList<>( Arrays.asList( namedCluster ) );\n    when( metaStoreFactory.getElements( anyBoolean(), any( List.class ) ) ).thenReturn( value );\n    assertEquals( value, namedClusterManager.list( metaStore ) );\n  }\n\n  @Test\n  public void testListNames() throws MetaStoreException {\n    List<String> names = new ArrayList<>( Arrays.asList( \"testName\" ) );\n    when( metaStoreFactory.getElementNames( false ) ).thenReturn( names );\n    assertEquals( names, namedClusterManager.listNames( metaStore ) );\n  }\n\n  @Test\n  public void testListNames_emptymetaStoreFactory() throws MetaStoreException {\n    IMetaStore metaStore = mock( IMetaStore.class );\n    List<String> expectedNames = 
new ArrayList<>();\n    assertEquals( expectedNames, namedClusterManager.listNames( metaStore ) );\n    // verify after the call: the mocked factory must never have been consulted for this metastore\n    verify( metaStoreFactory, never() ).getElementNames();\n  }\n\n  @Test\n  public void testContains() throws MetaStoreException {\n    String testName = \"testName\";\n    List<String> names = new ArrayList<>( Arrays.asList( testName ) );\n    when( metaStoreFactory.getElementNames( false ) ).thenReturn( names );\n    assertFalse( namedClusterManager.contains( testName, null ) );\n    assertTrue( namedClusterManager.contains( testName, metaStore ) );\n    assertFalse( namedClusterManager.contains( \"testName2\", metaStore ) );\n  }\n\n  @Test\n  public void testContainsSlaveServer() throws MalformedURLException, MetaStoreException {\n    String pluginFilePath = getClass().getResource( \"/plugin.properties\" ).getFile();\n    String resourceDir = pluginFilePath.substring( 0, pluginFilePath.lastIndexOf( \"/\" ) );\n    when( mockBigDataPlugin.getPluginDirectory() ).thenReturn( new URL( \"file://\" + resourceDir ) );\n    String testName = \"testName\";\n    assertFalse( namedClusterManager.contains( testName, null ) );\n    verify( namedClusterManager, times( 1 ) ).getSlaveServerMetastore();\n  }\n\n  @Test\n  @SuppressWarnings( \"unchecked\" )\n  public void testGetNamedClusterByName() throws MetaStoreException {\n    String testName = \"testName\";\n    NamedCluster namedCluster = mock( NamedCluster.class );\n    when( namedCluster.getName() ).thenReturn( testName );\n    List namedClusters = new ArrayList<>( Arrays.asList( namedCluster ) );\n    when( metaStoreFactory.getElements( anyBoolean(), any( List.class ) ) ).thenReturn( namedClusters )\n      .thenReturn( namedClusters ).thenThrow( new MetaStoreException() );\n    assertNull( namedClusterManager.getNamedClusterByName( testName, null ) );\n    assertEquals( namedCluster, namedClusterManager.getNamedClusterByName( testName, metaStore ) );\n    assertNull( namedClusterManager.getNamedClusterByName( 
\"fakeName\", metaStore ) );\n    assertNull( namedClusterManager.getNamedClusterByName( testName, metaStore ) );\n  }\n\n  @Test\n  @SuppressWarnings( \"unchecked\" )\n  public void testGetNamedClusterByHost() throws MetaStoreException {\n    String testName = \"testName\";\n    String testHostName = \"testHostName\";\n    NamedCluster namedCluster = mock( NamedCluster.class );\n    when( namedCluster.getName() ).thenReturn( testName );\n    when( namedCluster.getHdfsHost() ).thenReturn( testHostName );\n    List namedClusters = new ArrayList<>( Arrays.asList( namedCluster ) );\n    when( metaStoreFactory.getElements( anyBoolean(), any( List.class ) ) ).thenReturn( namedClusters )\n      .thenReturn( namedClusters ).thenThrow( new MetaStoreException() );\n    assertNull( namedClusterManager.getNamedClusterByHost( testHostName, null ) );\n    assertEquals( namedCluster, namedClusterManager.getNamedClusterByHost( testHostName, metaStore ) );\n    assertNull( namedClusterManager.getNamedClusterByHost( \"fakeName\", metaStore ) );\n    assertNull( namedClusterManager.getNamedClusterByHost( testHostName, metaStore ) );\n  }\n\n  @Test\n  public void testGetMetaStoreFactoryEmbeddedMetaStoreSuccess() throws MetaStoreException {\n    NamedClusterManager namedClusterManager = new NamedClusterManager();\n    MetaStoreFactory<NamedClusterImpl> metaStoreFactoryFirst = null;\n    MetaStoreFactory<NamedClusterImpl> metaStoreFactorySecond = null;\n\n    EmbeddedMetaStore embeddedMetaStore = mock( EmbeddedMetaStore.class );\n\n    // get the metastore factory - the first time called, it should create a new one and cache it\n    metaStoreFactoryFirst = namedClusterManager.getMetaStoreFactory( embeddedMetaStore );\n\n    // get the metastore factory again - this time it should return the same instance as the first (the cached instance)\n    metaStoreFactorySecond = namedClusterManager.getMetaStoreFactory( embeddedMetaStore );\n\n    assertNotNull( \"metaStoreFactoryFirst is 
expected to NOT be null\", metaStoreFactoryFirst );\n    assertNotNull( \"metaStoreFactorySecond is expected to NOT be null\", metaStoreFactorySecond );\n    assertEquals( \"Called NamedClusterManager.getMetaStoreFactory twice, passing in the same EmbeddedMetaStore.  \"\n        + \"Both calls should return the same instance of MetaStoreFactory\", metaStoreFactoryFirst,\n      metaStoreFactorySecond );\n  }\n\n  @Test\n  public void testGetMetaStoreFactoryNonEmbeddedMetaStore() throws MetaStoreException {\n    NamedClusterManager namedClusterManager = new NamedClusterManager();\n    MetaStoreFactory<NamedClusterImpl> metaStoreFactoryFirst = null;\n    MetaStoreFactory<NamedClusterImpl> metaStoreFactorySecond = null;\n\n    DelegatingMetaStore nonEmbeddedMetaStore = mock( DelegatingMetaStore.class );\n\n    // get the metastore factory - for a non-EmbeddedMetaStore, a new instance should be created on each call\n    metaStoreFactoryFirst = namedClusterManager.getMetaStoreFactory( nonEmbeddedMetaStore );\n\n    // get the metastore factory again - this time it should return a different instance than the first (not cached)\n    metaStoreFactorySecond = namedClusterManager.getMetaStoreFactory( nonEmbeddedMetaStore );\n\n    assertNotNull( \"metaStoreFactoryFirst is expected to NOT be null\", metaStoreFactoryFirst );\n    assertNotNull( \"metaStoreFactorySecond is expected to NOT be null\", metaStoreFactorySecond );\n    assertNotEquals(
\n      \"Called NamedClusterManager.getMetaStoreFactory twice, passing in the same non EmbeddedMetaStore.  \"\n        + \"Both calls should return different instances of MetaStoreFactory\", metaStoreFactoryFirst,\n      metaStoreFactorySecond );\n  }\n\n  @Test\n  public void testUpdateNamedClusterTemplate() {\n    namedClusterManager.getClusterTemplate();\n    namedClusterManager.updateNamedClusterTemplate( \"testHostName\", 9999, true );\n    assertEquals( \"testHostName\", namedClusterManager.getClusterTemplate().getHdfsHost() );\n    assertEquals( \"9999\", namedClusterManager.getClusterTemplate().getHdfsPort() );\n    assertTrue( namedClusterManager.getClusterTemplate().isMapr() );\n  }\n}\n"
  },
  {
    "path": "impl/cluster/src/test/java/org/pentaho/big/data/impl/cluster/NamedClusterMetastoreIT.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.impl.cluster;\n\nimport org.apache.commons.io.FileUtils;\nimport org.apache.commons.vfs2.impl.StandardFileSystemManager;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.encryption.TwoWayPasswordEncoderPluginType;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.KettleLoggingEvent;\nimport org.pentaho.di.core.logging.KettleLoggingEventListener;\nimport org.pentaho.di.core.osgi.api.NamedClusterSiteFile;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.metastore.stores.xml.XmlMetaStore;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.LinkedList;\nimport java.util.List;\nimport java.util.UUID;\n\nimport static org.junit.Assert.*;\n\npublic class NamedClusterMetastoreIT {\n  private static final String HDFS_PREFIX = \"hdfs\";\n\n  private VariableSpace variableSpace;\n  private NamedClusterImpl namedCluster;\n\n  private String namedClusterName;\n  private String namedClusterHdfsHost;\n  private String namedClusterHdfsPort;\n  private String namedClusterHdfsUsername;\n  private String namedClusterHdfsPassword;\n  
private String namedClusterJobTrackerPort;\n  private String namedClusterJobTrackerHost;\n  private String namedClusterZookeeperHost;\n  private String namedClusterZookeeperPort;\n  private String namedClusterOozieUrl;\n  private String namedClusterStorageScheme;\n  private String namedClusterKafkaBootstrapServers;\n  private boolean isMapr;\n  private IMetaStore metaStore;\n  private StandardFileSystemManager fsm;\n  private NamedClusterService namedClusterService;\n  private String fileContents1;\n  private String metastoreRootFolder;\n  private KettleLoggingEventListener kettleLoggingEventListener;\n  private LinkedList<KettleLoggingEvent> loggingEventList;\n\n  @Before\n  public void setup() throws Exception {\n    PluginRegistry.addPluginType( TwoWayPasswordEncoderPluginType.getInstance() );\n    PluginRegistry.init( false );\n\n    loggingEventList = new LinkedList<>();\n    kettleLoggingEventListener = new KettleLoggingEventListener() {\n      @Override public void eventAdded( KettleLoggingEvent event ) {\n        loggingEventList.add( event );\n      }\n    };\n    KettleLogStore.init();\n    KettleLogStore.getAppender().addLoggingEventListener( kettleLoggingEventListener );\n    Encr.init( \"Kettle\" );\n    fileContents1 =\n      FileUtils.readFileToString( new File( getClass().getResource( \"/core-site.xml\" ).getFile() ), \"UTF-8\" );\n\n    metastoreRootFolder = System.getProperty( \"java.io.tmpdir\" ) + File.separator + UUID.randomUUID();\n    metaStore = new XmlMetaStore( metastoreRootFolder );\n    variableSpace = new Variables();\n    namedCluster = new NamedClusterImpl();\n    namedCluster.shareVariablesWith( variableSpace );\n    namedClusterName = \"namedClusterName\";\n    namedClusterHdfsHost = \"namedClusterHdfsHost\";\n    namedClusterHdfsPort = \"12345\";\n    namedClusterHdfsUsername = \"namedClusterHdfsUsername\";\n    namedClusterHdfsPassword = \"namedClusterHdfsPassword\";\n    namedClusterJobTrackerHost = 
\"namedClusterJobTrackerHost\";\n    namedClusterJobTrackerPort = \"namedClusterJobTrackerPort\";\n    namedClusterZookeeperHost = \"namedClusterZookeeperHost\";\n    namedClusterZookeeperPort = \"namedClusterZookeeperPort\";\n    namedClusterOozieUrl = \"namedClusterOozieUrl\";\n    namedClusterStorageScheme = \"hdfs\";\n    namedClusterKafkaBootstrapServers = \"kafkaBootstrapServers\";\n    isMapr = true;\n\n    namedCluster.setName( namedClusterName );\n    namedCluster.setHdfsHost( namedClusterHdfsHost );\n    namedCluster.setHdfsPort( namedClusterHdfsPort );\n    namedCluster.setHdfsUsername( namedClusterHdfsUsername );\n    namedCluster.setHdfsPassword( namedCluster.encodePassword( namedClusterHdfsPassword ) );\n    namedCluster.setJobTrackerHost( namedClusterJobTrackerHost );\n    namedCluster.setJobTrackerPort( namedClusterJobTrackerPort );\n    namedCluster.setZooKeeperHost( namedClusterZookeeperHost );\n    namedCluster.setZooKeeperPort( namedClusterZookeeperPort );\n    namedCluster.setOozieUrl( namedClusterOozieUrl );\n    namedCluster.setMapr( isMapr );\n    namedCluster.setStorageScheme( namedClusterStorageScheme );\n    namedCluster.setKafkaBootstrapServers( namedClusterKafkaBootstrapServers );\n    namedCluster.addSiteFile( \"core-site.xml\", fileContents1 );\n    namedCluster.addSiteFile( \"fileName2\", \"fileContents2\" );\n\n    namedClusterService = new NamedClusterManager( );\n  }\n  @Test\n  public void testWriteAndRead() throws Exception {\n    namedClusterService.create( namedCluster, metaStore );\n    NamedCluster nc = namedClusterService.getNamedClusterByName( namedClusterName, metaStore );\n    assertEquals( namedClusterName, nc.getName() );\n    assertEquals( fileContents1, getSiteFileContents( nc, \"core-site.xml\" ) );\n  }\n\n  @Test\n  public void testAutoEmbedSiteFiles() throws Exception {\n    commonAutoEmbedSetupLogic();\n\n    namedClusterService.create( namedCluster, metaStore );\n    NamedCluster nc = 
namedClusterService.getNamedClusterByName( namedClusterName, metaStore );\n    assertEquals( namedClusterName, nc.getName() );\n    assertEquals( fileContents1, getSiteFileContents( nc, \"core-site.xml\" ) );\n\n  }\n\n  @Test\n  public void testAutoEmbedWhenUpdateMetastoreAndRecoveryFails() throws Exception {\n    commonAutoEmbedSetupLogic();\n    NamedClusterService disabledNamedClusterService = new NamedClusterManager() {\n      @Override public void update( NamedCluster namedCluster, IMetaStore metastore ) throws MetaStoreException {\n        throw new MetaStoreException( \"Something bad happened\" );\n      }\n    };\n    namedClusterService = disabledNamedClusterService;\n    namedClusterService.create( namedCluster, metaStore );\n\n    NamedCluster nc = namedClusterService.getNamedClusterByName( namedClusterName, metaStore );\n    assertEquals( 4, loggingEventList.size() );\n  }\n\n  @Test\n  public void testAutoEmbedWhenUpdateMetastoreFails() throws Exception {\n    commonAutoEmbedSetupLogic();\n    NamedClusterService disabledNamedClusterService = new NamedClusterManager() {\n      private int counter;\n\n      @Override public void update( NamedCluster namedCluster, IMetaStore metastore ) throws MetaStoreException {\n        counter++;\n        if ( counter == 1 ) {\n          //Force the first update (when we try to add the site files) to fail\n          throw new MetaStoreException( \"Something bad happened\" );\n        } else {\n          //Thereafter the recovery update works\n          super.update( namedCluster, metastore );\n        }\n      }\n    };\n    namedClusterService = disabledNamedClusterService;\n    namedClusterService\n      .create( namedCluster, metaStore ); //Create the namedCluster (without site files) in the metastore\n\n    NamedCluster nc = namedClusterService.getNamedClusterByName( namedClusterName, metaStore );\n\n    assertEquals( 2, loggingEventList.size() );\n    assertEquals( namedClusterName, nc.getName() );\n    assert 
( nc.getSiteFiles().isEmpty() );\n  }\n\n  private void commonAutoEmbedSetupLogic() throws IOException {\n    namedCluster.setSiteFiles( new ArrayList<NamedClusterSiteFile>() ); //No site files in named cluster\n    File destFile = new File(\n      metastoreRootFolder + File.separator + \"metastore\" + File.separator + \"pentaho\" + File.separator + \"NamedCluster\"\n        + File.separator + \"Configs\" + File.separator + \"namedClusterName\" + File.separator + \"core-site.xml\" );\n    destFile.getParentFile().mkdirs();\n    //Put a site file out on the metastore\n    FileUtils.copyFile( new File( getClass().getResource( \"/core-site.xml\" ).getFile() ), destFile );\n  }\n\n\n  private String getSiteFileContents( NamedCluster nc, String siteFileName ) {\n    NamedClusterSiteFile n = nc.getSiteFiles().stream().filter( sf -> sf.getSiteFileName().equals( siteFileName ) )\n      .findFirst().orElse( null );\n    return n == null ? null : n.getSiteFileContents();\n  }\n\n  @Test\n  public void testCorruptedFileWithList() throws Exception {\n    NamedClusterImpl corruptedNamedCluster = new NamedClusterImpl( namedCluster );\n    final String corruptedName = \"corruptedNamedCluster\";\n    corruptedNamedCluster.setName( corruptedName );\n    corruptedNamedCluster.addSiteFile( \"core-site.xml\", Character.toString( (char) 5 ) ); //Make the site file corrupt\n\n    namedClusterService.create( namedCluster, metaStore ); //Write the good one ...\n    namedClusterService.create( corruptedNamedCluster, metaStore ); //... 
and the bad one\n\n    //We should not get an error when we try to get the cluster by name because it uses a tolerant list\n    //The list must be tolerant or the good clusters will never be returned.\n    assertNotNull( namedClusterService.getNamedClusterByName( namedClusterName, metaStore ) );\n    assertNull( namedClusterService.getNamedClusterByName( corruptedName, metaStore ) );\n    List<MetaStoreException> exceptionList = new ArrayList<MetaStoreException>();\n\n    //Getting the list with a non-null exceptionList is tolerant of corrupt entries.\n    List<NamedCluster> namedClusterList = namedClusterService.list( metaStore, exceptionList );\n    //The list contains the good cluster only\n    assertEquals( 1, namedClusterList.size() );\n    assertEquals( namedCluster, namedClusterList.get( 0 ) );\n    assertEquals( 1, exceptionList.size() );\n    assert\n      ( exceptionList.get( 0 ).getMessage().contains( \"Could not load metaStore element '\" + corruptedName + \"'\" ) );\n\n    //Even if we didn't ask for the exception list, NamedClusters should still be tolerant, even if the metastore would\n    // not be.\n    namedClusterService.list( metaStore );\n  }\n}\n"
  },
  {
    "path": "impl/cluster/src/test/resources/core-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--Autogenerated by Cloudera Manager-->\n<configuration>\n  <property>\n    <name>fs.defaultFS</name>\n    <value>hdfs://CDH61Secure</value>\n  </property>\n  <property>\n    <name>fs.trash.interval</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>io.compression.codecs</name>\n    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,org.apache.hadoop.io.compress.SnappyCodec,org.apache.hadoop.io.compress.Lz4Codec</value>\n  </property>\n  <property>\n    <name>hadoop.security.authentication</name>\n    <value>kerberos</value>\n  </property>\n  <property>\n    <name>hadoop.security.authorization</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hadoop.rpc.protection</name>\n    <value>privacy</value>\n  </property>\n  <!--'hadoop.security.key.provider.path', originally set to 'kms://https@svqxobcdh61secn1.pentaho.net:16000/kms' (non-final), is overridden below by a safety valve-->\n  <property>\n    <name>hadoop.security.auth_to_local</name>\n    <value>DEFAULT</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.oozie.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.oozie.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.flume.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.flume.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.HTTP.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.HTTP.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hive.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hive.groups</name>\n    <value>*</value>\n  
</property>\n  <property>\n    <name>hadoop.proxyuser.hue.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hue.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.httpfs.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.httpfs.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hdfs.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hdfs.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.yarn.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.yarn.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.security.group.mapping</name>\n    <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>\n  </property>\n  <property>\n    <name>hadoop.security.instrumentation.requires.admin</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>net.topology.script.file.name</name>\n    <value>/etc/hadoop/conf.cloudera.yarn/topology.py</value>\n  </property>\n  <property>\n    <name>io.file.buffer.size</name>\n    <value>65536</value>\n  </property>\n  <property>\n    <name>hadoop.ssl.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hadoop.ssl.require.client.cert</name>\n    <value>false</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.keystores.factory.class</name>\n    <value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.server.conf</name>\n    <value>ssl-server.xml</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.client.conf</name>\n    <value>ssl-client.xml</value>\n    <final>true</final>\n  </property>\n  <property>\n    
<name>hadoop.security.key.provider.path</name>\n    <value>kms://https@svqxobcdh61secn1.pentaho.net:16000/kms</value>\n  </property>\n</configuration>\n"
  },
  {
    "path": "impl/cluster/src/test/resources/plugin.properties",
    "content": "big.data.slave.metastore.dir="
  },
  {
    "path": "impl/clusterTests/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-impl</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-impl-clusterTests</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>jar</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-api-runtimeTest</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.kafka</groupId>\n      <artifactId>kafka-clients</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>metastore</artifactId>\n      <version>${metastore.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      
<artifactId>mockito-all</artifactId>\n      <version>${dependency.mockito.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>com.google.code.bean-matchers</groupId>\n      <artifactId>bean-matchers</artifactId>\n      <version>${dependency.bean-matchers.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-api-runtimeTest</artifactId>\n      <version>${project.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/ClusterRuntimeTestEntry.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests;\n\nimport org.pentaho.runtime.test.action.RuntimeTestAction;\nimport org.pentaho.runtime.test.action.impl.HelpUrlPayload;\nimport org.pentaho.runtime.test.action.impl.RuntimeTestActionImpl;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.pentaho.runtime.test.test.impl.RuntimeTestResultEntryImpl;\nimport org.pentaho.di.core.Const;\n\n/**\n * This is a convenience class that will add a shim troubleshooting guide action if none is specified and the severity\n * is >= WARNING\n */\npublic class ClusterRuntimeTestEntry extends RuntimeTestResultEntryImpl {\n  public static final String RUNTIME_TEST_RESULT_ENTRY_WITH_DEFAULT_SHIM_HELP_TROUBLESHOOTING_GUIDE =\n    \"RuntimeTestResultEntryWithDefaultShimHelp.TroubleshootingGuide\";\n  public static final String RUNTIME_TEST_RESULT_ENTRY_WITH_DEFAULT_SHIM_HELP_SHELL_DOC =\n    \"RuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc\";\n  public static final String RUNTIME_TEST_RESULT_ENTRY_WITH_DEFAULT_SHIM_HELP_SHELL_DOC_TITLE =\n    \"RuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Title\";\n  public static final String RUNTIME_TEST_RESULT_ENTRY_WITH_DEFAULT_SHIM_HELP_SHELL_DOC_HEADER =\n    \"RuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Header\";\n  private static final Class<?> PKG = ClusterRuntimeTestEntry.class;\n\n  public 
ClusterRuntimeTestEntry( MessageGetterFactory messageGetterFactory, RuntimeTestEntrySeverity severity,\n                                  String description, String message, DocAnchor docAnchor ) {\n    this( messageGetterFactory, severity, description, message, null, docAnchor );\n  }\n\n  public ClusterRuntimeTestEntry( RuntimeTestEntrySeverity severity, String description, String message,\n                                  RuntimeTestAction runtimeTestAction ) {\n    this( severity, description, message, null, runtimeTestAction );\n  }\n\n  public ClusterRuntimeTestEntry( MessageGetterFactory messageGetterFactory,\n                                  RuntimeTestResultEntry runtimeTestResultEntry, DocAnchor docAnchor ) {\n    this( runtimeTestResultEntry.getSeverity(), runtimeTestResultEntry.getDescription(),\n      runtimeTestResultEntry.getMessage(), runtimeTestResultEntry.getException(),\n      getDefaultAction( messageGetterFactory, runtimeTestResultEntry, docAnchor ) );\n  }\n\n  public ClusterRuntimeTestEntry( MessageGetterFactory messageGetterFactory, RuntimeTestEntrySeverity severity,\n                                  String description, String message, Throwable exception, DocAnchor docAnchor ) {\n    this( severity, description, message, exception, createDefaultAction( messageGetterFactory, severity, docAnchor ) );\n  }\n\n  public ClusterRuntimeTestEntry( RuntimeTestEntrySeverity severity, String description, String message,\n                                  Throwable exception, RuntimeTestAction runtimeTestAction ) {\n    super( severity, description, message, exception, runtimeTestAction );\n  }\n\n  private static RuntimeTestAction getDefaultAction( MessageGetterFactory messageGetterFactory,\n                                                     RuntimeTestResultEntry runtimeTestResultEntry,\n                                                     DocAnchor docAnchor ) {\n    RuntimeTestAction action = runtimeTestResultEntry.getAction();\n    if ( 
action != null ) {\n      return action;\n    }\n    return createDefaultAction( messageGetterFactory, runtimeTestResultEntry.getSeverity(), docAnchor );\n  }\n\n  private static RuntimeTestAction createDefaultAction( MessageGetterFactory messageGetterFactory,\n                                                        RuntimeTestEntrySeverity severity, DocAnchor docAnchor ) {\n    if ( severity == null || severity.ordinal() >= RuntimeTestEntrySeverity.WARNING.ordinal() ) {\n      MessageGetter messageGetter = messageGetterFactory.create( PKG );\n      String docUrl =\n          Const.getDocUrl( messageGetter.getMessage( RUNTIME_TEST_RESULT_ENTRY_WITH_DEFAULT_SHIM_HELP_SHELL_DOC ) );\n      if ( docAnchor != null ) {\n        docUrl += messageGetter.getMessage( docAnchor.getAnchorTextKey() );\n      }\n      return new RuntimeTestActionImpl( messageGetter.getMessage(\n        RUNTIME_TEST_RESULT_ENTRY_WITH_DEFAULT_SHIM_HELP_TROUBLESHOOTING_GUIDE ),\n        docUrl, severity, new HelpUrlPayload( messageGetterFactory,\n          messageGetter.getMessage(\n            RUNTIME_TEST_RESULT_ENTRY_WITH_DEFAULT_SHIM_HELP_SHELL_DOC_TITLE ),\n          messageGetter.getMessage(\n            RUNTIME_TEST_RESULT_ENTRY_WITH_DEFAULT_SHIM_HELP_SHELL_DOC_HEADER ),\n          docUrl ) );\n    }\n    return null;\n  }\n\n  public enum DocAnchor {\n    GENERAL( \"RuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.General\" ),\n    SHIM_LOAD( \"RuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.ShimLoad\" ),\n    CLUSTER_CONNECT( \"RuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.ClusterConnect\" ),\n    CLUSTER_CONNECT_GATEWAY( \"RuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.ClusterConnectGateway\" ),\n    ACCESS_DIRECTORY( \"RuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.AccessDirectory\" ),\n    OOZIE( \"RuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.Oozie\" ),\n    ZOOKEEPER( 
\"RuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.Zookeeper\" ),\n    KAFKA( \"RuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.Kafka\" );\n    private final String anchorTextKey;\n\n    DocAnchor( String anchorTextKey ) {\n      this.anchorTextKey = anchorTextKey;\n    }\n\n    public String getAnchorTextKey() {\n      return anchorTextKey;\n    }\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/Constants.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests;\n\n/**\n * Created by bryan on 8/14/15.\n */\npublic class Constants {\n  public static final String HADOOP_FILE_SYSTEM = \"Hadoop File System\";\n  public static final String MAP_REDUCE = \"Map Reduce\";\n  public static final String OOZIE = \"Oozie\";\n  public static final String ZOOKEEPER = \"Zookeeper\";\n  public static final String KAFKA = \"Kafka\";\n}\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/hdfs/GatewayListHomeDirectoryTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.hdfs;\n\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\n\n/**\n * Created by vamshidhar on 02/02/23.\n */\npublic class GatewayListHomeDirectoryTest extends ListHomeDirectoryTest {\n\n  public static final String TEST_PATH = \"/webhdfs/v1/~?op=LISTSTATUS\";\n\n  private final ConnectivityTestFactory connectivityTestFactory;\n\n  public GatewayListHomeDirectoryTest( MessageGetterFactory messageGetterFactory,\n                                       ConnectivityTestFactory connectivityTestFactory,\n                                       HadoopFileSystemLocator fileSystemLocator ) {\n    super( messageGetterFactory, fileSystemLocator );\n    this.connectivityTestFactory = connectivityTestFactory;\n  }\n\n  @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n    // Safe to cast as our accepts method will only return true for named clusters\n    NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n\n    // The connection information might be parameterized. 
Since we aren't tied to a transformation or job, in order to\n    // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n    // Here we try to resolve the parameters if we can:\n    Variables variables = new Variables();\n    variables.initializeVariablesFrom( null );\n\n    if ( !namedCluster.isUseGateway() ) {\n      return super.runTest( objectUnderTest );\n    } else {\n      return new RuntimeTestResultSummaryImpl( new ClusterRuntimeTestEntry( messageGetterFactory,\n        connectivityTestFactory.create( messageGetterFactory,\n            variables.environmentSubstitute( namedCluster.decodePassword( namedCluster.getGatewayUrl() ) ), TEST_PATH,\n            variables.environmentSubstitute( namedCluster.getGatewayUsername() ),\n            variables.environmentSubstitute( namedCluster.decodePassword( namedCluster.getGatewayPassword() ) ) )\n          .runTest(), ClusterRuntimeTestEntry.DocAnchor.CLUSTER_CONNECT_GATEWAY ) );\n    }\n  }\n}"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/hdfs/GatewayListRootDirectoryTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.hdfs;\n\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\n\n/**\n * Created by vamshidhar on 02/02/23.\n */\npublic class GatewayListRootDirectoryTest extends ListRootDirectoryTest {\n\n  public static final String TEST_PATH = \"/webhdfs/v1/?op=LISTSTATUS\";\n\n  private final ConnectivityTestFactory connectivityTestFactory;\n\n  public GatewayListRootDirectoryTest( MessageGetterFactory messageGetterFactory,\n                                       ConnectivityTestFactory connectivityTestFactory,\n                                       HadoopFileSystemLocator hadoopFileSystemLocator ) {\n    super( messageGetterFactory, hadoopFileSystemLocator );\n    this.connectivityTestFactory = connectivityTestFactory;\n  }\n\n  @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n    // Safe to cast as our accepts method will only return true for named clusters\n    NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n\n    // The connection information might be parameterized. 
Since we aren't tied to a transformation or job, in order to\n    // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n    // Here we try to resolve the parameters if we can:\n    Variables variables = new Variables();\n    variables.initializeVariablesFrom( null );\n\n    if ( !namedCluster.isUseGateway() ) {\n      return super.runTest( objectUnderTest );\n    } else {\n      return new RuntimeTestResultSummaryImpl( new ClusterRuntimeTestEntry( messageGetterFactory,\n        connectivityTestFactory.create( messageGetterFactory,\n            variables.environmentSubstitute( namedCluster.decodePassword( namedCluster.getGatewayUrl() ) ), TEST_PATH,\n            variables.environmentSubstitute( namedCluster.getGatewayUsername() ),\n            variables.environmentSubstitute( namedCluster.decodePassword( namedCluster.getGatewayPassword() ) ) )\n          .runTest(), ClusterRuntimeTestEntry.DocAnchor.CLUSTER_CONNECT_GATEWAY ) );\n    }\n  }\n}"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/hdfs/GatewayPingFileSystemEntryPoint.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.hdfs;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\n\n/**\n * Created by dstepanov on 26/04/17.\n */\npublic class GatewayPingFileSystemEntryPoint extends PingFileSystemEntryPointTest {\n\n  public static final String TEST_PATH = \"/webhdfs/v1/?op=LISTSTATUS\";\n\n  public GatewayPingFileSystemEntryPoint( MessageGetterFactory messageGetterFactory,\n                                          ConnectivityTestFactory connectivityTestFactory ) {\n    super( messageGetterFactory, connectivityTestFactory );\n  }\n\n  @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n    // Safe to cast as our accepts method will only return true for named clusters\n    NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n\n    // The connection information might be parameterized. 
Since we aren't tied to a transformation or job, in order to\n    // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n    // Here we try to resolve the parameters if we can:\n    Variables variables = new Variables();\n    variables.initializeVariablesFrom( null );\n\n    if ( !namedCluster.isUseGateway() ) {\n      return super.runTest( objectUnderTest );\n    } else {\n      return new RuntimeTestResultSummaryImpl( new ClusterRuntimeTestEntry( messageGetterFactory,\n        connectivityTestFactory.create( messageGetterFactory,\n          variables.environmentSubstitute( namedCluster.decodePassword( namedCluster.getGatewayUrl() ) ), TEST_PATH,\n          variables.environmentSubstitute( namedCluster.getGatewayUsername() ),\n          variables.environmentSubstitute( namedCluster.decodePassword( namedCluster.getGatewayPassword() ) ) )\n          .runTest(), ClusterRuntimeTestEntry.DocAnchor.CLUSTER_CONNECT_GATEWAY ) );\n    }\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/hdfs/GatewayWriteToAndDeleteFromUsersHomeFolderTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.hdfs;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.apache.http.Header;\nimport org.apache.http.HttpResponse;\nimport org.apache.http.client.methods.HttpDelete;\nimport org.apache.http.client.methods.HttpGet;\nimport org.apache.http.client.methods.HttpPut;\nimport org.apache.http.client.methods.HttpUriRequest;\nimport org.apache.http.client.protocol.HttpClientContext;\nimport org.apache.http.entity.StringEntity;\nimport org.apache.http.impl.client.CloseableHttpClient;\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.di.core.util.HttpClientManager;\nimport org.pentaho.di.core.util.HttpClientUtil;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\nimport org.pentaho.runtime.test.test.impl.RuntimeTestResultEntryImpl;\n\nimport javax.net.ssl.KeyManager;\nimport javax.net.ssl.SSLContext;\nimport javax.net.ssl.TrustManager;\nimport javax.net.ssl.X509TrustManager;\nimport java.io.IOException;\nimport java.io.UnsupportedEncodingException;\nimport java.net.URI;\nimport 
java.security.KeyManagementException;\nimport java.security.NoSuchAlgorithmException;\nimport java.security.SecureRandom;\nimport java.security.cert.X509Certificate;\nimport java.util.Arrays;\n\n/**\n * Created by vamshidhar on 8/14/15.\n */\npublic class GatewayWriteToAndDeleteFromUsersHomeFolderTest extends WriteToAndDeleteFromUsersHomeFolderTest {\n\n  public static final String CONNECT_TEST_HOST_BLANK_DESC = \"WriteToAndDeleteFromUsersHomeFolderTest.HostBlank.Desc\";\n  public static final String CONNECT_TEST_HOST_BLANK_MESSAGE =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.HostBlank.Message\";\n\n  public static final String CONNECT_FILE_SYSTEM_TEST_PATH = \"/webhdfs/v1/?op=LISTSTATUS\";\n  public static final String PENTAHO_SHIM_TEST_FILE_PATH = \"/webhdfs/v1/~/pentaho-shim-test-file.test?op=LISTSTATUS\";\n  public static final String PENTAHO_SHIM_TEST_FILE_PATH_DELETE = \"/webhdfs/v1/~/pentaho-shim-test-file.test?op=DELETE\";\n  public static final String PENTAHO_SHIM_TEST_FILE_PATH_CREATE = \"/webhdfs/v1/~/pentaho-shim-test-file.test?op=CREATE\";\n\n  private final HttpClientManager httpClientManager = HttpClientManager.getInstance();\n\n  public GatewayWriteToAndDeleteFromUsersHomeFolderTest( MessageGetterFactory messageGetterFactory,\n                                                         HadoopFileSystemLocator hadoopFileSystemLocator ) {\n    super( messageGetterFactory, hadoopFileSystemLocator );\n  }\n\n  @Override\n  public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n\n    // Safe to cast as our accepts method will only return true for named clusters\n    NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n\n    // The connection information might be parameterized. 
Since we aren't tied to a transformation or job, in order to\n    // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n    // Here we try to resolve the parameters if we can:\n    Variables variables = new Variables();\n    variables.initializeVariablesFrom( null );\n\n    if ( !namedCluster.isUseGateway() ) {\n      return super.runTest( objectUnderTest );\n    } else {\n      String url = namedCluster.decodePassword( namedCluster.getGatewayUrl() );\n      String password =\n        namedCluster.decodePassword( variables.environmentSubstitute( namedCluster.getGatewayPassword() ) );\n      String username = variables.environmentSubstitute( namedCluster.getGatewayUsername() );\n\n      URI uri = URI.create( url );\n      String hostname = uri.getHost();\n      int port = uri.getPort();\n      boolean ignoreSSL = variables.getBooleanValueOfVariable( \"${KETTLE_KNOX_IGNORE_SSL}\", false );\n\n      if ( StringUtils.isBlank( hostname ) ) {\n        return new RuntimeTestResultSummaryImpl( new RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity.WARNING,\n          messageGetter.getMessage( CONNECT_TEST_HOST_BLANK_DESC ),\n          messageGetter.getMessage( CONNECT_TEST_HOST_BLANK_MESSAGE ) ) );\n      }\n\n      boolean exists;\n      try {\n        exists = doesFileExists( url, username, password, port, ignoreSSL );\n      } catch ( IOException | NoSuchAlgorithmException | KeyManagementException e ) {\n        return new RuntimeTestResultSummaryImpl(\n          new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.FATAL, messageGetter\n            .getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CHECKING_IF_FILE_EXISTS_DESC ),\n            messageGetter\n              .getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CHECKING_IF_FILE_EXISTS_MESSAGE,\n                CONNECT_FILE_SYSTEM_TEST_PATH, CONNECT_FILE_SYSTEM_TEST_PATH ),\n            e, 
ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n      }\n\n      if ( exists ) {\n        return new RuntimeTestResultSummaryImpl(\n          new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n            messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_FILE_EXISTS_DESC ),\n            messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_FILE_EXISTS_MESSAGE,\n              CONNECT_FILE_SYSTEM_TEST_PATH,\n              CONNECT_FILE_SYSTEM_TEST_PATH ), ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n      } else {\n        String fileLocationUrl;\n        try {\n          fileLocationUrl = createFile( url, username, password, port, ignoreSSL );\n        } catch ( IOException | NoSuchAlgorithmException | KeyManagementException e ) {\n          return new RuntimeTestResultSummaryImpl(\n            new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n              messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CREATING_FILE_DESC ),\n              messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CREATING_FILE_MESSAGE,\n                CONNECT_FILE_SYSTEM_TEST_PATH, CONNECT_FILE_SYSTEM_TEST_PATH ),\n              e, ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n        }\n        RuntimeTestResultEntry writeExceptionEntry = null;\n        try {\n          if ( !appendContentToFile( fileLocationUrl, username, password, port, ignoreSSL ) ) {\n            writeExceptionEntry = new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n              messageGetter\n                .getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_TO_FILE_DESC ),\n              messageGetter.getMessage(\n                WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_TO_FILE_MESSAGE,\n                
CONNECT_FILE_SYSTEM_TEST_PATH, CONNECT_FILE_SYSTEM_TEST_PATH ),\n              ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY );\n          }\n        } catch ( IOException | NoSuchAlgorithmException | KeyManagementException e ) {\n          writeExceptionEntry = new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n            messageGetter\n              .getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_TO_FILE_DESC ),\n            messageGetter.getMessage(\n              WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_TO_FILE_MESSAGE,\n              CONNECT_FILE_SYSTEM_TEST_PATH, CONNECT_FILE_SYSTEM_TEST_PATH ),\n            e, ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY );\n        }\n\n        try {\n          if ( deleteFile( url, username, password, port, ignoreSSL ) ) {\n            if ( writeExceptionEntry == null ) {\n              return new RuntimeTestResultSummaryImpl(\n                new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.INFO,\n                  messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_SUCCESS_DESC ),\n                  messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_SUCCESS_MESSAGE,\n                    PENTAHO_SHIM_TEST_FILE_PATH ), ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n            } else {\n              return new RuntimeTestResultSummaryImpl( writeExceptionEntry );\n            }\n          } else {\n            if ( writeExceptionEntry == null ) {\n              return new RuntimeTestResultSummaryImpl(\n                new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n                  messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_UNABLE_TO_DELETE_DESC ),\n                  messageGetter.getMessage(\n                    
WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_UNABLE_TO_DELETE_MESSAGE,\n                    PENTAHO_SHIM_TEST_FILE_PATH,\n                    PENTAHO_SHIM_TEST_FILE_PATH ), ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n            } else {\n              return new RuntimeTestResultSummaryImpl( writeExceptionEntry );\n            }\n          }\n        } catch ( IOException | NoSuchAlgorithmException | KeyManagementException e ) {\n          RuntimeTestResultEntryImpl deleteExceptionEntry =\n            new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n              messageGetter\n                .getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_DELETING_FILE_DESC ),\n              messageGetter.getMessage(\n                WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_DELETING_FILE_MESSAGE,\n                PENTAHO_SHIM_TEST_FILE_PATH, PENTAHO_SHIM_TEST_FILE_PATH ),\n              e, ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY );\n          if ( writeExceptionEntry == null ) {\n            return new RuntimeTestResultSummaryImpl( deleteExceptionEntry );\n          } else {\n            return new RuntimeTestResultSummaryImpl(\n              new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n                messageGetter.getMessage(\n                  WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_DELETING_FILE_DESC ), messageGetter\n                .getMessage(\n                  WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_DELETING_FILE_MESSAGE ),\n                ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ),\n              Arrays.asList( writeExceptionEntry, deleteExceptionEntry ) );\n          }\n        }\n      }\n    }\n  }\n\n  private boolean doesFileExists( String url, String user, String password, int port, boolean ignoreSSL )\n    throws NoSuchAlgorithmException, IOException, KeyManagementException 
{\n\n    ServerResponse response =\n      runServiceTest( RequestType.GET, URI.create( url + PENTAHO_SHIM_TEST_FILE_PATH ), user, password, port,\n        ignoreSSL );\n\n    return response.getStatusCode() == 200;\n  }\n\n  private String createFile( String url, String user, String password, int port, boolean ignoreSSL )\n    throws NoSuchAlgorithmException, IOException, KeyManagementException {\n    ServerResponse responseCreate =\n      runServiceTest( RequestType.CREATE, URI.create( url + PENTAHO_SHIM_TEST_FILE_PATH_CREATE ), user, password, port,\n        ignoreSSL );\n\n    if ( responseCreate.getStatusCode() == 307 ) {\n      return responseCreate.getLocationHeader();\n    } else {\n      return null;\n    }\n  }\n\n  private boolean appendContentToFile( String location, String user, String password, int port, boolean ignoreSSL )\n    throws NoSuchAlgorithmException, IOException, KeyManagementException {\n    ServerResponse responseWrite =\n      runServiceTest( RequestType.APPEND, URI.create( location ), user, password, port,\n        ignoreSSL );\n\n    return responseWrite.getStatusCode() == 201;\n  }\n\n  private boolean deleteFile( String url, String user, String password, int port, boolean ignoreSSL )\n    throws NoSuchAlgorithmException, IOException, KeyManagementException {\n\n    ServerResponse response =\n      runServiceTest( RequestType.DELETE, URI.create( url + PENTAHO_SHIM_TEST_FILE_PATH_DELETE ), user, password, port,\n        ignoreSSL );\n\n    return response.getStatusCode() == 200;\n  }\n\n  private ServerResponse runServiceTest( RequestType requestType, URI uri, String user, String password, int port,\n                                         boolean ignoreSSL )\n    throws NoSuchAlgorithmException, KeyManagementException, IOException {\n\n    // Ignore ssl certificate issues if KETTLE_KNOX_IGNORE_SSL = true\n    if ( ignoreSSL ) {\n      SSLContext ctx = getTlsContext();\n      initContextWithTrustAll( ctx );\n      
SSLContext.setDefault( ctx );\n    }\n    HttpClientContext context = null;\n\n    HttpUriRequest method = getHttpRequestMethod( requestType, uri );\n\n    CloseableHttpClient httpClient = null;\n    try {\n      if ( StringUtils.isNotBlank( user ) ) {\n        httpClient = getHttpClient( user, password );\n        context = HttpClientUtil.createPreemptiveBasicAuthentication( uri.getHost(), port, user, password );\n      } else {\n        httpClient = httpClientManager.createDefaultClient();\n      }\n\n      HttpResponse httpResponse =\n        context != null ? httpClient.execute( method, context ) : httpClient.execute( method );\n\n      Header locationHeader = httpResponse.getFirstHeader( \"Location\" );\n      return new ServerResponse( locationHeader != null ? locationHeader.getValue() : null,\n        httpResponse.getStatusLine().getStatusCode() );\n    } finally {\n      if ( httpClient != null ) {\n        httpClient.close();\n      }\n    }\n  }\n\n  private HttpUriRequest getHttpRequestMethod( RequestType requestType, URI uri ) throws UnsupportedEncodingException {\n    if ( requestType == RequestType.GET ) {\n      return new HttpGet( uri.toString() );\n    } else if ( requestType == RequestType.APPEND ) {\n      HttpPut putMethod = new HttpPut( uri );\n      putMethod.setEntity( new StringEntity( HELLO_CLUSTER ) );\n      return putMethod;\n    } else if ( requestType == RequestType.CREATE ) {\n      return new HttpPut( uri );\n    } else if ( requestType == RequestType.DELETE ) {\n      return new HttpDelete( uri );\n    }\n\n    return null;\n  }\n\n  void initContextWithTrustAll( SSLContext ctx ) throws KeyManagementException {\n    ctx.init( new KeyManager[ 0 ], new TrustManager[] { new X509TrustManager() {\n\n      @Override public void checkClientTrusted( X509Certificate[] x509Certificates, String s ) {\n        // Nothing to do\n      }\n\n      @Override public void checkServerTrusted( X509Certificate[] x509Certificates, String s ) {\n        
// Nothing to do\n      }\n\n      @Override public X509Certificate[] getAcceptedIssuers() {\n        return new X509Certificate[ 0 ];\n      }\n    } }, new SecureRandom() );\n  }\n\n  SSLContext getTlsContext() throws NoSuchAlgorithmException {\n    return SSLContext.getInstance( \"TLS\" );\n  }\n\n  CloseableHttpClient getHttpClient( String user, String password ) {\n    HttpClientManager.HttpClientBuilderFacade clientBuilder = httpClientManager.createBuilder();\n    clientBuilder.setCredentials( user, password );\n    return clientBuilder.build();\n  }\n\n  enum RequestType {\n    GET, APPEND, CREATE, DELETE\n  }\n\n  static class ServerResponse {\n    private final String locationHeader;\n    private final int statusCode;\n\n    ServerResponse( String locationHeader, int statusCode ) {\n      this.locationHeader = locationHeader;\n      this.statusCode = statusCode;\n    }\n\n    public String getLocationHeader() {\n      return locationHeader;\n    }\n\n    public int getStatusCode() {\n      return statusCode;\n    }\n  }\n}"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/hdfs/ListDirectoryTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.hdfs;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.big.data.impl.cluster.tests.Constants;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.hadoop.shim.api.hdfs.exceptions.AccessControlException;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileStatus;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystem;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemPath;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\nimport org.pentaho.runtime.test.test.impl.BaseRuntimeTest;\n\nimport java.io.IOException;\nimport java.util.Arrays;\nimport java.util.HashSet;\n\n/**\n * Created by bryan on 8/14/15.\n */\npublic class ListDirectoryTest extends BaseRuntimeTest {\n  public static final String LIST_DIRECTORY_TEST_COULDNT_GET_FILE_SYSTEM_DESC =\n    \"ListDirectoryTest.CouldntGetFileSystem.Desc\";\n  public static final String LIST_DIRECTORY_TEST_COULDNT_GET_FILE_SYSTEM_MESSAGE =\n    \"ListDirectoryTest.CouldntGetFileSystem.Message\";\n  public static final 
String LIST_DIRECTORY_TEST_SUCCESS_DESC = \"ListDirectoryTest.Success.Desc\";\n  public static final String LIST_DIRECTORY_TEST_SUCCESS_MESSAGE = \"ListDirectoryTest.Success.Message\";\n  public static final String LIST_DIRECTORY_TEST_ACCESS_CONTROL_EXCEPTION_DESC =\n    \"ListDirectoryTest.AccessControlException.Desc\";\n  public static final String LIST_DIRECTORY_TEST_ACCESS_CONTROL_EXCEPTION_MESSAGE =\n    \"ListDirectoryTest.AccessControlException.Message\";\n  public static final String LIST_DIRECTORY_TEST_ERROR_LISTING_DIRECTORY_DESC =\n    \"ListDirectoryTest.ErrorListingDirectory.Desc\";\n  public static final String LIST_DIRECTORY_TEST_ERROR_LISTING_DIRECTORY_MESSAGE =\n    \"ListDirectoryTest.ErrorListingDirectory.Message\";\n  public static final String LIST_DIRECTORY_TEST_ERROR_INITIALIZING_CLUSTER_DESC =\n    \"ListDirectoryTest.ErrorInitializingCluster.Desc\";\n  public static final String LIST_DIRECTORY_TEST_ERROR_INITIALIZING_CLUSTER_MESSAGE =\n    \"ListDirectoryTest.ErrorInitializingCluster.Message\";\n  private static final Class<?> PKG = ListDirectoryTest.class;\n  private final HadoopFileSystemLocator hadoopFileSystemLocator;\n  private final String directory;\n  protected final MessageGetterFactory messageGetterFactory;\n  private final MessageGetter messageGetter;\n\n  public ListDirectoryTest( MessageGetterFactory messageGetterFactory, HadoopFileSystemLocator hadoopFileSystemLocator,\n                            String directory, String id, String name ) {\n    super( NamedCluster.class, Constants.HADOOP_FILE_SYSTEM, id, name, new HashSet<>(\n      Arrays.asList( PingFileSystemEntryPointTest.HADOOP_FILE_SYSTEM_PING_FILE_SYSTEM_ENTRY_POINT_TEST ) ) );\n    this.hadoopFileSystemLocator = hadoopFileSystemLocator;\n    this.directory = directory;\n    this.messageGetterFactory = messageGetterFactory;\n    this.messageGetter = messageGetterFactory.create( PKG );\n  }\n\n  @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) 
{\n    // Safe to cast as our accepts method will only return true for named clusters\n    NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n    try {\n      HadoopFileSystem hadoopFilesystem = hadoopFileSystemLocator.getHadoopFilesystem( namedCluster );\n      if ( hadoopFilesystem == null ) {\n        return new RuntimeTestResultSummaryImpl(\n          new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.FATAL,\n            messageGetter.getMessage( LIST_DIRECTORY_TEST_COULDNT_GET_FILE_SYSTEM_DESC ),\n            messageGetter.getMessage( LIST_DIRECTORY_TEST_COULDNT_GET_FILE_SYSTEM_MESSAGE, directory ),\n            ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n      } else {\n        HadoopFileSystemPath hadoopFilesystemPath;\n        if ( Const.isEmpty( directory ) ) {\n          hadoopFilesystemPath = hadoopFilesystem.getHomeDirectory();\n        } else {\n          hadoopFilesystemPath = hadoopFilesystem.getPath( directory );\n        }\n        try {\n          HadoopFileStatus[] hadoopFileStatuses = hadoopFilesystem.listStatus( hadoopFilesystemPath );\n          StringBuilder paths = new StringBuilder();\n          for ( HadoopFileStatus hadoopFileStatus : hadoopFileStatuses ) {\n            paths.append( hadoopFileStatus.getPath() );\n            paths.append( \", \" );\n          }\n          if ( paths.length() > 0 ) {\n            paths.setLength( paths.length() - 2 );\n          }\n          return new RuntimeTestResultSummaryImpl(\n            new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.INFO,\n              messageGetter.getMessage( LIST_DIRECTORY_TEST_SUCCESS_DESC ),\n              messageGetter.getMessage( LIST_DIRECTORY_TEST_SUCCESS_MESSAGE, paths.toString() ),\n              ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n        } catch ( AccessControlException e ) {\n          return new RuntimeTestResultSummaryImpl(\n            new ClusterRuntimeTestEntry( 
messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n              messageGetter.getMessage( LIST_DIRECTORY_TEST_ACCESS_CONTROL_EXCEPTION_DESC ), messageGetter\n              .getMessage( LIST_DIRECTORY_TEST_ACCESS_CONTROL_EXCEPTION_MESSAGE, hadoopFilesystemPath.toString() ),\n              e, ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n        } catch ( IOException e ) {\n          return new RuntimeTestResultSummaryImpl(\n            new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.FATAL,\n              messageGetter.getMessage( LIST_DIRECTORY_TEST_ERROR_LISTING_DIRECTORY_DESC ), messageGetter.getMessage(\n                LIST_DIRECTORY_TEST_ERROR_LISTING_DIRECTORY_MESSAGE, hadoopFilesystemPath.toString() ), e,\n              ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n        }\n      }\n    } catch ( ClusterInitializationException e ) {\n      return new RuntimeTestResultSummaryImpl(\n        new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.FATAL,\n          messageGetter.getMessage( LIST_DIRECTORY_TEST_ERROR_INITIALIZING_CLUSTER_DESC ),\n          messageGetter.getMessage( LIST_DIRECTORY_TEST_ERROR_INITIALIZING_CLUSTER_MESSAGE, namedCluster.getName() ),\n          e, ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n    }\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/hdfs/ListHomeDirectoryTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.hdfs;\n\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\n\n/**\n * Created by bryan on 8/14/15.\n */\npublic class ListHomeDirectoryTest extends ListDirectoryTest {\n  public static final String HADOOP_FILE_SYSTEM_LIST_HOME_DIRECTORY_TEST =\n    \"hadoopFileSystemListHomeDirectoryTest\";\n  public static final String LIST_HOME_DIRECTORY_TEST_NAME = \"ListHomeDirectoryTest.Name\";\n  private static final Class<?> PKG = ListHomeDirectoryTest.class;\n\n  public ListHomeDirectoryTest( MessageGetterFactory messageGetterFactory,\n                                HadoopFileSystemLocator hadoopFileSystemLocator ) {\n    super( messageGetterFactory, hadoopFileSystemLocator, \"\", HADOOP_FILE_SYSTEM_LIST_HOME_DIRECTORY_TEST,\n      messageGetterFactory.create( PKG ).getMessage( LIST_HOME_DIRECTORY_TEST_NAME ) );\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/hdfs/ListRootDirectoryTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.hdfs;\n\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\n\n/**\n * Created by bryan on 8/14/15.\n */\npublic class ListRootDirectoryTest extends ListDirectoryTest {\n  public static final String HADOOP_FILE_SYSTEM_LIST_ROOT_DIRECTORY_TEST =\n    \"hadoopFileSystemListRootDirectoryTest\";\n  public static final String LIST_ROOT_DIRECTORY_TEST_NAME = \"ListRootDirectoryTest.Name\";\n  private static final Class<?> PKG = ListRootDirectoryTest.class;\n\n  public ListRootDirectoryTest( MessageGetterFactory messageGetterFactory,\n                                HadoopFileSystemLocator hadoopFileSystemLocator ) {\n    super( messageGetterFactory, hadoopFileSystemLocator, \"/\", HADOOP_FILE_SYSTEM_LIST_ROOT_DIRECTORY_TEST,\n      messageGetterFactory.create( PKG ).getMessage( LIST_ROOT_DIRECTORY_TEST_NAME ) );\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/hdfs/PingFileSystemEntryPointTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.hdfs;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.big.data.impl.cluster.tests.Constants;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\nimport org.pentaho.runtime.test.test.impl.BaseRuntimeTest;\n\nimport java.util.HashSet;\n\n/**\n * Created by bryan on 8/14/15.\n */\npublic class PingFileSystemEntryPointTest extends BaseRuntimeTest {\n  public static final String HADOOP_FILE_SYSTEM_PING_FILE_SYSTEM_ENTRY_POINT_TEST =\n    \"_hadoopFileSystemPingFileSystemEntryPointTest\";\n  public static final String PING_FILE_SYSTEM_ENTRY_POINT_TEST_NAME = \"PingFileSystemEntryPointTest.Name\";\n  private static final Class<?> PKG = PingFileSystemEntryPointTest.class;\n  public static final String PING_FILE_SYSTEM_ENTRY_POINT_TEST_IS_MAPR_DESC =\n    \"PingFileSystemEntryPointTest.isMapr.Desc\";\n  public static final String PING_FILE_SYSTEM_ENTRY_POINT_TEST_IS_MAPR_MESSAGE =\n    \"PingFileSystemEntryPointTest.isMapr.Message\";\n  protected final MessageGetterFactory messageGetterFactory;\n  
private final MessageGetter messageGetter;\n  protected final ConnectivityTestFactory connectivityTestFactory;\n\n  public PingFileSystemEntryPointTest( MessageGetterFactory messageGetterFactory,\n                                       ConnectivityTestFactory connectivityTestFactory ) {\n    super( NamedCluster.class, Constants.HADOOP_FILE_SYSTEM, HADOOP_FILE_SYSTEM_PING_FILE_SYSTEM_ENTRY_POINT_TEST,\n      messageGetterFactory.create( PKG ).getMessage( PING_FILE_SYSTEM_ENTRY_POINT_TEST_NAME ), new HashSet<String>() );\n    this.messageGetterFactory = messageGetterFactory;\n    this.messageGetter = messageGetterFactory.create( PKG );\n    this.connectivityTestFactory = connectivityTestFactory;\n  }\n\n  @Override\n  public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n    // Safe to cast as our accepts method will only return true for named clusters\n    NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n    // The connection information might be parameterized. 
Since we aren't tied to a transformation or job, in order to\n    // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n    // Here we try to resolve the parameters if we can:\n    Variables variables = new Variables();\n    variables.initializeVariablesFrom( null );\n\n    // The connectivity test (ping the name node) is not applicable for MapR clusters due to their native client, so\n    // just pass this test and move on\n    if ( namedCluster.isMapr() ) {\n      return new RuntimeTestResultSummaryImpl(\n        new ClusterRuntimeTestEntry( RuntimeTestEntrySeverity.INFO,\n          messageGetter.getMessage( PING_FILE_SYSTEM_ENTRY_POINT_TEST_IS_MAPR_DESC ),\n          messageGetter.getMessage( PING_FILE_SYSTEM_ENTRY_POINT_TEST_IS_MAPR_MESSAGE ), null\n        )\n      );\n    } else {\n\n      return new RuntimeTestResultSummaryImpl( new ClusterRuntimeTestEntry( messageGetterFactory,\n        connectivityTestFactory.create( messageGetterFactory,\n          variables.environmentSubstitute( namedCluster.getHdfsHost() ),\n          variables.environmentSubstitute( namedCluster.getHdfsPort() ),\n          true ).runTest(), ClusterRuntimeTestEntry.DocAnchor.CLUSTER_CONNECT ) );\n    }\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/hdfs/WriteToAndDeleteFromUsersHomeFolderTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.hdfs;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.big.data.impl.cluster.tests.Constants;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystem;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemPath;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\nimport org.pentaho.runtime.test.test.impl.BaseRuntimeTest;\nimport org.pentaho.runtime.test.test.impl.RuntimeTestResultEntryImpl;\n\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.nio.charset.Charset;\nimport java.util.Arrays;\nimport java.util.HashSet;\n\n/**\n * Created by bryan on 8/14/15.\n */\npublic class WriteToAndDeleteFromUsersHomeFolderTest extends BaseRuntimeTest {\n  public static final String HADOOP_FILE_SYSTEM_WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST =\n    \"hadoopFileSystemWriteToAndDeleteFromUsersHomeFolderTest\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_NAME 
=\n    \"WriteToAndDeleteFromUsersHomeFolderTest.Name\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_COULDNT_GET_FILE_SYSTEM_DESC =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.CouldntGetFileSystem.Desc\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_COULDNT_GET_FILE_SYSTEM_MESSAGE =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.CouldntGetFileSystem.Message\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_FILE_EXISTS_DESC =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.FileExists.Desc\";\n  public static final String PENTAHO_SHIM_TEST_FILE_TEST = \"pentaho-shim-test-file.test\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_FILE_EXISTS_MESSAGE =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.FileExists.Message\";\n  public static final String HELLO_CLUSTER = \"Hello, Cluster\";\n  public static final Charset UTF8 = Charset.forName( \"UTF-8\" );\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_SUCCESS_DESC =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.Success.Desc\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_SUCCESS_MESSAGE =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.Success.Message\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_UNABLE_TO_DELETE_DESC =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.UnableToDelete.Desc\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_UNABLE_TO_DELETE_MESSAGE =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.UnableToDelete.Message\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_INITIALIZING_CLUSTER_DESC =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.ErrorInitializingCluster.Desc\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_INITIALIZING_CLUSTER_MESSAGE =\n    
\"WriteToAndDeleteFromUsersHomeFolderTest.ErrorInitializingCluster.Message\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CHECKING_IF_FILE_EXISTS_DESC =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.ErrorCheckingIfFileExists.Desc\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CHECKING_IF_FILE_EXISTS_MESSAGE =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.ErrorCheckingIfFileExists.Message\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CREATING_FILE_DESC =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.ErrorCreatingFile.Desc\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CREATING_FILE_MESSAGE =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.ErrorCreatingFile.Message\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_TO_FILE_DESC =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.ErrorWritingToFile.Desc\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_TO_FILE_MESSAGE =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.ErrorWritingToFile.Message\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_DELETING_FILE_DESC =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.ErrorDeletingFile.Desc\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_DELETING_FILE_MESSAGE =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.ErrorDeletingFile.Message\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_DELETING_FILE_DESC =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.ErrorWritingDeletingFile.Desc\";\n  public static final String WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_DELETING_FILE_MESSAGE =\n    \"WriteToAndDeleteFromUsersHomeFolderTest.ErrorWritingDeletingFile.Message\";\n  private static final Class<?> PKG = 
WriteToAndDeleteFromUsersHomeFolderTest.class;\n  private final HadoopFileSystemLocator hadoopFileSystemLocator;\n  protected final MessageGetterFactory messageGetterFactory;\n  protected final MessageGetter messageGetter;\n\n  public WriteToAndDeleteFromUsersHomeFolderTest( MessageGetterFactory messageGetterFactory,\n                                                  HadoopFileSystemLocator hadoopFileSystemLocator ) {\n    super( NamedCluster.class, Constants.HADOOP_FILE_SYSTEM,\n      HADOOP_FILE_SYSTEM_WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST,\n      messageGetterFactory.create( PKG ).getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_NAME ),\n      new HashSet<>( Arrays.asList( ListHomeDirectoryTest.HADOOP_FILE_SYSTEM_LIST_HOME_DIRECTORY_TEST ) ) );\n    this.hadoopFileSystemLocator = hadoopFileSystemLocator;\n    this.messageGetterFactory = messageGetterFactory;\n    this.messageGetter = messageGetterFactory.create( PKG );\n  }\n\n  @Override\n  public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n    // Safe to cast as our accepts method will only return true for named clusters\n    NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n    try {\n      HadoopFileSystem hadoopFilesystem = hadoopFileSystemLocator.getHadoopFilesystem( namedCluster );\n      if ( hadoopFilesystem == null ) {\n        return new RuntimeTestResultSummaryImpl(\n          new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.FATAL,\n            messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_COULDNT_GET_FILE_SYSTEM_DESC ),\n            messageGetter.getMessage(\n              WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_COULDNT_GET_FILE_SYSTEM_MESSAGE,\n              namedCluster.getName() ), ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n      } else {\n        HadoopFileSystemPath path = hadoopFilesystem.getPath( PENTAHO_SHIM_TEST_FILE_TEST );\n        HadoopFileSystemPath 
qualifiedPath = hadoopFilesystem.makeQualified( path );\n        Boolean exists;\n        try {\n          exists = hadoopFilesystem.exists( path );\n        } catch ( IOException e ) {\n          return new RuntimeTestResultSummaryImpl(\n            new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.FATAL, messageGetter\n              .getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CHECKING_IF_FILE_EXISTS_DESC ),\n              messageGetter\n                .getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CHECKING_IF_FILE_EXISTS_MESSAGE,\n                  qualifiedPath.getName(), qualifiedPath.getPath() ),\n              e, ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n        }\n        if ( exists ) {\n          return new RuntimeTestResultSummaryImpl(\n            new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n              messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_FILE_EXISTS_DESC ),\n              messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_FILE_EXISTS_MESSAGE,\n                qualifiedPath.getName(),\n                qualifiedPath.getPath() ), ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n        } else {\n          OutputStream outputStream;\n          try {\n            outputStream = hadoopFilesystem.create( path );\n          } catch ( IOException e ) {\n            return new RuntimeTestResultSummaryImpl(\n              new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n                messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CREATING_FILE_DESC ),\n                messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CREATING_FILE_MESSAGE,\n                  qualifiedPath.getName(), qualifiedPath.getPath() ),\n                e, 
ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n          }\n          RuntimeTestResultEntry writeExceptionEntry = null;\n          try {\n            outputStream.write( HELLO_CLUSTER.getBytes( UTF8 ) );\n          } catch ( IOException e ) {\n            writeExceptionEntry = new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n              messageGetter\n                .getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_TO_FILE_DESC ),\n              messageGetter.getMessage(\n                WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_TO_FILE_MESSAGE,\n                qualifiedPath.getName(), qualifiedPath.getPath() ),\n              e, ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY );\n          } finally {\n            try {\n              outputStream.close();\n            } catch ( IOException e ) {\n              //Ignore\n            }\n          }\n\n          try {\n            if ( hadoopFilesystem.delete( path, false ) ) {\n              if ( writeExceptionEntry == null ) {\n                return new RuntimeTestResultSummaryImpl(\n                  new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.INFO,\n                    messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_SUCCESS_DESC ),\n                    messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_SUCCESS_MESSAGE,\n                      qualifiedPath.toString() ), ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n              } else {\n                return new RuntimeTestResultSummaryImpl( writeExceptionEntry );\n              }\n            } else {\n              if ( writeExceptionEntry == null ) {\n                return new RuntimeTestResultSummaryImpl(\n                  new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n                    messageGetter.getMessage( 
WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_UNABLE_TO_DELETE_DESC ),\n                    messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_UNABLE_TO_DELETE_MESSAGE,\n                      qualifiedPath.getName(),\n                      qualifiedPath.getPath() ), ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n              } else {\n                return new RuntimeTestResultSummaryImpl( writeExceptionEntry );\n              }\n            }\n          } catch ( IOException e ) {\n            RuntimeTestResultEntryImpl deleteExceptionEntry =\n              new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n                messageGetter\n                  .getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_DELETING_FILE_DESC ),\n                messageGetter.getMessage(\n                  WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_DELETING_FILE_MESSAGE,\n                  qualifiedPath.getName(), qualifiedPath.getPath() ),\n                e, ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY );\n            if ( writeExceptionEntry == null ) {\n              return new RuntimeTestResultSummaryImpl( deleteExceptionEntry );\n            } else {\n              return new RuntimeTestResultSummaryImpl(\n                new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n                  messageGetter.getMessage(\n                    WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_DELETING_FILE_DESC ), messageGetter\n                  .getMessage(\n                    WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_DELETING_FILE_MESSAGE ),\n                  ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ),\n                Arrays.asList( writeExceptionEntry, deleteExceptionEntry ) );\n            }\n          }\n        }\n      }\n    } catch ( ClusterInitializationException e ) {\n      return new 
RuntimeTestResultSummaryImpl(\n        new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.FATAL,\n          messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_INITIALIZING_CLUSTER_DESC ),\n          messageGetter.getMessage( WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_INITIALIZING_CLUSTER_MESSAGE,\n            namedCluster.getName() ),\n          e, ClusterRuntimeTestEntry.DocAnchor.ACCESS_DIRECTORY ) );\n    }\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/kafka/KafkaConnectTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.impl.cluster.tests.kafka;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.apache.kafka.clients.CommonClientConfigs;\nimport org.apache.kafka.clients.consumer.Consumer;\nimport org.apache.kafka.clients.consumer.ConsumerConfig;\nimport org.apache.kafka.clients.consumer.KafkaConsumer;\nimport org.apache.kafka.common.config.SaslConfigs;\nimport org.apache.kafka.common.serialization.StringDeserializer;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.big.data.impl.cluster.tests.Constants;\nimport org.pentaho.hadoop.shim.api.jaas.JaasConfigService;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\nimport org.pentaho.runtime.test.test.impl.BaseRuntimeTest;\nimport org.pentaho.runtime.test.test.impl.RuntimeTestResultEntryImpl;\n\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.function.Function;\n\npublic class KafkaConnectTest extends BaseRuntimeTest {\n  public static final String 
KAFKA_CONNECT_TEST = \"KafkaConnectTest\";\n  public static final String KAFKA_CONNECT_TEST_NAME = \"KafkaConnectTest.Name\";\n  public static final String KAFKA_CONNECT_TEST_MALFORMED_URL_DESC = \"KafkaConnectTest.MalformedUrl.Desc\";\n  public static final String KAFKA_CONNECT_TEST_MALFORMED_URL_MESSAGE = \"KafkaConnectTest.MalformedUrl.Message\";\n  public static final String KAFKA_CONNECT_TEST_SUCCESS_DESC = \"KafkaConnectTest.Success.Desc\";\n  public static final String KAFKA_CONNECT_TEST_SUCCESS_MESSAGE = \"KafkaConnectTest.Success.Message\";\n  public static final String KAFKA_CONNECT_TEST_EMPTY_DESC = \"KafkaConnectTest.Empty.Desc\";\n  public static final String KAFKA_CONNECT_TEST_EMPTY_MESSAGE = \"KafkaConnectTest.Empty.Message\";\n  private final MessageGetter messageGetter;\n  Function<Map<String, Object>, Consumer> consumerFunction;\n  static final Class<?> PKG = KafkaConnectTest.class;\n  protected final MessageGetterFactory messageGetterFactory;\n  private NamedClusterServiceLocator namedClusterServiceLocator;\n\n  public KafkaConnectTest( MessageGetterFactory messageGetterFactory, NamedClusterServiceLocator namedClusterServiceLocator ) {\n    this( messageGetterFactory, KafkaConsumer::new, namedClusterServiceLocator );\n  }\n\n  KafkaConnectTest( MessageGetterFactory messageGetterFactory, Function<Map<String, Object>, Consumer> consumerFunction,\n                    final NamedClusterServiceLocator namedClusterServiceLocator ) {\n    super( NamedCluster.class, Constants.KAFKA, KAFKA_CONNECT_TEST,\n      messageGetterFactory.create( PKG ).getMessage( KAFKA_CONNECT_TEST_NAME ), Collections.emptySet() );\n    this.messageGetterFactory = messageGetterFactory;\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n    messageGetter = messageGetterFactory.create( PKG );\n    this.consumerFunction = consumerFunction;\n  }\n\n  @Override public RuntimeTestResultSummary runTest( final Object objectUnderTest ) {\n    NamedCluster namedCluster = 
(NamedCluster) objectUnderTest;\n    // The connection information might be parameterized. Since we aren't tied to a transformation or job, in order to\n    // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n    // Here we try to resolve the parameters if we can:\n    Variables variables = new Variables();\n    variables.initializeVariablesFrom( null );\n    String bootstrapServers = variables.environmentSubstitute( namedCluster.getKafkaBootstrapServers() );\n    if ( StringUtils.isBlank( bootstrapServers ) ) {\n      return new RuntimeTestResultSummaryImpl( new ClusterRuntimeTestEntry(\n        messageGetterFactory,\n        new RuntimeTestResultEntryImpl(\n          RuntimeTestEntrySeverity.SKIPPED,\n          messageGetter.getMessage( KAFKA_CONNECT_TEST_EMPTY_DESC ),\n          messageGetter.getMessage( KAFKA_CONNECT_TEST_EMPTY_MESSAGE ) ),\n        ClusterRuntimeTestEntry.DocAnchor.KAFKA ) );\n    }\n    HashMap<String, Object> configs = new HashMap<>();\n    configs.put( ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers );\n    configs.put( ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class );\n    configs.put( ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class );\n    configs.put( ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 10000 );\n    configs.put( ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 9000 );\n    try {\n      JaasConfigService jaasConfigService = namedClusterServiceLocator.getService( namedCluster, JaasConfigService.class );\n      if ( jaasConfigService != null ) {\n        if ( jaasConfigService.isKerberos() ) {\n          configs.put( SaslConfigs.SASL_JAAS_CONFIG, jaasConfigService.getJaasConfig() );\n          configs.put( CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, \"SASL_PLAINTEXT\" );\n        }\n      }\n    } catch ( ClusterInitializationException e ) {\n      //ok, try and connect anyway.  
If kafka requires kerberos we'll still get an error\n    }\n    try ( Consumer consumer = consumerFunction.apply( configs ) ) {\n      consumer.listTopics();\n      return new RuntimeTestResultSummaryImpl( new ClusterRuntimeTestEntry(\n        messageGetterFactory,\n        new RuntimeTestResultEntryImpl( RuntimeTestEntrySeverity.INFO,\n          messageGetter.getMessage( KAFKA_CONNECT_TEST_SUCCESS_DESC ),\n          messageGetter.getMessage( KAFKA_CONNECT_TEST_SUCCESS_MESSAGE ) ),\n        ClusterRuntimeTestEntry.DocAnchor.KAFKA ) );\n    } catch ( Exception e ) {\n      return new RuntimeTestResultSummaryImpl( new ClusterRuntimeTestEntry(\n        messageGetterFactory,\n        new RuntimeTestResultEntryImpl(\n          RuntimeTestEntrySeverity.ERROR,\n          messageGetter.getMessage( KAFKA_CONNECT_TEST_MALFORMED_URL_DESC ),\n          messageGetter.getMessage( KAFKA_CONNECT_TEST_MALFORMED_URL_MESSAGE, bootstrapServers ) ),\n        ClusterRuntimeTestEntry.DocAnchor.KAFKA ) );\n    }\n  }\n}\n\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/mr/GatewayPingJobTrackerTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.mr;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\n\n/**\n * Created by dstepanov on 27/04/17.\n */\npublic class GatewayPingJobTrackerTest extends PingJobTrackerTest {\n  private static final String TEST_PATH = \"/resourcemanager/v1/cluster/info\";\n\n  public GatewayPingJobTrackerTest( MessageGetterFactory messageGetterFactory,\n                                    ConnectivityTestFactory connectivityTestFactory ) {\n    super( messageGetterFactory, connectivityTestFactory );\n  }\n\n  @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n    // Safe to cast as our accepts method will only return true for named clusters\n    NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n\n    // The connection information might be parameterized. 
Since we aren't tied to a transformation or job, in order to\n    // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n    // Here we try to resolve the parameters if we can:\n    Variables variables = new Variables();\n    variables.initializeVariablesFrom( null );\n\n    if ( !namedCluster.isUseGateway() ) {\n      return super.runTest( objectUnderTest );\n    } else {\n      return new RuntimeTestResultSummaryImpl( new ClusterRuntimeTestEntry( messageGetterFactory,\n        connectivityTestFactory.create( messageGetterFactory,\n          variables.environmentSubstitute( namedCluster.decodePassword( namedCluster.getGatewayUrl() ) ), TEST_PATH,\n          variables.environmentSubstitute( namedCluster.getGatewayUsername() ),\n          variables.environmentSubstitute( namedCluster.decodePassword( namedCluster.getGatewayPassword() ) ) )\n          .runTest(), ClusterRuntimeTestEntry.DocAnchor.CLUSTER_CONNECT_GATEWAY ) );\n    }\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/mr/PingJobTrackerTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.mr;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.big.data.impl.cluster.tests.Constants;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\nimport org.pentaho.runtime.test.test.impl.BaseRuntimeTest;\n\nimport java.util.HashSet;\n\n/**\n * Created by bryan on 8/14/15.\n */\npublic class PingJobTrackerTest extends BaseRuntimeTest {\n  public static final String JOB_TRACKER_PING_JOB_TRACKER_TEST =\n    \"jobTrackerPingJobTrackerTest\";\n  public static final String PING_JOB_TRACKER_TEST_NAME = \"PingJobTrackerTest.Name\";\n  private static final Class<?> PKG = PingJobTrackerTest.class;\n  protected final MessageGetterFactory messageGetterFactory;\n  private final MessageGetter messageGetter;\n  protected final ConnectivityTestFactory connectivityTestFactory;\n\n\n  public PingJobTrackerTest( MessageGetterFactory messageGetterFactory,\n                             ConnectivityTestFactory connectivityTestFactory ) {\n    super( NamedCluster.class, Constants.MAP_REDUCE, 
JOB_TRACKER_PING_JOB_TRACKER_TEST,\n      messageGetterFactory.create( PKG ).getMessage( PING_JOB_TRACKER_TEST_NAME ), new HashSet<String>() );\n    this.messageGetterFactory = messageGetterFactory;\n    this.messageGetter = messageGetterFactory.create( PKG );\n    this.connectivityTestFactory = connectivityTestFactory;\n  }\n\n  @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n    // Safe to cast as our accepts method will only return true for named clusters\n    NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n\n    // The connection information might be parameterized. Since we aren't tied to a transformation or job, in order to\n    // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n    // Here we try to resolve the parameters if we can:\n    Variables variables = new Variables();\n    variables.initializeVariablesFrom( null );\n\n    // The connectivity test (ping the name node) is not applicable for MapR clusters due to their native client, so\n    // just pass this test and move on\n    if ( namedCluster.isMapr() ) {\n      return new RuntimeTestResultSummaryImpl(\n        new ClusterRuntimeTestEntry( RuntimeTestEntrySeverity.INFO,\n          messageGetter.getMessage( \"PingJobTrackerTest.isMapr.Desc\" ),\n          messageGetter.getMessage( \"PingJobTrackerTest.isMapr.Message\" ), null\n        )\n      );\n    } else {\n      return new RuntimeTestResultSummaryImpl( new ClusterRuntimeTestEntry( messageGetterFactory, connectivityTestFactory\n        .create( messageGetterFactory,\n          variables.environmentSubstitute( namedCluster.getJobTrackerHost() ),\n          variables.environmentSubstitute( namedCluster.getJobTrackerPort() ), true )\n        .runTest(), ClusterRuntimeTestEntry.DocAnchor.CLUSTER_CONNECT ) );\n    }\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/oozie/GatewayPingOozieHostTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.oozie;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\n\n/**\n * Created by dstepanov on 27/04/17.\n */\npublic class GatewayPingOozieHostTest extends PingOozieHostTest {\n  private static final String TEST_PATH = \"/oozie/v1/admin/status\";\n\n  public GatewayPingOozieHostTest( MessageGetterFactory messageGetterFactory,\n                                   ConnectivityTestFactory connectivityTestFactory ) {\n    super( messageGetterFactory, connectivityTestFactory );\n  }\n\n  @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n    // Safe to cast as our accepts method will only return true for named clusters\n    NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n\n    // The connection information might be parameterized. 
Since we aren't tied to a transformation or job, in order to\n    // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n    // Here we try to resolve the parameters if we can:\n    Variables variables = new Variables();\n    variables.initializeVariablesFrom( null );\n\n    if ( !namedCluster.isUseGateway() ) {\n      return super.runTest( objectUnderTest );\n    } else {\n      return new RuntimeTestResultSummaryImpl( new ClusterRuntimeTestEntry( messageGetterFactory,\n        connectivityTestFactory.create( messageGetterFactory,\n          variables.environmentSubstitute( namedCluster.decodePassword( namedCluster.getGatewayUrl() ) ), TEST_PATH,\n          variables.environmentSubstitute( namedCluster.getGatewayUsername() ),\n          variables.environmentSubstitute( namedCluster.decodePassword( namedCluster.getGatewayPassword() ) ) )\n          .runTest(), ClusterRuntimeTestEntry.DocAnchor.OOZIE ) );\n    }\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/oozie/PingOozieHostTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.oozie;\n\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.big.data.impl.cluster.tests.Constants;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\nimport org.pentaho.runtime.test.test.impl.BaseRuntimeTest;\n\nimport java.net.MalformedURLException;\nimport java.net.URL;\nimport java.util.HashSet;\n\n/**\n * Created by bryan on 8/14/15.\n */\npublic class PingOozieHostTest extends BaseRuntimeTest {\n  public static final String OOZIE_PING_OOZIE_HOST_TEST =\n    \"ooziePingOozieHostTest\";\n  public static final String PING_OOZIE_HOST_TEST_NAME = \"PingOozieHostTest.Name\";\n  public static final String PING_OOZIE_HOST_TEST_MALFORMED_URL_DESC = \"PingOozieHostTest.MalformedUrl.Desc\";\n  public static final String PING_OOZIE_HOST_TEST_MALFORMED_URL_MESSAGE = \"PingOozieHostTest.MalformedUrl.Message\";\n  private static final Class<?> PKG = PingOozieHostTest.class;\n  protected final MessageGetterFactory messageGetterFactory;\n  protected final ConnectivityTestFactory connectivityTestFactory;\n  
private final MessageGetter messageGetter;\n\n  public PingOozieHostTest( MessageGetterFactory messageGetterFactory,\n                            ConnectivityTestFactory connectivityTestFactory ) {\n    super( NamedCluster.class, Constants.OOZIE, OOZIE_PING_OOZIE_HOST_TEST,\n      messageGetterFactory.create( PKG ).getMessage( PING_OOZIE_HOST_TEST_NAME ), new HashSet<String>() );\n    this.messageGetterFactory = messageGetterFactory;\n    this.messageGetter = messageGetterFactory.create( PKG );\n    this.connectivityTestFactory = connectivityTestFactory;\n  }\n\n  @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n    // Safe to cast as our accepts method will only return true for named clusters\n    NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n\n    // The connection information might be parameterized. Since we aren't tied to a transformation or job, in order to\n    // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n    // Here we try to resolve the parameters if we can:\n    Variables variables = new Variables();\n    variables.initializeVariablesFrom( null );\n\n    String oozieUrl = variables.environmentSubstitute( namedCluster.getOozieUrl() );\n    try {\n      URL url = new URL( oozieUrl );\n      return new RuntimeTestResultSummaryImpl(\n        new ClusterRuntimeTestEntry( messageGetterFactory, connectivityTestFactory.create(\n          messageGetterFactory, url.getHost(), String.valueOf( url.getPort() ), false ).runTest(),\n          ClusterRuntimeTestEntry.DocAnchor.OOZIE ) );\n    } catch ( MalformedURLException e ) {\n      return new RuntimeTestResultSummaryImpl(\n        new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.FATAL,\n          messageGetter.getMessage( PING_OOZIE_HOST_TEST_MALFORMED_URL_DESC ),\n          messageGetter.getMessage( PING_OOZIE_HOST_TEST_MALFORMED_URL_MESSAGE, oozieUrl ), e,\n          
ClusterRuntimeTestEntry.DocAnchor.OOZIE ) );\n    }\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/zookeeper/GatewayPingZookeeperEnsembleTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.zookeeper;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\n\n/**\n * Created by dstepanov on 27/04/17.\n */\npublic class GatewayPingZookeeperEnsembleTest extends PingZookeeperEnsembleTest {\n\n  public static final String GATEWAY_PING_ZOOKEEPER_NOT_SUPPORT_DESC =\n    \"GatewayPingZookeeperEnsembleTest.ZookeeperNotSupport.Desc\";\n  public static final String GATEWAY_PING_ZOOKEEPER_NOT_SUPPORT_MESSAGE =\n    \"GatewayPingZookeeperEnsembleTest.ZookeeperNotSupport.Message\";\n\n  public GatewayPingZookeeperEnsembleTest( MessageGetterFactory messageGetterFactory,\n                                           ConnectivityTestFactory connectivityTestFactory ) {\n    super( messageGetterFactory, connectivityTestFactory );\n  }\n\n  @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n    // Safe to cast as our accepts method will only return true for named clusters\n    NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n\n    // The connection information might be parameterized. 
Since we aren't tied to a transformation or job, in order to\n    // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n    // Here we try to resolve the parameters if we can:\n    Variables variables = new Variables();\n    variables.initializeVariablesFrom( null );\n\n    if ( !namedCluster.isUseGateway() ) {\n      return super.runTest( objectUnderTest );\n    } else {\n      return new RuntimeTestResultSummaryImpl(\n        new ClusterRuntimeTestEntry( RuntimeTestEntrySeverity.SKIPPED,\n          messageGetter.getMessage( GATEWAY_PING_ZOOKEEPER_NOT_SUPPORT_DESC ),\n          messageGetter.getMessage( GATEWAY_PING_ZOOKEEPER_NOT_SUPPORT_MESSAGE ), null\n        )\n      );\n    }\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/main/java/org/pentaho/big/data/impl/cluster/tests/zookeeper/PingZookeeperEnsembleTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.zookeeper;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.big.data.impl.cluster.tests.Constants;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\nimport org.pentaho.runtime.test.test.impl.BaseRuntimeTest;\nimport org.pentaho.runtime.test.test.impl.RuntimeTestResultEntryImpl;\n\nimport java.util.ArrayList;\nimport java.util.HashSet;\nimport java.util.List;\n\n/**\n * Created by bryan on 8/14/15.\n */\npublic class PingZookeeperEnsembleTest extends BaseRuntimeTest {\n  public static final String HADOOP_FILE_SYSTEM_PING_FILE_SYSTEM_ENTRY_POINT_TEST =\n    \"zookeeperPingZookeeperEnsembleTest\";\n  public static final String PING_ZOOKEEPER_ENSEMBLE_TEST_NAME = \"PingZookeeperEnsembleTest.Name\";\n  public static final String PING_ZOOKEEPER_ENSEMBLE_TEST_BLANK_HOST_DESC = \"PingZookeeperEnsembleTest.BlankHost.Desc\";\n  public static final String PING_ZOOKEEPER_ENSEMBLE_TEST_BLANK_HOST_MESSAGE =\n   
 \"PingZookeeperEnsembleTest.BlankHost.Message\";\n  public static final String PING_ZOOKEEPER_ENSEMBLE_TEST_BLANK_PORT_DESC = \"PingZookeeperEnsembleTest.BlankPort.Desc\";\n  public static final String PING_ZOOKEEPER_ENSEMBLE_TEST_BLANK_PORT_MESSAGE =\n    \"PingZookeeperEnsembleTest.BlankPort.Message\";\n  public static final String PING_ZOOKEEPER_ENSEMBLE_TEST_NO_NODES_SUCCEEDED_DESC =\n    \"PingZookeeperEnsembleTest.NoNodesSucceeded.Desc\";\n  public static final String PING_ZOOKEEPER_ENSEMBLE_TEST_NO_NODES_SUCCEEDED_MESSAGE =\n    \"PingZookeeperEnsembleTest.NoNodesSucceeded.Message\";\n  public static final String PING_ZOOKEEPER_ENSEMBLE_TEST_SOME_NODES_FAILED_DESC =\n    \"PingZookeeperEnsembleTest.SomeNodesFailed.Desc\";\n  public static final String PING_ZOOKEEPER_ENSEMBLE_TEST_SOME_NODES_FAILED_MESSAGE =\n    \"PingZookeeperEnsembleTest.SomeNodesFailed.Message\";\n  public static final String PING_ZOOKEEPER_ENSEMBLE_TEST_ALL_NODES_SUCCEEDED_DESC =\n    \"PingZookeeperEnsembleTest.AllNodesSucceeded.Desc\";\n  public static final String PING_ZOOKEEPER_ENSEMBLE_TEST_ALL_NODES_SUCCEEDED_MESSAGE =\n    \"PingZookeeperEnsembleTest.AllNodesSucceeded.Message\";\n  private static final Class<?> PKG = PingZookeeperEnsembleTest.class;\n  private final MessageGetterFactory messageGetterFactory;\n  protected final MessageGetter messageGetter;\n  private final ConnectivityTestFactory connectivityTestFactory;\n\n  public PingZookeeperEnsembleTest( MessageGetterFactory messageGetterFactory,\n                                    ConnectivityTestFactory connectivityTestFactory ) {\n    super( NamedCluster.class, Constants.ZOOKEEPER, HADOOP_FILE_SYSTEM_PING_FILE_SYSTEM_ENTRY_POINT_TEST,\n      messageGetterFactory.create( PKG ).getMessage( PING_ZOOKEEPER_ENSEMBLE_TEST_NAME ), new HashSet<String>() );\n    this.messageGetterFactory = messageGetterFactory;\n    this.connectivityTestFactory = connectivityTestFactory;\n    messageGetter = messageGetterFactory.create( PKG );\n  
}\n\n  @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n    // Safe to cast as our accepts method will only return true for named clusters\n    NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n\n    // The connection information might be parameterized. Since we aren't tied to a transformation or job, in order to\n    // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n    // Here we try to resolve the parameters if we can:\n    Variables variables = new Variables();\n    variables.initializeVariablesFrom( null );\n\n    String zooKeeperHost = variables.environmentSubstitute( namedCluster.getZooKeeperHost() );\n    String zooKeeperPort = variables.environmentSubstitute( namedCluster.getZooKeeperPort() );\n    if ( Const.isEmpty( zooKeeperHost ) ) {\n      return new RuntimeTestResultSummaryImpl(\n        new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.FATAL,\n          messageGetter.getMessage( PING_ZOOKEEPER_ENSEMBLE_TEST_BLANK_HOST_DESC ),\n          messageGetter.getMessage( PING_ZOOKEEPER_ENSEMBLE_TEST_BLANK_HOST_MESSAGE ),\n          ClusterRuntimeTestEntry.DocAnchor.ZOOKEEPER ) );\n    } else if ( Const.isEmpty( zooKeeperPort ) ) {\n      return new RuntimeTestResultSummaryImpl(\n        new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.FATAL,\n          messageGetter.getMessage( PING_ZOOKEEPER_ENSEMBLE_TEST_BLANK_PORT_DESC ),\n          messageGetter.getMessage( PING_ZOOKEEPER_ENSEMBLE_TEST_BLANK_PORT_MESSAGE ),\n          ClusterRuntimeTestEntry.DocAnchor.ZOOKEEPER ) );\n    } else {\n      String[] quorum = zooKeeperHost.split( \",\" );\n      List<RuntimeTestResultEntry> clusterTestResultEntries = new ArrayList<>();\n      int failedNodes = 0;\n      StringBuilder failedNodeString = new StringBuilder();\n      for ( String node : quorum ) {\n        RuntimeTestResultEntry nodeResults = new 
ClusterRuntimeTestEntry( messageGetterFactory, connectivityTestFactory\n          .create( messageGetterFactory, node, zooKeeperPort, false, RuntimeTestEntrySeverity.WARNING ).runTest(),\n          ClusterRuntimeTestEntry.DocAnchor.ZOOKEEPER );\n        if ( nodeResults.getSeverity() == RuntimeTestEntrySeverity.WARNING ) {\n          failedNodeString.append( node ).append( \", \" );\n          failedNodes++;\n        }\n        clusterTestResultEntries.add( nodeResults );\n      }\n      if ( failedNodes > 0 ) {\n        failedNodeString.setLength( failedNodeString.length() - 2 );\n      }\n      RuntimeTestResultEntryImpl overallResult;\n      if ( failedNodes == quorum.length ) {\n        overallResult = new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.FATAL,\n          messageGetter.getMessage( PING_ZOOKEEPER_ENSEMBLE_TEST_NO_NODES_SUCCEEDED_DESC ),\n          messageGetter\n            .getMessage( PING_ZOOKEEPER_ENSEMBLE_TEST_NO_NODES_SUCCEEDED_MESSAGE, failedNodeString.toString() ),\n          ClusterRuntimeTestEntry.DocAnchor.ZOOKEEPER );\n      } else if ( failedNodes > 0 ) {\n        overallResult = new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n          messageGetter.getMessage( PING_ZOOKEEPER_ENSEMBLE_TEST_SOME_NODES_FAILED_DESC ),\n          messageGetter\n            .getMessage( PING_ZOOKEEPER_ENSEMBLE_TEST_SOME_NODES_FAILED_MESSAGE, failedNodeString.toString() ),\n          ClusterRuntimeTestEntry.DocAnchor.ZOOKEEPER );\n      } else {\n        overallResult = new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.INFO,\n          messageGetter.getMessage( PING_ZOOKEEPER_ENSEMBLE_TEST_ALL_NODES_SUCCEEDED_DESC ),\n          messageGetter.getMessage( PING_ZOOKEEPER_ENSEMBLE_TEST_ALL_NODES_SUCCEEDED_MESSAGE ),\n          ClusterRuntimeTestEntry.DocAnchor.ZOOKEEPER );\n      }\n      return new RuntimeTestResultSummaryImpl( overallResult, clusterTestResultEntries );\n 
   }\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/main/resources/OSGI-INF/blueprint/blueprint.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\n           xsi:schemaLocation=\"http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\">\n  <bean id=\"pingFileSystemEntryPointTest\"\n        class=\"org.pentaho.big.data.impl.cluster.tests.hdfs.GatewayPingFileSystemEntryPoint\" scope=\"singleton\">\n    <argument ref=\"messageGetterFactory\"/>\n    <argument ref=\"connectivityTestFactory\"/>\n  </bean>\n  <bean id=\"pingJobTrackerTest\" class=\"org.pentaho.big.data.impl.cluster.tests.mr.GatewayPingJobTrackerTest\"\n        scope=\"singleton\">\n    <argument ref=\"messageGetterFactory\"/>\n    <argument ref=\"connectivityTestFactory\"/>\n  </bean>\n  <bean id=\"pingOozieHostTest\" class=\"org.pentaho.big.data.impl.cluster.tests.oozie.GatewayPingOozieHostTest\"\n        scope=\"singleton\">\n    <argument ref=\"messageGetterFactory\"/>\n    <argument ref=\"connectivityTestFactory\"/>\n  </bean>\n  <bean id=\"pingZookeeperEnsembleTest\"\n        class=\"org.pentaho.big.data.impl.cluster.tests.zookeeper.GatewayPingZookeeperEnsembleTest\" scope=\"singleton\">\n    <argument ref=\"messageGetterFactory\"/>\n    <argument ref=\"connectivityTestFactory\"/>\n  </bean>\n\n  <bean id=\"listRootDirectoryTest\" class=\"org.pentaho.big.data.impl.cluster.tests.hdfs.GatewayListRootDirectoryTest\"\n        scope=\"singleton\">\n    <argument ref=\"messageGetterFactory\"/>\n    <argument ref=\"connectivityTestFactory\"/>\n    <argument ref=\"hadoopFileSystemLocator\"/>\n  </bean>\n  <bean id=\"listHomeDirectoryTest\" class=\"org.pentaho.big.data.impl.cluster.tests.hdfs.GatewayListHomeDirectoryTest\"\n        scope=\"singleton\">\n    <argument ref=\"messageGetterFactory\"/>\n    <argument ref=\"connectivityTestFactory\"/>\n    <argument ref=\"hadoopFileSystemLocator\"/>\n  </bean>\n  <bean 
id=\"writeToAndDeleteFromUsersHomeFolderTest\"\n        class=\"org.pentaho.big.data.impl.cluster.tests.hdfs.GatewayWriteToAndDeleteFromUsersHomeFolderTest\" scope=\"singleton\">\n    <argument ref=\"messageGetterFactory\"/>\n    <argument ref=\"hadoopFileSystemLocator\"/>\n  </bean>\n  <bean id=\"kafkaConnectTest\" class=\"org.pentaho.big.data.impl.cluster.tests.kafka.KafkaConnectTest\"\n        scope=\"singleton\">\n    <argument ref=\"messageGetterFactory\"/>\n    <argument ref=\"namedClusterServiceLocator\"/>\n  </bean>\n\n  <reference id=\"hadoopFileSystemLocator\" interface=\"org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator\"/>\n  <reference id=\"connectivityTestFactory\" interface=\"org.pentaho.runtime.test.network.ConnectivityTestFactory\"/>\n  <reference id=\"messageGetterFactory\" interface=\"org.pentaho.runtime.test.i18n.MessageGetterFactory\"/>\n  <reference id=\"namedClusterServiceLocator\" interface=\"org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator\"/>\n\n  <service ref=\"pingFileSystemEntryPointTest\" interface=\"org.pentaho.runtime.test.RuntimeTest\"/>\n  <service ref=\"pingJobTrackerTest\" interface=\"org.pentaho.runtime.test.RuntimeTest\"/>\n  <service ref=\"pingOozieHostTest\" interface=\"org.pentaho.runtime.test.RuntimeTest\"/>\n  <service ref=\"pingZookeeperEnsembleTest\" interface=\"org.pentaho.runtime.test.RuntimeTest\"/>\n  <service ref=\"listRootDirectoryTest\" interface=\"org.pentaho.runtime.test.RuntimeTest\"/>\n  <service ref=\"listHomeDirectoryTest\" interface=\"org.pentaho.runtime.test.RuntimeTest\"/>\n  <service ref=\"writeToAndDeleteFromUsersHomeFolderTest\" interface=\"org.pentaho.runtime.test.RuntimeTest\"/>\n  <service ref=\"kafkaConnectTest\" interface=\"org.pentaho.runtime.test.RuntimeTest\"/>\n</blueprint>"
  },
  {
    "path": "impl/clusterTests/src/main/resources/org/pentaho/big/data/impl/cluster/tests/hdfs/messages/messages_en_US.properties",
    "content": "ListDirectoryTest.CouldntGetFileSystem.Desc=Unable to access the directory.\nListDirectoryTest.CouldntGetFileSystem.Message=Could not access this directory: {0}.\nListDirectoryTest.Success.Desc=Successfully read directory contents.\nListDirectoryTest.Success.Message=Successfully read the contents of {0} directory.\nListDirectoryTest.AccessControlException.Desc=User does not have permission to read directory contents.\nListDirectoryTest.AccessControlException.Message=Could not read the contents of {0} directory.  User does not have read permission for this directory.\nListDirectoryTest.ErrorListingDirectory.Desc=Could not read directory contents.\nListDirectoryTest.ErrorListingDirectory.Message=Could not read the contents of {0} directory.\nListDirectoryTest.ErrorInitializingCluster.Desc=Could not initialize the cluster.\nListDirectoryTest.ErrorInitializingCluster.Message=Unable to initialize this cluster: {0}.  Verify that the shim is configured properly.\n\nPingFileSystemEntryPointTest.Name=Hadoop File System Connection\nPingFileSystemEntryPointTest.isMapr.Desc=Test not applicable for MapR clusters\nPingFileSystemEntryPointTest.isMapr.Message=The Namenode connectivity test is not applicable to MapR clusters as they use a native client to connect.\nListRootDirectoryTest.Name=Root Directory Access\nListHomeDirectoryTest.Name=User Home Directory Access\n\nWriteToAndDeleteFromUsersHomeFolderTest.Name=Verify User Home Permissions\nWriteToAndDeleteFromUsersHomeFolderTest.CouldntGetFileSystem.Desc=File system not found.\nWriteToAndDeleteFromUsersHomeFolderTest.CouldntGetFileSystem.Message=We cannot access the file system on this cluster: {0}. 
Verify the path to the user home directory on the cluster.\nWriteToAndDeleteFromUsersHomeFolderTest.FileExists.Desc=Test file already exists in the user's home directory.\nWriteToAndDeleteFromUsersHomeFolderTest.FileExists.Message={0} already exists in the {1} directory.\nWriteToAndDeleteFromUsersHomeFolderTest.Success.Desc=User has write and delete permissions for their home directory.\nWriteToAndDeleteFromUsersHomeFolderTest.Success.Message=User has write and delete permissions for {0} directory.\nWriteToAndDeleteFromUsersHomeFolderTest.UnableToDelete.Desc=Unable to delete the test file from the user's home directory.\nWriteToAndDeleteFromUsersHomeFolderTest.UnableToDelete.Message=Unable to delete {0} from {1} directory.\nWriteToAndDeleteFromUsersHomeFolderTest.ErrorCheckingIfFileExists.Desc=Unable to find the test file in the user's home directory.\nWriteToAndDeleteFromUsersHomeFolderTest.ErrorCheckingIfFileExists.Message=Unable to find {0} in the {1} directory.\nWriteToAndDeleteFromUsersHomeFolderTest.ErrorInitializingCluster.Desc=Unable to initialize the cluster.\nWriteToAndDeleteFromUsersHomeFolderTest.ErrorInitializingCluster.Message=Unable to initialize the cluster {0}.  Verify the shim was configured correctly.\nWriteToAndDeleteFromUsersHomeFolderTest.ErrorCreatingFile.Desc=Unable to create the test file in the user's home directory.\nWriteToAndDeleteFromUsersHomeFolderTest.ErrorCreatingFile.Message=Could not create {0} in the {1} directory. Verify that you have permission to create a file in the directory.\nWriteToAndDeleteFromUsersHomeFolderTest.ErrorWritingToFile.Desc=Could not write to the test file in the user's home directory.\nWriteToAndDeleteFromUsersHomeFolderTest.ErrorWritingToFile.Message=Could not write to {0} in the {1} directory. 
Verify that you have permission to write to a file in the directory.\nWriteToAndDeleteFromUsersHomeFolderTest.ErrorDeletingFile.Desc=Could not delete test file.\nWriteToAndDeleteFromUsersHomeFolderTest.ErrorDeletingFile.Message=Could not delete {0} from the {1} directory. Verify that you have permission to delete a file from the directory.\n"
  },
  {
    "path": "impl/clusterTests/src/main/resources/org/pentaho/big/data/impl/cluster/tests/kafka/messages/messages_en_US.properties",
    "content": "KafkaConnectTest.Name=Kafka Connection\nKafkaConnectTest.MalformedUrl.Desc=Unable to connect to Kafka.\nKafkaConnectTest.MalformedUrl.Message=We are unable to connect to Kafka at {0}.  Please verify the Kafka Bootstrap URL and network access.\nKafkaConnectTest.Success.Desc=Successfully connected to Kafka.\nKafkaConnectTest.Success.Message=Successfully connected to Kafka.\nKafkaConnectTest.Empty.Desc=This test was skipped because the Bootstrap server field is empty.\nKafkaConnectTest.Empty.Message=This test was skipped because the Bootstrap server field is empty.\n"
  },
  {
    "path": "impl/clusterTests/src/main/resources/org/pentaho/big/data/impl/cluster/tests/messages/messages_en_US.properties",
    "content": "RuntimeTestResultEntryWithDefaultShimHelp.TroubleshootingGuide=Learn more\nRuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc=Setup/Administration/Troubleshooting/Big_Data_Issues\nRuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.General=\nRuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.ShimLoad=#Shim_and_Configuration_Issues\nRuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.ClusterConnect=#Connection_Problems\nRuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.ClusterConnectGateway=#Connection_Problems\nRuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.AccessDirectory=#Directory_Access_or_Permissions_Issues\nRuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.Oozie=#Oozie_Issues\nRuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.Zookeeper=#Zookeeper_Problems\nRuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Anchor.Kafka=#Kafka_Problems\n\nRuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Title=Hadoop Cluster Test\nRuntimeTestResultEntryWithDefaultShimHelp.Shell.Doc.Header=Hadoop Cluster Test details\n"
  },
  {
    "path": "impl/clusterTests/src/main/resources/org/pentaho/big/data/impl/cluster/tests/mr/messages/messages_en_US.properties",
    "content": "PingJobTrackerTest.Name=Ping Job Tracker / Resource Manager\nPingJobTrackerTest.isMapr.Desc=Test not applicable for MapR clusters\nPingJobTrackerTest.isMapr.Message=The JobTracker / ResourceManager connectivity test is not applicable to MapR clusters as they use a native client to connect.\n"
  },
  {
    "path": "impl/clusterTests/src/main/resources/org/pentaho/big/data/impl/cluster/tests/oozie/messages/messages_en_US.properties",
    "content": "PingOozieHostTest.Name=Oozie Host Connection\nPingOozieHostTest.MalformedUrl.Desc=Unable to connect to Oozie.\nPingOozieHostTest.MalformedUrl.Message=We are unable to connect to Oozie at {0}.  Please verify the Oozie URL and network access.\n"
  },
  {
    "path": "impl/clusterTests/src/main/resources/org/pentaho/big/data/impl/cluster/tests/zookeeper/messages/messages_en_US.properties",
    "content": "PingZookeeperEnsembleTest.Name=Zookeeper Ensemble Connection\nPingZookeeperEnsembleTest.BlankHost.Desc=One or more Zookeeper hostnames were not set.\nPingZookeeperEnsembleTest.BlankHost.Message=Please specify at least one Zookeeper hostname.  Separate multiple hostnames with a comma.\nPingZookeeperEnsembleTest.BlankPort.Desc=The Zookeeper port was not set.\nPingZookeeperEnsembleTest.BlankPort.Message=Please specify the Zookeeper port.\nPingZookeeperEnsembleTest.NoNodesSucceeded.Desc=Unable to connect to the Zookeeper Ensemble.\nPingZookeeperEnsembleTest.NoNodesSucceeded.Message=Unable to connect to any Zookeeper host.  We tried to connect to these hosts: {0}.  Please verify Zookeeper settings.\nPingZookeeperEnsembleTest.AllNodesSucceeded.Desc=Connected to all Zookeeper nodes.\nPingZookeeperEnsembleTest.AllNodesSucceeded.Message=Successfully connected to all Zookeeper nodes for all ports.\nPingZookeeperEnsembleTest.SomeNodesFailed.Desc=Unable to connect to all Zookeeper nodes.\nPingZookeeperEnsembleTest.SomeNodesFailed.Message=Unable to connect to the following Zookeeper nodes: {0}. Please verify Zookeeper settings.\nGatewayPingZookeeperEnsembleTest.ZookeeperNotSupport.Desc=Access to Zookeeper services is not supported with Knox.\nGatewayPingZookeeperEnsembleTest.ZookeeperNotSupport.Message=Access to Zookeeper services is not supported with Knox."
  },
  {
    "path": "impl/clusterTests/src/test/java/org/pentaho/big/data/impl/cluster/tests/hdfs/ListDirectoryTestTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.hdfs;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\n\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.hadoop.shim.api.hdfs.exceptions.AccessControlException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileStatus;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystem;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemPath;\nimport org.pentaho.runtime.test.TestMessageGetterFactory;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\n\nimport java.io.IOException;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\nimport static org.pentaho.runtime.test.RuntimeTestEntryUtil.verifyRuntimeTestResultEntry;\n\n/**\n * Created by bryan on 8/21/15.\n */\npublic class ListDirectoryTestTest {\n  private TestMessageGetterFactory testMessageGetterFactory;\n  private MessageGetter messageGetter;\n  private HadoopFileSystemLocator hadoopFileSystemLocator;\n  private String directory;\n  private String id;\n  private String name;\n  private ListDirectoryTest listDirectoryTest;\n  private NamedCluster namedCluster;\n  private HadoopFileSystem hadoopFileSystem;\n  private String namedClusterName;\n  private 
HadoopFileSystemPath directoryPath;\n  private HadoopFileSystemPath homeDirectoryPath;\n\n  @Before\n  public void setup() throws ClusterInitializationException {\n    testMessageGetterFactory = new TestMessageGetterFactory();\n    messageGetter = testMessageGetterFactory.create( ListDirectoryTest.class );\n    hadoopFileSystemLocator = mock( HadoopFileSystemLocator.class );\n    directory = \"directory\";\n    id = \"id\";\n    name = \"name\";\n    namedCluster = mock( NamedCluster.class );\n    namedClusterName = \"namedCluster\";\n    when( namedCluster.getName() ).thenReturn( namedClusterName );\n    hadoopFileSystem = mock( HadoopFileSystem.class );\n    directoryPath = mock( HadoopFileSystemPath.class );\n    when( hadoopFileSystem.getPath( directory ) ).thenReturn( directoryPath );\n    homeDirectoryPath = mock( HadoopFileSystemPath.class );\n    when( hadoopFileSystem.getHomeDirectory() ).thenReturn( homeDirectoryPath );\n    when( hadoopFileSystemLocator.getHadoopFilesystem( namedCluster ) ).thenReturn( hadoopFileSystem );\n    init();\n  }\n\n  private void init() {\n    listDirectoryTest = new ListDirectoryTest( testMessageGetterFactory, hadoopFileSystemLocator, directory, id, name );\n  }\n\n  @Test\n  public void testNullHadoopFileSystem() {\n    hadoopFileSystemLocator = mock( HadoopFileSystemLocator.class );\n    init();\n    RuntimeTestResultSummary runtimeTestResultSummary = listDirectoryTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.FATAL,\n      messageGetter.getMessage( ListDirectoryTest.LIST_DIRECTORY_TEST_COULDNT_GET_FILE_SYSTEM_DESC ), messageGetter\n        .getMessage( ListDirectoryTest.LIST_DIRECTORY_TEST_COULDNT_GET_FILE_SYSTEM_MESSAGE, directory ) );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testClusterInitializationException() throws 
ClusterInitializationException {\n    hadoopFileSystemLocator = mock( HadoopFileSystemLocator.class );\n    when( hadoopFileSystemLocator.getHadoopFilesystem( namedCluster ) )\n      .thenThrow( new ClusterInitializationException( null ) );\n    init();\n    RuntimeTestResultSummary runtimeTestResultSummary = listDirectoryTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.FATAL,\n      messageGetter.getMessage( ListDirectoryTest.LIST_DIRECTORY_TEST_ERROR_INITIALIZING_CLUSTER_DESC ), messageGetter\n        .getMessage( ListDirectoryTest.LIST_DIRECTORY_TEST_ERROR_INITIALIZING_CLUSTER_MESSAGE, namedClusterName ),\n      ClusterInitializationException.class );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testAccessControlException() throws IOException {\n    when( hadoopFileSystem.listStatus( directoryPath ) ).thenThrow( new AccessControlException( null, null ) );\n    RuntimeTestResultSummary runtimeTestResultSummary = listDirectoryTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.WARNING,\n      messageGetter.getMessage( ListDirectoryTest.LIST_DIRECTORY_TEST_ACCESS_CONTROL_EXCEPTION_DESC ), messageGetter\n        .getMessage( ListDirectoryTest.LIST_DIRECTORY_TEST_ACCESS_CONTROL_EXCEPTION_MESSAGE, directoryPath.toString() ),\n      AccessControlException.class );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testIOException() throws IOException {\n    when( hadoopFileSystem.listStatus( directoryPath ) ).thenThrow( new IOException() );\n    RuntimeTestResultSummary runtimeTestResultSummary = listDirectoryTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      
RuntimeTestEntrySeverity.FATAL,\n      messageGetter.getMessage( ListDirectoryTest.LIST_DIRECTORY_TEST_ERROR_LISTING_DIRECTORY_DESC ), messageGetter\n        .getMessage( ListDirectoryTest.LIST_DIRECTORY_TEST_ERROR_LISTING_DIRECTORY_MESSAGE, directoryPath.toString() ),\n      IOException.class );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testHomeDirectorySuccess() throws IOException {\n    directory = \"\";\n    HadoopFileStatus hadoopFileStatus1 = mock( HadoopFileStatus.class );\n    HadoopFileStatus hadoopFileStatus2 = mock( HadoopFileStatus.class );\n    HadoopFileSystemPath hadoopFileSystemPath1 = mock( HadoopFileSystemPath.class );\n    HadoopFileSystemPath hadoopFileSystemPath2 = mock( HadoopFileSystemPath.class );\n    when( hadoopFileStatus1.getPath() ).thenReturn( hadoopFileSystemPath1 );\n    when( hadoopFileStatus2.getPath() ).thenReturn( hadoopFileSystemPath2 );\n    HadoopFileStatus[] hadoopFileStatuses = { hadoopFileStatus1, hadoopFileStatus2 };\n    when( hadoopFileSystem.listStatus( homeDirectoryPath ) ).thenReturn( hadoopFileStatuses );\n    init();\n    RuntimeTestResultSummary runtimeTestResultSummary = listDirectoryTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.INFO,\n      messageGetter.getMessage( ListDirectoryTest.LIST_DIRECTORY_TEST_SUCCESS_DESC ), messageGetter\n        .getMessage( ListDirectoryTest.LIST_DIRECTORY_TEST_SUCCESS_MESSAGE,\n          hadoopFileSystemPath1.toString() + \", \" + hadoopFileSystemPath2.toString() ) );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/test/java/org/pentaho/big/data/impl/cluster/tests/hdfs/ListHomeDirectoryTestTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.hdfs;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.runtime.test.TestMessageGetterFactory;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\n\n/**\n * Created by bryan on 8/24/15.\n */\npublic class ListHomeDirectoryTestTest {\n  private MessageGetterFactory messageGetterFactory;\n  private HadoopFileSystemLocator hadoopFileSystemLocator;\n  private ListHomeDirectoryTest listHomeDirectoryTest;\n  private MessageGetter messageGetter;\n\n  @Before\n  public void setup() {\n    messageGetterFactory = new TestMessageGetterFactory();\n    messageGetter = messageGetterFactory.create( ListHomeDirectoryTest.class );\n    hadoopFileSystemLocator = mock( HadoopFileSystemLocator.class );\n    listHomeDirectoryTest = new ListHomeDirectoryTest( messageGetterFactory, hadoopFileSystemLocator );\n  }\n\n  @Test\n  public void testGetName() {\n    assertEquals( messageGetter.getMessage( ListHomeDirectoryTest.LIST_HOME_DIRECTORY_TEST_NAME ),\n      listHomeDirectoryTest.getName() );\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/test/java/org/pentaho/big/data/impl/cluster/tests/hdfs/ListRootDirectoryTestTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.hdfs;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.runtime.test.TestMessageGetterFactory;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\n\n/**\n * Created by bryan on 8/24/15.\n */\npublic class ListRootDirectoryTestTest {\n  private MessageGetterFactory messageGetterFactory;\n  private HadoopFileSystemLocator hadoopFileSystemLocator;\n  private ListRootDirectoryTest listRootDirectoryTest;\n  private MessageGetter messageGetter;\n\n  @Before\n  public void setup() {\n    messageGetterFactory = new TestMessageGetterFactory();\n    messageGetter = messageGetterFactory.create( ListRootDirectoryTest.class );\n    hadoopFileSystemLocator = mock( HadoopFileSystemLocator.class );\n    listRootDirectoryTest = new ListRootDirectoryTest( messageGetterFactory, hadoopFileSystemLocator );\n  }\n\n  @Test\n  public void testGetName() {\n    assertEquals( messageGetter.getMessage( ListRootDirectoryTest.LIST_ROOT_DIRECTORY_TEST_NAME ),\n      listRootDirectoryTest.getName() );\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/test/java/org/pentaho/big/data/impl/cluster/tests/hdfs/PingFileSystemEntryPointTestTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.hdfs;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.runtime.test.TestMessageGetterFactory;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTest;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 8/24/15.\n */\npublic class PingFileSystemEntryPointTestTest {\n  private MessageGetterFactory messageGetterFactory;\n  private ConnectivityTestFactory connectivityTestFactory;\n  private PingFileSystemEntryPointTest fileSystemEntryPointTest;\n  private NamedCluster namedCluster;\n  private MessageGetter messageGetter;\n  private String hdfsHost;\n  private String hdfsPort;\n\n  @Before\n  public void setup() {\n    messageGetterFactory = new TestMessageGetterFactory();\n    messageGetter = messageGetterFactory.create( PingFileSystemEntryPointTest.class );\n    connectivityTestFactory = mock( ConnectivityTestFactory.class );\n    fileSystemEntryPointTest = new PingFileSystemEntryPointTest( messageGetterFactory, connectivityTestFactory );\n    hdfsHost 
= \"hdfsHost\";\n    hdfsPort = \"8025\";\n    namedCluster = mock( NamedCluster.class );\n    when( namedCluster.getHdfsHost() ).thenReturn( hdfsHost );\n    when( namedCluster.getHdfsPort() ).thenReturn( hdfsPort );\n    when( namedCluster.isMapr() ).thenReturn( false );\n  }\n\n  @Test\n  public void testGetName() {\n    assertEquals( messageGetter.getMessage( PingFileSystemEntryPointTest.PING_FILE_SYSTEM_ENTRY_POINT_TEST_NAME ),\n      fileSystemEntryPointTest.getName() );\n  }\n\n  @Test\n  public void testSuccess() {\n    RuntimeTestResultEntry results = mock( RuntimeTestResultEntry.class );\n    String testDescription = \"test-description\";\n    when( results.getDescription() ).thenReturn( testDescription );\n    ConnectivityTest connectivityTest = mock( ConnectivityTest.class );\n    when( connectivityTestFactory.create( messageGetterFactory, hdfsHost, hdfsPort, true ) )\n      .thenReturn( connectivityTest );\n    when( connectivityTest.runTest() ).thenReturn( results );\n    RuntimeTestResultSummary runtimeTestResultSummary = fileSystemEntryPointTest.runTest( namedCluster );\n    assertEquals( testDescription, runtimeTestResultSummary.getOverallStatusEntry().getDescription() );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testIsMapR() {\n    when( namedCluster.isMapr() ).thenReturn( true );\n    RuntimeTestResultSummary runtimeTestResultSummary = fileSystemEntryPointTest.runTest( namedCluster );\n    RuntimeTestResultEntry results = runtimeTestResultSummary.getOverallStatusEntry();\n    assertEquals( RuntimeTestEntrySeverity.INFO, results.getSeverity() );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/test/java/org/pentaho/big/data/impl/cluster/tests/hdfs/WriteToAndDeleteFromUsersHomeFolderTestTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.hdfs;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystem;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemPath;\nimport org.pentaho.runtime.test.TestMessageGetterFactory;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\n\nimport java.io.ByteArrayOutputStream;\nimport java.io.IOException;\nimport java.io.OutputStream;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Matchers.isA;\nimport static org.mockito.Mockito.doThrow;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\nimport static org.pentaho.runtime.test.RuntimeTestEntryUtil.verifyRuntimeTestResultEntry;\n\n/**\n * Created by bryan on 8/21/15.\n */\npublic class WriteToAndDeleteFromUsersHomeFolderTestTest {\n  private TestMessageGetterFactory messageGetterFactory;\n  private MessageGetter messageGetter;\n  private HadoopFileSystemLocator hadoopFileSystemLocator;\n  private WriteToAndDeleteFromUsersHomeFolderTest writeToAndDeleteFromUsersHomeFolderTest;\n  private NamedCluster namedCluster;\n  private String namedClusterName;\n  private HadoopFileSystem hadoopFileSystem;\n  private 
HadoopFileSystemPath hadoopFileSystemPath;\n  private HadoopFileSystemPath qualifiedPath;\n\n  @Before\n  public void setup() throws ClusterInitializationException {\n    messageGetterFactory = new TestMessageGetterFactory();\n    messageGetter = messageGetterFactory.create( WriteToAndDeleteFromUsersHomeFolderTest.class );\n    hadoopFileSystemLocator = mock( HadoopFileSystemLocator.class );\n    namedCluster = mock( NamedCluster.class );\n    namedClusterName = \"namedClusterName\";\n    when( namedCluster.getName() ).thenReturn( namedClusterName );\n    hadoopFileSystem = mock( HadoopFileSystem.class );\n    when( hadoopFileSystemLocator.getHadoopFilesystem( namedCluster ) ).thenReturn( hadoopFileSystem );\n    hadoopFileSystemPath = mock( HadoopFileSystemPath.class );\n    when( hadoopFileSystem.getPath( WriteToAndDeleteFromUsersHomeFolderTest.PENTAHO_SHIM_TEST_FILE_TEST ) )\n      .thenReturn( hadoopFileSystemPath );\n    qualifiedPath = mock( HadoopFileSystemPath.class );\n    when( hadoopFileSystem.makeQualified( hadoopFileSystemPath ) ).thenReturn( qualifiedPath );\n    when( qualifiedPath.getName() ).thenReturn( WriteToAndDeleteFromUsersHomeFolderTest.PENTAHO_SHIM_TEST_FILE_TEST );\n    when( qualifiedPath.getPath() ).thenReturn( \"\" );\n    init();\n  }\n\n  private void init() {\n    writeToAndDeleteFromUsersHomeFolderTest =\n      new WriteToAndDeleteFromUsersHomeFolderTest( messageGetterFactory, hadoopFileSystemLocator );\n  }\n\n  @Test\n  public void testNullFileSystem() {\n    hadoopFileSystemLocator = mock( HadoopFileSystemLocator.class );\n    init();\n    RuntimeTestResultSummary runtimeTestResultSummary = writeToAndDeleteFromUsersHomeFolderTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.FATAL, messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          
.WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_COULDNT_GET_FILE_SYSTEM_DESC ),\n      messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_COULDNT_GET_FILE_SYSTEM_MESSAGE,\n        namedClusterName ) );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testClusterInitializationException() throws ClusterInitializationException {\n    hadoopFileSystemLocator = mock( HadoopFileSystemLocator.class );\n    when( hadoopFileSystemLocator.getHadoopFilesystem( namedCluster ) )\n      .thenThrow( new ClusterInitializationException( null ) );\n    init();\n    RuntimeTestResultSummary runtimeTestResultSummary = writeToAndDeleteFromUsersHomeFolderTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.FATAL, messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_INITIALIZING_CLUSTER_DESC ),\n      messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_INITIALIZING_CLUSTER_MESSAGE,\n        namedClusterName ), ClusterInitializationException.class );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testIOExceptionExists() throws ClusterInitializationException, IOException {\n    when( hadoopFileSystem.exists( hadoopFileSystemPath ) ).thenThrow( new IOException() );\n    RuntimeTestResultSummary runtimeTestResultSummary = writeToAndDeleteFromUsersHomeFolderTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.FATAL, messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          
.WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CHECKING_IF_FILE_EXISTS_DESC ),\n      messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CHECKING_IF_FILE_EXISTS_MESSAGE,\n        qualifiedPath.getName(), qualifiedPath.getPath() ), IOException.class );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testIOExceptionCreate() throws ClusterInitializationException, IOException {\n    when( hadoopFileSystem.create( hadoopFileSystemPath ) ).thenThrow( new IOException() );\n    RuntimeTestResultSummary runtimeTestResultSummary = writeToAndDeleteFromUsersHomeFolderTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.WARNING, messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CREATING_FILE_DESC ),\n      messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_CREATING_FILE_MESSAGE,\n        qualifiedPath.getName(), qualifiedPath.getPath() ), IOException.class );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testIOExceptionWrite() throws ClusterInitializationException, IOException {\n    OutputStream outputStream = mock( OutputStream.class );\n    when( hadoopFileSystem.create( hadoopFileSystemPath ) ).thenReturn( outputStream );\n    when( hadoopFileSystem.delete( hadoopFileSystemPath, false ) ).thenReturn( true );\n    doThrow( new IOException() ).when( outputStream ).write( isA( byte[].class ) );\n    RuntimeTestResultSummary runtimeTestResultSummary = writeToAndDeleteFromUsersHomeFolderTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( 
runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.WARNING, messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_TO_FILE_DESC ),\n      messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_WRITING_TO_FILE_MESSAGE,\n        qualifiedPath.getName(), qualifiedPath.getPath() ), IOException.class );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testIOExceptionDelete() throws ClusterInitializationException, IOException {\n    OutputStream outputStream = mock( OutputStream.class );\n    when( hadoopFileSystem.create( hadoopFileSystemPath ) ).thenReturn( outputStream );\n    when( hadoopFileSystem.delete( hadoopFileSystemPath, false ) ).thenThrow( new IOException() );\n    RuntimeTestResultSummary runtimeTestResultSummary = writeToAndDeleteFromUsersHomeFolderTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.WARNING, messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_DELETING_FILE_DESC ),\n      messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_ERROR_DELETING_FILE_MESSAGE,\n        qualifiedPath.getName(), qualifiedPath.getPath() ), IOException.class );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testPathExists() throws IOException {\n    when( hadoopFileSystem.exists( hadoopFileSystemPath ) ).thenReturn( true );\n    RuntimeTestResultSummary runtimeTestResultSummary = writeToAndDeleteFromUsersHomeFolderTest.runTest( namedCluster );\n    
verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.WARNING, messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_FILE_EXISTS_DESC ),\n      messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_FILE_EXISTS_MESSAGE,\n        qualifiedPath.getName(), qualifiedPath.getPath() ) );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testUnableToDelete() throws IOException {\n    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();\n    when( hadoopFileSystem.create( hadoopFileSystemPath ) ).thenReturn( byteArrayOutputStream );\n    when( hadoopFileSystem.delete( hadoopFileSystemPath, false ) ).thenReturn( false );\n    RuntimeTestResultSummary runtimeTestResultSummary = writeToAndDeleteFromUsersHomeFolderTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.WARNING, messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_UNABLE_TO_DELETE_DESC ),\n      messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_UNABLE_TO_DELETE_MESSAGE,\n        qualifiedPath.getName(), qualifiedPath.getPath() ) );\n    assertEquals( WriteToAndDeleteFromUsersHomeFolderTest.HELLO_CLUSTER,\n      byteArrayOutputStream.toString( WriteToAndDeleteFromUsersHomeFolderTest.UTF8.name() ) );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testSuccess() throws IOException {\n    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();\n    when( 
hadoopFileSystem.create( hadoopFileSystemPath ) ).thenReturn( byteArrayOutputStream );\n    when( hadoopFileSystem.delete( hadoopFileSystemPath, false ) ).thenReturn( true );\n    RuntimeTestResultSummary runtimeTestResultSummary = writeToAndDeleteFromUsersHomeFolderTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.INFO, messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_SUCCESS_DESC ),\n      messageGetter.getMessage(\n        WriteToAndDeleteFromUsersHomeFolderTest\n          .WRITE_TO_AND_DELETE_FROM_USERS_HOME_FOLDER_TEST_SUCCESS_MESSAGE,\n        qualifiedPath.toString() ) );\n    assertEquals( WriteToAndDeleteFromUsersHomeFolderTest.HELLO_CLUSTER,\n      byteArrayOutputStream.toString( WriteToAndDeleteFromUsersHomeFolderTest.UTF8.name() ) );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/test/java/org/pentaho/big/data/impl/cluster/tests/kafka/KafkaConnectTestTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.impl.cluster.tests.kafka;\n\nimport org.apache.kafka.clients.consumer.Consumer;\nimport org.apache.kafka.common.KafkaException;\nimport org.apache.kafka.common.config.SaslConfigs;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mock;\nimport org.mockito.runners.MockitoJUnitRunner;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.hadoop.shim.api.jaas.JaasConfigService;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\n\nimport java.util.Collections;\nimport java.util.Map;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Matchers.anyString;\nimport static org.mockito.Matchers.eq;\nimport static org.mockito.Mockito.never;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n@RunWith( MockitoJUnitRunner.class )\npublic class KafkaConnectTestTest {\n  @Mock Consumer consumer;\n  @Mock NamedClusterServiceLocator namedClusterServiceLocator;\n  @Mock NamedCluster namedCluster;\n  @Mock MessageGetter messageGetter;\n  @Mock MessageGetterFactory messageGetterFactory;\n  @Mock JaasConfigService jaasConfigService;\n\n  @Before\n  public void setUp() throws 
Exception {\n    when( messageGetterFactory.create( KafkaConnectTest.PKG ) ).thenReturn( messageGetter );\n    when( messageGetterFactory.create( ClusterRuntimeTestEntry.class ) ).thenReturn( messageGetter );\n  }\n\n  @Test\n  public void testSuccess() throws Exception {\n    when( consumer.listTopics() ).thenReturn( Collections.emptyMap() );\n    when( namedCluster.getKafkaBootstrapServers() ).thenReturn( \"kafkaHost:9092\" );\n    when( messageGetter.getMessage( anyString() ) ).thenReturn( \"success message\" );\n    KafkaConnectTest kafkaConnectTest =\n      new KafkaConnectTest( messageGetterFactory, (map) -> consumer, namedClusterServiceLocator );\n    RuntimeTestResultSummary summary = kafkaConnectTest.runTest( namedCluster );\n    assertEquals( RuntimeTestEntrySeverity.INFO, summary.getOverallStatusEntry().getSeverity() );\n    assertEquals( \"success message\", summary.getOverallStatusEntry().getMessage() );\n  }\n\n  @Test\n  public void testSuccessKerberos() throws Exception {\n    when( consumer.listTopics() ).thenReturn( Collections.emptyMap() );\n    when( namedCluster.getKafkaBootstrapServers() ).thenReturn( \"kafkaHost:9092\" );\n    when( messageGetter.getMessage( anyString() ) ).thenReturn( \"success message\" );\n    when( namedClusterServiceLocator.getService( namedCluster, JaasConfigService.class ) )\n      .thenReturn( jaasConfigService );\n    when( jaasConfigService.isKerberos() ).thenReturn( true );\n    when( jaasConfigService.getJaasConfig() ).thenReturn( \"pretend-jaas-config\" );\n    KafkaConnectTest kafkaConnectTest =\n      new KafkaConnectTest( messageGetterFactory, this::assertConsumer, namedClusterServiceLocator );\n    RuntimeTestResultSummary summary = kafkaConnectTest.runTest( namedCluster );\n    assertEquals( RuntimeTestEntrySeverity.INFO, summary.getOverallStatusEntry().getSeverity() );\n    assertEquals( \"success message\", summary.getOverallStatusEntry().getMessage() );\n  }\n\n  @Test\n  public void testError() throws 
Exception {\n    when( consumer.listTopics() ).thenThrow( new KafkaException( \"oops\" ) );\n    when( namedCluster.getKafkaBootstrapServers() ).thenReturn( \"kafkaHost:9092\" );\n    when( messageGetter.getMessage( anyString(), eq( \"kafkaHost:9092\" ) ) ).thenReturn( \"error message\" );\n    KafkaConnectTest kafkaConnectTest =\n      new KafkaConnectTest( messageGetterFactory, (map) -> consumer, namedClusterServiceLocator );\n    RuntimeTestResultSummary summary = kafkaConnectTest.runTest( namedCluster );\n    assertEquals( RuntimeTestEntrySeverity.ERROR, summary.getOverallStatusEntry().getSeverity() );\n    assertEquals( \"error message\", summary.getOverallStatusEntry().getMessage() );\n  }\n\n  @Test\n  public void testSkip() throws Exception {\n    when( namedCluster.getKafkaBootstrapServers() ).thenReturn( \"  \" );\n    when( messageGetter.getMessage( anyString() ) ).thenReturn( \"skipped message\" );\n    KafkaConnectTest kafkaConnectTest =\n      new KafkaConnectTest( messageGetterFactory, (map) -> consumer, namedClusterServiceLocator );\n    RuntimeTestResultSummary summary = kafkaConnectTest.runTest( namedCluster );\n    assertEquals( RuntimeTestEntrySeverity.SKIPPED, summary.getOverallStatusEntry().getSeverity() );\n    assertEquals( \"skipped message\", summary.getOverallStatusEntry().getMessage() );\n    verify( consumer, never() ).listTopics();\n  }\n\n  private Consumer assertConsumer( Map<String, Object> actualMap ) {\n    assertEquals( \"pretend-jaas-config\", actualMap.get( SaslConfigs.SASL_JAAS_CONFIG ) );\n    return consumer;\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/test/java/org/pentaho/big/data/impl/cluster/tests/mr/PingJobTrackerTestTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.mr;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.runtime.test.TestMessageGetterFactory;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTest;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 8/24/15.\n */\npublic class PingJobTrackerTestTest {\n  private MessageGetterFactory messageGetterFactory;\n  private ConnectivityTestFactory connectivityTestFactory;\n  private PingJobTrackerTest pingJobTrackerTest;\n  private NamedCluster namedCluster;\n  private MessageGetter messageGetter;\n  private String jobTrackerHost;\n  private String jobTrackerPort;\n\n  @Before\n  public void setup() {\n    messageGetterFactory = new TestMessageGetterFactory();\n    messageGetter = messageGetterFactory.create( PingJobTrackerTest.class );\n    connectivityTestFactory = mock( ConnectivityTestFactory.class );\n    pingJobTrackerTest = new PingJobTrackerTest( messageGetterFactory, connectivityTestFactory );\n    jobTrackerHost = \"jobTrackerHost\";\n    
jobTrackerPort = \"829\";\n    namedCluster = mock( NamedCluster.class );\n    when( namedCluster.getJobTrackerHost() ).thenReturn( jobTrackerHost );\n    when( namedCluster.getJobTrackerPort() ).thenReturn( jobTrackerPort );\n  }\n\n  @Test\n  public void testGetName() {\n    assertEquals( messageGetter.getMessage( PingJobTrackerTest.PING_JOB_TRACKER_TEST_NAME ),\n      pingJobTrackerTest.getName() );\n  }\n\n  @Test\n  public void testSuccess() {\n    RuntimeTestResultEntry results = mock( RuntimeTestResultEntry.class );\n    String testDescription = \"test-description\";\n    when( results.getDescription() ).thenReturn( testDescription );\n    ConnectivityTest connectivityTest = mock( ConnectivityTest.class );\n    when( connectivityTestFactory.create( messageGetterFactory, jobTrackerHost, jobTrackerPort, true ) )\n      .thenReturn( connectivityTest );\n    when( connectivityTest.runTest() ).thenReturn( results );\n    RuntimeTestResultSummary runtimeTestResultSummary = pingJobTrackerTest.runTest( namedCluster );\n    assertEquals( testDescription, runtimeTestResultSummary.getOverallStatusEntry().getDescription() );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testIsMapR() {\n    when( namedCluster.isMapr() ).thenReturn( true );\n    RuntimeTestResultSummary runtimeTestResultSummary = pingJobTrackerTest.runTest( namedCluster );\n    RuntimeTestResultEntry results = runtimeTestResultSummary.getOverallStatusEntry();\n    assertEquals( RuntimeTestEntrySeverity.INFO, results.getSeverity() );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/test/java/org/pentaho/big/data/impl/cluster/tests/oozie/PingOozieHostTestTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.oozie;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.runtime.test.TestMessageGetterFactory;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTest;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\n\nimport java.net.MalformedURLException;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\nimport static org.pentaho.runtime.test.RuntimeTestEntryUtil.verifyRuntimeTestResultEntry;\n\n/**\n * Created by bryan on 8/24/15.\n */\npublic class PingOozieHostTestTest {\n  private MessageGetterFactory messageGetterFactory;\n  private ConnectivityTestFactory connectivityTestFactory;\n  private PingOozieHostTest pingOozieHostTest;\n  private NamedCluster namedCluster;\n  private MessageGetter messageGetter;\n  private String oozieUrl;\n  private String oozieHost;\n  private String ooziePort;\n\n  @Before\n  public void setup() {\n    messageGetterFactory = new TestMessageGetterFactory();\n    messageGetter = messageGetterFactory.create( PingOozieHostTest.class );\n    connectivityTestFactory = mock( ConnectivityTestFactory.class );\n    
pingOozieHostTest = new PingOozieHostTest( messageGetterFactory, connectivityTestFactory );\n    oozieHost = \"oozieHost\";\n    ooziePort = \"8080\";\n    oozieUrl = \"http://\" + oozieHost + \":\" + ooziePort + \"/oozie\";\n    namedCluster = mock( NamedCluster.class );\n    when( namedCluster.getOozieUrl() ).thenReturn( oozieUrl );\n  }\n\n  @Test\n  public void testGetName() {\n    assertEquals( messageGetter.getMessage( PingOozieHostTest.PING_OOZIE_HOST_TEST_NAME ),\n      pingOozieHostTest.getName() );\n  }\n\n  @Test\n  public void testMalformedURLException() {\n    oozieUrl = \"one-malformed-url\";\n    namedCluster = mock( NamedCluster.class );\n    when( namedCluster.getOozieUrl() ).thenReturn( oozieUrl );\n    RuntimeTestResultSummary runtimeTestResultSummary = pingOozieHostTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.FATAL,\n      messageGetter.getMessage( PingOozieHostTest.PING_OOZIE_HOST_TEST_MALFORMED_URL_DESC ),\n      messageGetter.getMessage( PingOozieHostTest.PING_OOZIE_HOST_TEST_MALFORMED_URL_MESSAGE, oozieUrl ),\n      MalformedURLException.class );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testSuccess() {\n    RuntimeTestResultEntry results = mock( RuntimeTestResultEntry.class );\n    String testDescription = \"test-description\";\n    when( results.getDescription() ).thenReturn( testDescription );\n    ConnectivityTest connectivityTest = mock( ConnectivityTest.class );\n    when( connectivityTestFactory.create( messageGetterFactory, oozieHost, ooziePort, false ) )\n      .thenReturn( connectivityTest );\n    when( connectivityTest.runTest() ).thenReturn( results );\n    RuntimeTestResultSummary runtimeTestResultSummary = pingOozieHostTest.runTest( namedCluster );\n    assertEquals( testDescription, runtimeTestResultSummary.getOverallStatusEntry().getDescription() );\n 
   assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n}\n"
  },
  {
    "path": "impl/clusterTests/src/test/java/org/pentaho/big/data/impl/cluster/tests/zookeeper/PingZookeeperEnsembleTestTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.cluster.tests.zookeeper;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.runtime.test.TestMessageGetterFactory;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.network.ConnectivityTest;\nimport org.pentaho.runtime.test.network.ConnectivityTestFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\n\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\nimport static org.pentaho.runtime.test.RuntimeTestEntryUtil.verifyRuntimeTestResultEntry;\n\n/**\n * Created by bryan on 8/24/15.\n */\npublic class PingZookeeperEnsembleTestTest {\n  private MessageGetterFactory messageGetterFactory;\n  private ConnectivityTestFactory connectivityTestFactory;\n  private PingZookeeperEnsembleTest pingZookeeperEnsembleTest;\n  private NamedCluster namedCluster;\n  private String zookeeperHosts;\n  private String zookeeperPort;\n  private MessageGetter messageGetter;\n  private String host1;\n  private String host2;\n\n  @Before\n  public void setup() {\n    messageGetterFactory = new TestMessageGetterFactory();\n    messageGetter = messageGetterFactory.create( PingZookeeperEnsembleTest.class );\n    
connectivityTestFactory = mock( ConnectivityTestFactory.class );\n    pingZookeeperEnsembleTest = new PingZookeeperEnsembleTest( messageGetterFactory, connectivityTestFactory );\n    host1 = \"host1\";\n    host2 = \"host2\";\n    zookeeperHosts = host1 + \",\" + host2;\n    zookeeperPort = \"2181\";\n    namedCluster = mock( NamedCluster.class );\n    when( namedCluster.getZooKeeperHost() ).thenReturn( zookeeperHosts );\n    when( namedCluster.getZooKeeperPort() ).thenReturn( zookeeperPort );\n  }\n\n  @Test\n  public void testBlankHost() {\n    namedCluster = mock( NamedCluster.class );\n    RuntimeTestResultSummary runtimeTestResultSummary = pingZookeeperEnsembleTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.FATAL,\n      messageGetter.getMessage( PingZookeeperEnsembleTest.PING_ZOOKEEPER_ENSEMBLE_TEST_BLANK_HOST_DESC ),\n      messageGetter.getMessage( PingZookeeperEnsembleTest.PING_ZOOKEEPER_ENSEMBLE_TEST_BLANK_HOST_MESSAGE ) );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testBlankPort() {\n    namedCluster = mock( NamedCluster.class );\n    when( namedCluster.getZooKeeperHost() ).thenReturn( zookeeperHosts );\n    RuntimeTestResultSummary runtimeTestResultSummary = pingZookeeperEnsembleTest.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.FATAL,\n      messageGetter.getMessage( PingZookeeperEnsembleTest.PING_ZOOKEEPER_ENSEMBLE_TEST_BLANK_PORT_DESC ),\n      messageGetter.getMessage( PingZookeeperEnsembleTest.PING_ZOOKEEPER_ENSEMBLE_TEST_BLANK_PORT_MESSAGE ) );\n  }\n\n  @Test\n  public void testNoFailures() {\n    ConnectivityTest connectivityTest = mock( ConnectivityTest.class );\n    when( connectivityTestFactory\n      .create( messageGetterFactory, host1, zookeeperPort, false, 
RuntimeTestEntrySeverity.WARNING ) ).thenReturn(\n        connectivityTest );\n    when( connectivityTestFactory\n      .create( messageGetterFactory, host2, zookeeperPort, false, RuntimeTestEntrySeverity.WARNING ) ).thenReturn(\n        connectivityTest );\n    RuntimeTestResultEntry clusterTestResultEntry = mock( RuntimeTestResultEntry.class );\n    when( clusterTestResultEntry.getSeverity() ).thenReturn( RuntimeTestEntrySeverity.INFO );\n    when( connectivityTest.runTest() ).thenReturn( clusterTestResultEntry );\n    String testDescription = \"test-description\";\n    when( clusterTestResultEntry.getDescription() ).thenReturn( testDescription );\n    RuntimeTestResultSummary runtimeTestResultSummary = pingZookeeperEnsembleTest.runTest( namedCluster );\n    List<RuntimeTestResultEntry> clusterTestResultEntries = runtimeTestResultSummary\n      .getRuntimeTestResultEntries();\n    assertEquals( 2, clusterTestResultEntries.size() );\n    assertEquals( testDescription, clusterTestResultEntries.get( 0 ).getDescription() );\n    assertEquals( testDescription, clusterTestResultEntries.get( 1 ).getDescription() );\n  }\n\n  @Test\n  public void testOneFailure() {\n    ConnectivityTest connectivityTest = mock( ConnectivityTest.class );\n    ConnectivityTest connectivityTest2 = mock( ConnectivityTest.class );\n    when( connectivityTestFactory\n      .create( messageGetterFactory, host1, zookeeperPort, false, RuntimeTestEntrySeverity.WARNING ) ).thenReturn(\n        connectivityTest );\n    when( connectivityTestFactory\n      .create( messageGetterFactory, host2, zookeeperPort, false, RuntimeTestEntrySeverity.WARNING ) ).thenReturn(\n        connectivityTest2 );\n    RuntimeTestResultEntry clusterTestResultEntry = mock( RuntimeTestResultEntry.class );\n    when( clusterTestResultEntry.getSeverity() ).thenReturn( RuntimeTestEntrySeverity.INFO );\n    when( connectivityTest.runTest() ).thenReturn( clusterTestResultEntry );\n    RuntimeTestResultEntry 
clusterTestResultEntry2 = mock( RuntimeTestResultEntry.class );\n    when( clusterTestResultEntry2.getSeverity() ).thenReturn( RuntimeTestEntrySeverity.WARNING );\n    when( connectivityTest2.runTest() ).thenReturn( clusterTestResultEntry2 );\n    String testDescription = \"test-description\";\n    when( clusterTestResultEntry.getDescription() ).thenReturn( testDescription );\n    String testDescription2 = \"test-description2\";\n    when( clusterTestResultEntry2.getDescription() ).thenReturn( testDescription2 );\n    RuntimeTestResultSummary runtimeTestResultSummary = pingZookeeperEnsembleTest.runTest( namedCluster );\n    List<RuntimeTestResultEntry> clusterTestResultEntries = runtimeTestResultSummary\n      .getRuntimeTestResultEntries();\n    assertEquals( 2, clusterTestResultEntries.size() );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(), RuntimeTestEntrySeverity.WARNING,\n      messageGetter.getMessage( PingZookeeperEnsembleTest.PING_ZOOKEEPER_ENSEMBLE_TEST_SOME_NODES_FAILED_DESC ),\n      messageGetter.getMessage( PingZookeeperEnsembleTest.PING_ZOOKEEPER_ENSEMBLE_TEST_SOME_NODES_FAILED_MESSAGE,\n        host2 ) );\n    assertEquals( testDescription, clusterTestResultEntries.get( 0 ).getDescription() );\n    assertEquals( testDescription2, clusterTestResultEntries.get( 1 ).getDescription() );\n  }\n\n  @Test\n  public void testAllFailures() {\n    ConnectivityTest connectivityTest = mock( ConnectivityTest.class );\n    ConnectivityTest connectivityTest2 = mock( ConnectivityTest.class );\n    when( connectivityTestFactory\n      .create( messageGetterFactory, host1, zookeeperPort, false, RuntimeTestEntrySeverity.WARNING ) ).thenReturn(\n        connectivityTest );\n    when( connectivityTestFactory\n      .create( messageGetterFactory, host2, zookeeperPort, false, RuntimeTestEntrySeverity.WARNING ) ).thenReturn(\n        connectivityTest2 );\n    RuntimeTestResultEntry clusterTestResultEntry = mock( 
RuntimeTestResultEntry.class );\n    when( clusterTestResultEntry.getSeverity() ).thenReturn( RuntimeTestEntrySeverity.WARNING );\n    when( connectivityTest.runTest() ).thenReturn( clusterTestResultEntry );\n    RuntimeTestResultEntry clusterTestResultEntry2 = mock( RuntimeTestResultEntry.class );\n    when( clusterTestResultEntry2.getSeverity() ).thenReturn( RuntimeTestEntrySeverity.WARNING );\n    when( connectivityTest2.runTest() ).thenReturn( clusterTestResultEntry2 );\n    String testDescription = \"test-description\";\n    when( clusterTestResultEntry.getDescription() ).thenReturn( testDescription );\n    String testDescription2 = \"test-description2\";\n    when( clusterTestResultEntry2.getDescription() ).thenReturn( testDescription2 );\n    RuntimeTestResultSummary runtimeTestResultSummary = pingZookeeperEnsembleTest.runTest( namedCluster );\n    List<RuntimeTestResultEntry> clusterTestResultEntries = runtimeTestResultSummary\n      .getRuntimeTestResultEntries();\n    assertEquals( 2, clusterTestResultEntries.size() );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(), RuntimeTestEntrySeverity.FATAL,\n      messageGetter.getMessage( PingZookeeperEnsembleTest.PING_ZOOKEEPER_ENSEMBLE_TEST_NO_NODES_SUCCEEDED_DESC ),\n      messageGetter.getMessage( PingZookeeperEnsembleTest.PING_ZOOKEEPER_ENSEMBLE_TEST_NO_NODES_SUCCEEDED_MESSAGE,\n        host1 + \", \" + host2 ) );\n    assertEquals( testDescription, clusterTestResultEntries.get( 0 ).getDescription() );\n    assertEquals( testDescription2, clusterTestResultEntries.get( 1 ).getDescription() );\n  }\n}\n"
  },
  {
    "path": "impl/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-parent</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-impl</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <modules>\n    <module>cluster</module>\n    <module>clusterTests</module>\n    <module>shim</module>\n    <module>vfs-hdfs</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "impl/shim/jaas/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-impl-shim</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-impl-shim-jaas</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>jar</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-api</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-all</artifactId>\n      <version>${dependency.mockito.revision}</version>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "impl/shim/jaas/src/main/java/org/pentaho/big/data/impl/shim/jaas/JaasConfigServiceFactory.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.impl.shim.jaas;\n\nimport org.pentaho.hadoop.shim.api.jaas.JaasConfigService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceFactory;\n\nimport java.util.Properties;\n\npublic class JaasConfigServiceFactory implements NamedClusterServiceFactory<JaasConfigService> {\n\n  public JaasConfigServiceFactory(\n    @SuppressWarnings( \"unused\" ) boolean isActiveConfiguration, Object hadoopConfiguration ) {\n  }\n  @Override public Class<JaasConfigService> getServiceClass() {\n    return JaasConfigService.class;\n  }\n\n  @Override public boolean canHandle( NamedCluster namedCluster ) {\n    return true;\n  }\n\n  @Override public JaasConfigService create( NamedCluster namedCluster ) {\n    return new JaasConfigServiceImpl( new Properties() );\n  }\n}\n"
  },
  {
    "path": "impl/shim/jaas/src/main/java/org/pentaho/big/data/impl/shim/jaas/JaasConfigServiceImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.impl.shim.jaas;\n\nimport org.pentaho.hadoop.shim.api.jaas.JaasConfigService;\n\nimport java.util.Properties;\n\npublic class JaasConfigServiceImpl implements JaasConfigService {\n  public static final String KERBEROS_PRINCIPAL = \"pentaho.authentication.default.kerberos.principal\";\n  public static final String KERBEROS_KEYTAB = \"pentaho.authentication.default.kerberos.keytabLocation\";\n  private Properties configProperties;\n\n  public JaasConfigServiceImpl( Properties configProperties ) {\n\n    this.configProperties = configProperties;\n  }\n\n  @Override public String getJaasConfig() {\n    return\n      \"com.sun.security.auth.module.Krb5LoginModule required\\n\"\n        + \"useKeyTab=true\\n\"\n        + \"serviceName=kafka\\n\"\n        + \"keyTab=\\\"\" + configProperties.getProperty( KERBEROS_KEYTAB ) + \"\\\"\\n\"\n        + \"principal=\\\"\" + configProperties.getProperty( KERBEROS_PRINCIPAL ) + \"\\\";\";\n  }\n\n  @Override public boolean isKerberos() {\n    Object principal = configProperties.get( KERBEROS_PRINCIPAL );\n    Object keytab = configProperties.get( KERBEROS_KEYTAB );\n    return principal != null && keytab != null && !\"\".equals( principal ) && !\"\".equals( keytab );\n  }\n}\n"
  },
  {
    "path": "impl/shim/jaas/src/main/resources/OSGI-INF/blueprint/blueprint.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\n           xsi:schemaLocation=\"http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\">\n</blueprint>\n"
  },
  {
    "path": "impl/shim/jaas/src/main/resources/org/pentaho/big/data/impl/shim/jaas/messages.properties",
    "content": "jaas.config.service.load.error=Unable to register JaasConfigService for ? shim\n"
  },
  {
    "path": "impl/shim/jaas/src/test/java/org/pentaho/big/data/impl/shim/jaas/JaasConfigServiceFactoryTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.impl.shim.jaas;\n\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\n\nimport java.util.Properties;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.mock;\n\npublic class JaasConfigServiceFactoryTest {\n\n  @Test\n  public void testCreatesAJaasConfig() {\n    NamedCluster namedCluster = mock( NamedCluster.class );\n    JaasConfigServiceFactory factory = new JaasConfigServiceFactory( true, null );\n    Properties configProperties = new Properties();\n    configProperties.setProperty( JaasConfigServiceImpl.KERBEROS_PRINCIPAL, \"three@domain.com\" );\n    configProperties.setProperty( JaasConfigServiceImpl.KERBEROS_KEYTAB, \"/user/two/file.keytab\" );\n    assertTrue( factory.canHandle( namedCluster ) );\n    assertEquals( \"com.sun.security.auth.module.Krb5LoginModule required\\n\"\n      + \"useKeyTab=true\\n\"\n      + \"serviceName=kafka\\n\"\n      + \"keyTab=\\\"/user/two/file.keytab\\\"\\n\"\n      + \"principal=\\\"three@domain.com\\\";\",\n      factory.create( namedCluster ).getJaasConfig() );\n\n\n  }\n}\n"
  },
  {
    "path": "impl/shim/jaas/src/test/java/org/pentaho/big/data/impl/shim/jaas/JaasConfigServiceImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.impl.shim.jaas;\n\nimport org.junit.Test;\n\nimport java.util.Properties;\n\nimport static org.junit.Assert.*;\n\npublic class JaasConfigServiceImplTest {\n\n  @Test\n  public void testEmptyPrincipalIsNotKerberos() throws Exception {\n    Properties configProperties = new Properties();\n    configProperties.setProperty( JaasConfigServiceImpl.KERBEROS_KEYTAB, \"/user/path/file.keytab\" );\n    JaasConfigServiceImpl service = new JaasConfigServiceImpl( configProperties );\n    assertFalse( service.isKerberos() );\n  }\n\n  @Test\n  public void testEmptyKeytabIsNotKerberos() throws Exception {\n    Properties configProperties = new Properties();\n    configProperties.setProperty( JaasConfigServiceImpl.KERBEROS_PRINCIPAL, \"me@host.com\" );\n    JaasConfigServiceImpl service = new JaasConfigServiceImpl( configProperties );\n    assertFalse( service.isKerberos() );\n  }\n\n  @Test\n  public void testJaasWithKerberosKeytab() throws Exception {\n    Properties configProperties = new Properties();\n    configProperties.setProperty( JaasConfigServiceImpl.KERBEROS_PRINCIPAL, \"user@domain.com\" );\n    configProperties.setProperty( JaasConfigServiceImpl.KERBEROS_KEYTAB, \"/user/path/file.keytab\" );\n    JaasConfigServiceImpl service = new JaasConfigServiceImpl( configProperties );\n    assertTrue( service.isKerberos() );\n    assertEquals(\n      \"com.sun.security.auth.module.Krb5LoginModule required\\n\"\n        + \"useKeyTab=true\\n\"\n        + \"serviceName=kafka\\n\"\n        + \"keyTab=\\\"/user/path/file.keytab\\\"\\n\"\n      
  + \"principal=\\\"user@domain.com\\\";\",\n      service.getJaasConfig() );\n  }\n}\n"
  },
  {
    "path": "impl/shim/pig/pdi-testName",
    "content": ""
  },
  {
    "path": "impl/shim/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-impl</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-impl-shim</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <modules>\n    <module>shimTests</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "impl/shim/shimTests/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-impl-shim</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-impl-shim-shimTests</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>jar</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-clusterTests</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.hadoop</groupId>\n      <artifactId>hadoop-core</artifactId>\n      <version>0.20.2</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.aries.blueprint</groupId>\n      <artifactId>org.apache.aries.blueprint.core</artifactId>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.osgi</groupId>\n      <artifactId>osgi.core</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-api</artifactId>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      
<scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-all</artifactId>\n      <version>${dependency.mockito.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-api-runtimeTest</artifactId>\n      <version>${project.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "impl/shim/shimTests/src/main/java/org/pentaho/big/data/impl/shim/tests/TestShimConfig.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.shim.tests;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystem;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\nimport org.pentaho.runtime.test.test.impl.BaseRuntimeTest;\n\nimport java.util.Arrays;\nimport java.util.HashSet;\n\n/**\n * Created by mburgess on 9/1/15.\n */\npublic class TestShimConfig extends BaseRuntimeTest {\n\n  public static final String HADOOP_CONFIGURATION_TEST_SHIM_CONFIG = \"hadoopConfigurationTestShimConfig\";\n  public static final String TEST_SHIM_CONFIG_NAME = \"TestShimConfig.Name\";\n  public static final String TEST_SHIM_CONFIG_FS_MATCH_DESC = \"TestShimConfig.FileSystemMatch.Desc\";\n  public static final String TEST_SHIM_CONFIG_FS_MATCH_MESSAGE = \"TestShimConfig.FileSystemMatch.Message\";\n  public static final String TEST_SHIM_CONFIG_FS_NOMATCH_DESC = \"TestShimConfig.FileSystemNoMatch.Desc\";\n  public static final String TEST_SHIM_CONFIG_FS_NOMATCH_MESSAGE = 
\"TestShimConfig.FileSystemNoMatch.Message\";\n\n  private static final Class<?> PKG = TestShimConfig.class;\n  private final MessageGetterFactory messageGetterFactory;\n  private final MessageGetter messageGetter;\n  private HadoopFileSystemLocator hadoopFileSystemLocator;\n\n  public TestShimConfig( HadoopFileSystemLocator hadoopFileSystemLocator, MessageGetterFactory messageGetterFactory ) {\n    super( NamedCluster.class, TestShimLoad.HADOOP_CONFIGURATION_MODULE, HADOOP_CONFIGURATION_TEST_SHIM_CONFIG,\n      messageGetterFactory.create( PKG ).getMessage( TEST_SHIM_CONFIG_NAME ), true,\n      new HashSet<>( Arrays.asList( TestShimLoad.HADOOP_CONFIGURATION_TEST_SHIM_LOAD ) ) );\n    this.messageGetterFactory = messageGetterFactory;\n    messageGetter = messageGetterFactory.create( PKG );\n    this.hadoopFileSystemLocator = hadoopFileSystemLocator;\n  }\n\n  @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n    try {\n      // Get the active shim\n      NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n      HadoopFileSystem hadoopFileSystem = hadoopFileSystemLocator.getHadoopFilesystem( namedCluster );\n      String defaultFS = hadoopFileSystem.getFsDefaultName();\n\n      // Get the named cluster\n      // The connection information might be parameterized. 
Since we aren't tied to a transformation or job, in order to\n      // use a parameter, the value would have to be set as a system property or in kettle.properties, etc.\n      // Here we try to resolve the parameters if we can:\n      Variables variables = new Variables();\n      variables.initializeVariablesFrom( null );\n\n      // Build up a \"defaultFS\" property to check against the config\n      StringBuilder ncFS = new StringBuilder( namedCluster.getStorageScheme() + \"://\" );\n      ncFS.append( variables.environmentSubstitute( namedCluster.getHdfsHost() ) );\n      String port = variables.environmentSubstitute( namedCluster.getHdfsPort() );\n      if ( !Const.isEmpty( port ) ) {\n        ncFS.append( \":\" );\n        ncFS.append( port );\n      }\n\n      if ( !ncFS.toString().equalsIgnoreCase( defaultFS ) ) {\n        return new RuntimeTestResultSummaryImpl(\n          new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.WARNING,\n            messageGetter.getMessage( TEST_SHIM_CONFIG_FS_NOMATCH_DESC ),\n            messageGetter.getMessage( TEST_SHIM_CONFIG_FS_NOMATCH_MESSAGE, ncFS.toString() ),\n            ClusterRuntimeTestEntry.DocAnchor.SHIM_LOAD ) );\n      }\n\n      return new RuntimeTestResultSummaryImpl(\n        new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.INFO,\n          messageGetter.getMessage( TEST_SHIM_CONFIG_FS_MATCH_DESC ),\n          messageGetter.getMessage( TEST_SHIM_CONFIG_FS_MATCH_MESSAGE ),\n          ClusterRuntimeTestEntry.DocAnchor.SHIM_LOAD ) );\n    } catch ( Exception e ) {\n      return new RuntimeTestResultSummaryImpl(\n        new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.ERROR,\n          messageGetter.getMessage( TestShimLoad.TEST_SHIM_LOAD_NO_SHIM_SPECIFIED_DESC ), e.getMessage(), e,\n          ClusterRuntimeTestEntry.DocAnchor.SHIM_LOAD ) );\n    }\n  }\n}\n"
  },
  {
    "path": "impl/shim/shimTests/src/main/java/org/pentaho/big/data/impl/shim/tests/TestShimLoad.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.shim.tests;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.impl.cluster.tests.ClusterRuntimeTestEntry;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\nimport org.pentaho.runtime.test.result.org.pentaho.runtime.test.result.impl.RuntimeTestResultSummaryImpl;\nimport org.pentaho.runtime.test.test.impl.BaseRuntimeTest;\n\nimport java.util.HashSet;\n\n/**\n * Created by bryan on 8/14/15.\n */\npublic class TestShimLoad extends BaseRuntimeTest {\n  public static final String HADOOP_CONFIGURATION_TEST_SHIM_LOAD = \"hadoopConfigurationTestShimLoad\";\n  public static final String TEST_SHIM_LOAD_NAME = \"TestShimLoad.Name\";\n  public static final String TEST_SHIM_LOAD_SHIM_LOADED_DESC = \"TestShimLoad.ShimLoaded.Desc\";\n  public static final String TEST_SHIM_LOAD_SHIM_LOADED_MESSAGE = \"TestShimLoad.ShimLoaded.Message\";\n  public static final String TEST_SHIM_LOAD_NO_SHIM_SPECIFIED_DESC = \"TestShimLoad.NoShimSpecified.Desc\";\n  public static final String TEST_SHIM_LOAD_UNABLE_TO_LOAD_SHIM_DESC = \"TestShimLoad.UnableToLoadShim.Desc\";\n  public static final String HADOOP_CONFIGURATION_MODULE = \"Hadoop Configuration\";\n  private static final Class<?> PKG = TestShimLoad.class;\n  private final MessageGetterFactory messageGetterFactory;\n  private final MessageGetter messageGetter;\n\n  
public TestShimLoad( MessageGetterFactory messageGetterFactory ) {\n    super( NamedCluster.class, HADOOP_CONFIGURATION_MODULE, HADOOP_CONFIGURATION_TEST_SHIM_LOAD,\n      messageGetterFactory.create( PKG ).getMessage( TEST_SHIM_LOAD_NAME ), true, new HashSet<String>() );\n    this.messageGetterFactory = messageGetterFactory;\n    messageGetter = messageGetterFactory.create( PKG );\n  }\n\n  @Override public RuntimeTestResultSummary runTest( Object objectUnderTest ) {\n    try {\n      NamedCluster namedCluster = (NamedCluster) objectUnderTest;\n      String shimIdentifier = namedCluster.getShimIdentifier();\n\n      return new RuntimeTestResultSummaryImpl(\n        new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.INFO,\n          messageGetter.getMessage( TEST_SHIM_LOAD_SHIM_LOADED_DESC, shimIdentifier ),\n          messageGetter.getMessage( TEST_SHIM_LOAD_SHIM_LOADED_MESSAGE, shimIdentifier ),\n          ClusterRuntimeTestEntry.DocAnchor.SHIM_LOAD ) );\n    } catch ( Exception e ) {\n      return new RuntimeTestResultSummaryImpl(\n        new ClusterRuntimeTestEntry( messageGetterFactory, RuntimeTestEntrySeverity.ERROR,\n          messageGetter.getMessage( TEST_SHIM_LOAD_NO_SHIM_SPECIFIED_DESC ), e.getMessage(), e,\n          ClusterRuntimeTestEntry.DocAnchor.SHIM_LOAD ) );\n    }\n  }\n}\n"
  },
  {
    "path": "impl/shim/shimTests/src/main/resources/OSGI-INF/blueprint/blueprint.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\n           xsi:schemaLocation=\"http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\">\n\n  <bean id=\"testShimLoad\" class=\"org.pentaho.big.data.impl.shim.tests.TestShimLoad\">\n    <argument ref=\"messageGetterFactory\"/>\n  </bean>\n\n  <bean id=\"testShimConfig\" class=\"org.pentaho.big.data.impl.shim.tests.TestShimConfig\">\n    <argument ref=\"hadoopFileSystemService\"/>\n    <argument ref=\"messageGetterFactory\"/>\n  </bean>\n\n  <service ref=\"testShimLoad\" interface=\"org.pentaho.runtime.test.RuntimeTest\"/>\n  <service ref=\"testShimConfig\" interface=\"org.pentaho.runtime.test.RuntimeTest\"/>\n  <reference id=\"messageGetterFactory\" interface=\"org.pentaho.runtime.test.i18n.MessageGetterFactory\"/>\n  <reference id=\"hadoopFileSystemService\" interface=\"org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator\"/>\n</blueprint>\n"
  },
  {
    "path": "impl/shim/shimTests/src/main/resources/org/pentaho/big/data/impl/shim/tests/messages/messages_en_US.properties",
    "content": "TestShimLoad.Name=Active Shim Load\nTestShimLoad.ShimLoaded.Desc=Successfully loaded the {0} shim.\nTestShimLoad.ShimLoaded.Message=Successfully loaded the {0} shim.\nTestShimLoad.NoShimSpecified.Desc=The Active Shim has not been set.\nTestShimLoad.UnableToLoadShim.Desc=Unable to load the {0} Shim.\n\nTestShimConfig.Name=Shim Configuration Verification\nTestShimConfig.FileSystemMatch.Desc=The Hadoop File System URL matches the Active shim.\nTestShimConfig.FileSystemMatch.Message=The Hadoop File System URL matches the URL in the shim configuration file.\nTestShimConfig.FileSystemNoMatch.Desc=The Hadoop File System URL does not match the URL in the shim's core-site.xml.\nTestShimConfig.FileSystemNoMatch.Message=The Hadoop File System URL {0} does not match the defaultFS Hadoop config property in the shim's core-site.xml. Be sure to get the site configuration files from the Hadoop cluster.\n"
  },
  {
    "path": "impl/shim/shimTests/src/test/java/org/pentaho/big/data/impl/shim/tests/TestShimLoadTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.shim.tests;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.ConfigurationException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.runtime.test.TestMessageGetterFactory;\nimport org.pentaho.runtime.test.i18n.MessageGetter;\nimport org.pentaho.runtime.test.i18n.MessageGetterFactory;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResultSummary;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\nimport static org.pentaho.runtime.test.RuntimeTestEntryUtil.verifyRuntimeTestResultEntry;\n\n/**\n * Created by bryan on 8/24/15.\n */\npublic class TestShimLoadTest {\n  private MessageGetterFactory messageGetterFactory;\n  private MessageGetter messageGetter;\n  private TestShimLoad testShimLoad;\n  private NamedCluster namedCluster;\n\n  @Before\n  public void setup() {\n    messageGetterFactory = new TestMessageGetterFactory();\n    messageGetter = messageGetterFactory.create( TestShimLoad.class );\n    testShimLoad = new TestShimLoad( messageGetterFactory );\n    namedCluster = mock( NamedCluster.class );\n  }\n\n  @Test\n  public void testGetName() {\n    assertEquals( messageGetter.getMessage( TestShimLoad.TEST_SHIM_LOAD_NAME ), testShimLoad.getName() );\n  }\n\n  @Test\n  public void testConfigurationException() throws ConfigurationException {\n    String testMessage = \"testMessage\";\n    when( 
namedCluster.getShimIdentifier() ).thenThrow( new RuntimeException( testMessage ) );\n    RuntimeTestResultSummary runtimeTestResultSummary = testShimLoad.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.ERROR, messageGetter.getMessage( TestShimLoad.TEST_SHIM_LOAD_NO_SHIM_SPECIFIED_DESC ),\n      testMessage, RuntimeException.class );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n\n  @Test\n  public void testSuccess() throws ConfigurationException {\n    String testShim = \"testShim\";\n    when( namedCluster.getShimIdentifier() ).thenReturn( testShim );\n    RuntimeTestResultSummary runtimeTestResultSummary = testShimLoad.runTest( namedCluster );\n    verifyRuntimeTestResultEntry( runtimeTestResultSummary.getOverallStatusEntry(),\n      RuntimeTestEntrySeverity.INFO, messageGetter.getMessage( TestShimLoad.TEST_SHIM_LOAD_SHIM_LOADED_DESC, testShim ),\n      messageGetter.getMessage( TestShimLoad.TEST_SHIM_LOAD_SHIM_LOADED_MESSAGE, testShim ) );\n    assertEquals( 0, runtimeTestResultSummary.getRuntimeTestResultEntries().size() );\n  }\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-impl</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-impl-vfs-hdfs-core</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>jar</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n    <mockito.version>5.14.2</mockito.version>\n    <metastore.version>11.1.0.0-SNAPSHOT</metastore.version>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.di.plugins</groupId>\n      <artifactId>pentaho-metastore-locator-api</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>metastore</artifactId>\n      <version>${metastore.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-cluster</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.commons</groupId>\n      <artifactId>commons-vfs2</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.commons</groupId>\n      
<artifactId>commons-lang3</artifactId>\n      <version>${commons-lang3.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.logging.log4j</groupId>\n      <artifactId>log4j-1.2-api</artifactId>\n      <version>${log4j.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "impl/vfs-hdfs/src/main/java/org/pentaho/big/data/impl/vfs/hdfs/AzureHdInsightsFileNameParser.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.vfs.hdfs;\n\nimport org.apache.commons.vfs2.provider.URLFileNameParser;\n\n@SuppressWarnings( \"deprecation\" )\npublic class AzureHdInsightsFileNameParser extends URLFileNameParser {\n\n  public static final String EMPTY_HOSTNAME = \"\";\n\n  private static final AzureHdInsightsFileNameParser INSTANCE = new AzureHdInsightsFileNameParser();\n\n  private AzureHdInsightsFileNameParser() {\n    super( -1 );\n  }\n\n  public static AzureHdInsightsFileNameParser getInstance() {\n    return INSTANCE;\n  }\n\n  /**\n   * Extracts the hostname from a URI.\n   *\n   * @param name string buffer from which the \"scheme://[userinfo@]\" part has already been removed; will be modified.\n   * @return the host name, or null.\n   */\n  @Override protected String extractHostName( StringBuilder name ) {\n    final String hostname = super.extractHostName( name );\n    // Trick the URLFileNameParser into thinking we have a hostname so we don't have to refactor it.\n    return hostname == null ? EMPTY_HOSTNAME : hostname;\n  }\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/main/java/org/pentaho/big/data/impl/vfs/hdfs/HDFSFileNameParser.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.vfs.hdfs;\n\nimport org.apache.commons.vfs2.FileName;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.provider.URLFileName;\nimport org.apache.commons.vfs2.provider.URLFileNameParser;\nimport org.apache.commons.vfs2.provider.UriParser;\nimport org.apache.commons.vfs2.provider.VfsComponentContext;\n\npublic class HDFSFileNameParser extends URLFileNameParser {\n\n  private static final HDFSFileNameParser INSTANCE = new HDFSFileNameParser();\n\n  private HDFSFileNameParser() {\n    super( -1 );\n  }\n\n  public static HDFSFileNameParser getInstance() {\n    return INSTANCE;\n  }\n\n  @Override public FileName parseUri( VfsComponentContext context, FileName base, String filename )\n    throws FileSystemException {\n    URLFileName fileNameURLFileName = (URLFileName) super.parseUri( context, base, filename );\n\n    return new URLFileName(\n      fileNameURLFileName.getScheme(),\n      getHostNameCaseSensitive( filename ),\n      fileNameURLFileName.getPort(),\n      fileNameURLFileName.getDefaultPort(),\n      fileNameURLFileName.getUserName(),\n      fileNameURLFileName.getPassword(),\n      fileNameURLFileName.getPath(),\n      fileNameURLFileName.getType(),\n      fileNameURLFileName.getQueryString() );\n  }\n\n  /**\n   * PDI-15565\n   * <p>\n   * Uses the same extraction logic as org.apache.commons.vfs2.provider.HostFileNameParser.extractToPath.\n   *\n   * @param fileUri file URI for an HDFS file\n   * @return the case-sensitive host name\n   * @throws FileSystemException when the format of the URL is not correct\n   */\n  private String getHostNameCaseSensitive( String fileUri ) throws FileSystemException {\n    StringBuilder fullNameBuilder = new StringBuilder();\n    UriParser.extractScheme( fileUri, fullNameBuilder );\n    if ( fullNameBuilder.length() < 2 || fullNameBuilder.charAt( 0 ) != '/' || fullNameBuilder.charAt( 1 ) != '/' ) {\n      throw new FileSystemException( \"vfs.provider/missing-double-slashes.error\", fileUri );\n    }\n    fullNameBuilder.delete( 0, 2 );\n    extractPort( fullNameBuilder, fileUri );\n    extractUserInfo( fullNameBuilder );\n    return extractHostName( fullNameBuilder );\n  }\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/main/java/org/pentaho/big/data/impl/vfs/hdfs/HDFSFileObject.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.vfs.hdfs;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileType;\nimport org.apache.commons.vfs2.provider.AbstractFileName;\nimport org.apache.commons.vfs2.provider.AbstractFileObject;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileStatus;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystem;\n\n\nimport java.io.InputStream;\nimport java.io.OutputStream;\n\npublic class HDFSFileObject extends AbstractFileObject  {\n\n  private HadoopFileSystem hdfs;\n\n  public HDFSFileObject( final AbstractFileName name, final HDFSFileSystem fileSystem ) throws FileSystemException {\n    super( name, fileSystem );\n    hdfs = fileSystem.getHDFSFileSystem();\n  }\n\n  @Override\n  protected long doGetContentSize() throws Exception {\n    return hdfs.getFileStatus( hdfs.getPath( getName().getPath() ) ).getLen();\n  }\n\n  @Override\n  protected OutputStream doGetOutputStream( boolean append ) throws Exception {\n    OutputStream out;\n    if ( append ) {\n      out = hdfs.append( hdfs.getPath( getName().getPath() ) );\n    } else {\n      out = hdfs.create( hdfs.getPath( getName().getPath() ) );\n    }\n    return out;\n  }\n\n  @Override\n  protected InputStream doGetInputStream() throws Exception {\n    return hdfs.open( hdfs.getPath( getName().getPath() ) );\n  }\n\n  @Override\n  protected InputStream doGetInputStream( final int bufferSize ) throws Exception {\n    return this.doGetInputStream();\n  }\n\n  @Override\n  protected FileType 
doGetType() throws Exception {\n    HadoopFileStatus status = null;\n    if ( null == hdfs ) {\n      throw new IllegalStateException( \"No HDFS file system present\" );\n    }\n    try {\n      status = hdfs.getFileStatus( hdfs.getPath( getName().getPath() ) );\n    } catch ( Exception ex ) {\n      // Ignore\n    }\n\n    if ( status == null ) {\n      return FileType.IMAGINARY;\n    } else if ( status.isDir() ) {\n      return FileType.FOLDER;\n    } else {\n      return FileType.FILE;\n    }\n  }\n\n  @Override\n  public void doCreateFolder() throws Exception {\n    hdfs.mkdirs( hdfs.getPath( getName().getPath() ) );\n  }\n\n  @Override\n  public void doDelete() throws Exception {\n    hdfs.delete( hdfs.getPath( getName().getPath() ), true );\n  }\n\n  @Override\n  protected void doRename( FileObject newfile ) throws Exception {\n    hdfs.rename( hdfs.getPath( getName().getPath() ), hdfs.getPath( newfile.getName().getPath() ) );\n  }\n\n  @Override\n  protected long doGetLastModifiedTime() throws Exception {\n    return hdfs.getFileStatus( hdfs.getPath( getName().getPath() ) ).getModificationTime();\n  }\n\n  @Override\n  protected boolean doSetLastModifiedTime( long modtime ) throws Exception {\n    hdfs.setTimes( hdfs.getPath( getName().getPath() ), modtime, System.currentTimeMillis() );\n    return true;\n  }\n\n  @Override\n  protected String[] doListChildren() throws Exception {\n    HadoopFileStatus[] statusList = hdfs.listStatus( hdfs.getPath( getName().getPath() ) );\n    String[] children = new String[ statusList.length ];\n    for ( int i = 0; i < statusList.length; i++ ) {\n      children[ i ] = statusList[ i ].getPath().getName();\n    }\n    return children;\n  }\n\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/main/java/org/pentaho/big/data/impl/vfs/hdfs/HDFSFileProvider.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.vfs.hdfs;\n\nimport org.apache.commons.vfs2.Capability;\nimport org.apache.commons.vfs2.FileName;\nimport org.apache.commons.vfs2.FileSystem;\nimport org.apache.commons.vfs2.FileSystemConfigBuilder;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.apache.commons.vfs2.UserAuthenticationData;\nimport org.apache.commons.vfs2.impl.DefaultFileSystemManager;\nimport org.apache.commons.vfs2.provider.AbstractOriginatingFileProvider;\nimport org.apache.commons.vfs2.provider.FileNameParser;\nimport org.apache.commons.vfs2.provider.GenericFileName;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport org.pentaho.big.data.impl.vfs.hdfs.nc.NamedClusterConfigBuilder;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\n\nimport java.net.URI;\nimport java.util.Arrays;\nimport java.util.Collection;\nimport java.util.Collections;\n\npublic class HDFSFileProvider extends AbstractOriginatingFileProvider {\n\n  protected static Logger logger = LogManager.getLogger( HDFSFileProvider.class );\n  private 
MetastoreLocator metaStoreService;\n  /**\n   * The scheme this provider was designed to support\n   */\n  public static final String SCHEME = \"hdfs\";\n  public static final String MAPRFS = \"maprfs\";\n  /**\n   * User Information.\n   */\n  public static final String ATTR_USER_INFO = \"UI\";\n  /**\n   * Authentication types.\n   */\n  public static final UserAuthenticationData.Type[] AUTHENTICATOR_TYPES =\n    new UserAuthenticationData.Type[] { UserAuthenticationData.USERNAME,\n      UserAuthenticationData.PASSWORD };\n  /**\n   * The provider's capabilities.\n   */\n  public static final Collection<Capability> capabilities =\n    Collections.unmodifiableCollection( Arrays.asList( new Capability[] { Capability.CREATE, Capability.DELETE,\n      Capability.RENAME, Capability.GET_TYPE, Capability.LIST_CHILDREN, Capability.READ_CONTENT, Capability.URI,\n      Capability.WRITE_CONTENT, Capability.GET_LAST_MODIFIED, Capability.SET_LAST_MODIFIED_FILE,\n      Capability.RANDOM_ACCESS_READ } ) );\n  protected final HadoopFileSystemLocator hadoopFileSystemLocator;\n  protected final NamedClusterService namedClusterService;\n\n  @Deprecated\n  public HDFSFileProvider( HadoopFileSystemLocator hadoopFileSystemLocator,\n                           NamedClusterService namedClusterService, MetastoreLocator metaStore )\n    throws FileSystemException {\n    this( hadoopFileSystemLocator, namedClusterService,\n      (DefaultFileSystemManager) KettleVFS.getInstance().getFileSystemManager(), metaStore );\n  }\n\n  @Deprecated\n  public HDFSFileProvider( HadoopFileSystemLocator hadoopFileSystemLocator, NamedClusterService namedClusterService,\n                           DefaultFileSystemManager fileSystemManager, MetastoreLocator metaStore )\n    throws FileSystemException {\n    this( hadoopFileSystemLocator, namedClusterService, fileSystemManager, HDFSFileNameParser.getInstance(),\n      new String[] { SCHEME, MAPRFS }, metaStore );\n  }\n\n  public HDFSFileProvider( 
HadoopFileSystemLocator hadoopFileSystemLocator, String schema, FileNameParser fileNameParser )\n    throws FileSystemException {\n    this( hadoopFileSystemLocator, NamedClusterManager.getInstance(), fileNameParser, schema );\n  }\n\n  public HDFSFileProvider( HadoopFileSystemLocator hadoopFileSystemLocator, NamedClusterService namedClusterService,\n                           FileNameParser fileNameParser, String schema )\n    throws FileSystemException {\n    this( hadoopFileSystemLocator, namedClusterService,\n      (DefaultFileSystemManager) KettleVFS.getInstance().getFileSystemManager(),\n      fileNameParser, new String[] { schema }, null );\n  }\n\n  public HDFSFileProvider( HadoopFileSystemLocator hadoopFileSystemLocator, NamedClusterService namedClusterService,\n                           DefaultFileSystemManager fileSystemManager, FileNameParser fileNameParser, String[] schemes,\n                           MetastoreLocator metaStore )\n    throws FileSystemException {\n    super();\n    this.hadoopFileSystemLocator = hadoopFileSystemLocator;\n    this.namedClusterService = namedClusterService;\n    this.metaStoreService = metaStore;\n    setFileNameParser( fileNameParser );\n    fileSystemManager.addProvider( schemes, this );\n  }\n\n  protected synchronized MetastoreLocator getMetastoreLocator() {\n    if ( this.metaStoreService == null ) {\n      try {\n        Collection<MetastoreLocator> metastoreLocators = PluginServiceLoader.loadServices( MetastoreLocator.class );\n        this.metaStoreService = metastoreLocators.stream().findFirst().get();\n      } catch ( Exception e ) {\n        logger.error( \"Error getting MetastoreLocator\", e );\n      }\n    }\n    return this.metaStoreService;\n  }\n\n  @Override protected FileSystem doCreateFileSystem( final FileName name, final FileSystemOptions fileSystemOptions )\n    throws FileSystemException {\n    GenericFileName genericFileName = (GenericFileName) name.getRoot();\n    String hostName = 
genericFileName.getHostName();\n    int port = genericFileName.getPort();\n    NamedCluster namedCluster = resolveNamedCluster( hostName, port, name );\n    try {\n      return new HDFSFileSystem( name, fileSystemOptions, hadoopFileSystemLocator.getHadoopFilesystem( namedCluster,\n        URI.create( name.getURI() == null ? \"\" : name.getURI() ) ) );\n    } catch ( ClusterInitializationException e ) {\n      throw new FileSystemException( e );\n    }\n  }\n\n  @Override public Collection<Capability> getCapabilities() {\n    return capabilities;\n  }\n\n  @Override\n  public FileSystemConfigBuilder getConfigBuilder() {\n    return NamedClusterConfigBuilder.getInstance( getMetastoreLocator(), namedClusterService );\n  }\n\n  private NamedCluster resolveNamedCluster( String hostName, int port, final FileName name ) {\n    NamedCluster namedCluster = namedClusterService.getNamedClusterByHost( hostName, getMetastoreLocator().getMetastore() );\n    if ( namedCluster == null ) {\n      namedClusterService.updateNamedClusterTemplate( hostName, port, MAPRFS.equals( name.getScheme() ) );\n      namedCluster = namedClusterService.getClusterTemplate();\n    }\n    return namedCluster;\n  }\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/main/java/org/pentaho/big/data/impl/vfs/hdfs/HDFSFileSystem.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.vfs.hdfs;\n\nimport org.apache.commons.vfs2.Capability;\nimport org.apache.commons.vfs2.FileName;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystem;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.apache.commons.vfs2.provider.AbstractFileName;\nimport org.apache.commons.vfs2.provider.AbstractFileSystem;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystem;\n\nimport java.util.Collection;\n\npublic class HDFSFileSystem extends AbstractFileSystem implements FileSystem {\n  private final HadoopFileSystem hdfs;\n\n  public HDFSFileSystem( final FileName rootName, final FileSystemOptions fileSystemOptions,\n                            HadoopFileSystem hdfs ) {\n    super( rootName, null, fileSystemOptions );\n    this.hdfs = hdfs;\n  }\n\n  @Override\n  @SuppressWarnings( { \"unchecked\", \"rawtypes\" } )\n  protected void addCapabilities( Collection caps ) {\n    caps.addAll( HDFSFileProvider.capabilities );\n    // Adding capabilities depending on configuration settings\n    try {\n      if ( getHDFSFileSystem() != null && Boolean.parseBoolean( getHDFSFileSystem().getProperty( \"dfs.support.append\", \"true\" ) ) ) {\n        caps.add( Capability.APPEND_CONTENT );\n      }\n    } catch ( FileSystemException e ) {\n      throw new RuntimeException( e );\n    }\n  }\n\n  @Override protected FileObject createFile( AbstractFileName name ) throws Exception {\n    return new HDFSFileObject( name, this );\n  }\n\n  public 
HadoopFileSystem getHDFSFileSystem() throws FileSystemException {\n    return hdfs;\n  }\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/main/java/org/pentaho/big/data/impl/vfs/hdfs/MapRFileNameParser.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.vfs.hdfs;\n\nimport org.apache.commons.vfs2.provider.URLFileNameParser;\n\npublic class MapRFileNameParser extends URLFileNameParser {\n\n  public static final String EMPTY_HOSTNAME = \"\";\n\n  private static final MapRFileNameParser INSTANCE = new MapRFileNameParser();\n\n  private MapRFileNameParser() {\n    super( -1 );\n  }\n\n  public static MapRFileNameParser getInstance() {\n    return INSTANCE;\n  }\n\n  /**\n   * Extracts the hostname from a URI.\n   *\n   * @param name string builder from which the \"scheme://[userinfo@]\" part has already been removed. Will be modified.\n   * @return the host name, or the empty string if the URI has none.\n   */\n  @Override protected String extractHostName( StringBuilder name ) {\n    final String hostname = super.extractHostName( name );\n    // Trick the URLFileNameParser into thinking we have a hostname so we don't have to refactor it.\n    return hostname == null ? EMPTY_HOSTNAME : hostname;\n  }\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/main/java/org/pentaho/big/data/impl/vfs/hdfs/nc/NamedClusterConfigBuilder.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.vfs.hdfs.nc;\n\nimport org.apache.commons.vfs2.FileSystem;\nimport org.apache.commons.vfs2.FileSystemConfigBuilder;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.pentaho.big.data.impl.vfs.hdfs.HDFSFileSystem;\nimport org.pentaho.di.core.vfs.configuration.KettleGenericFileSystemConfigBuilder;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\n\nimport java.util.List;\n\npublic class NamedClusterConfigBuilder extends KettleGenericFileSystemConfigBuilder {\n\n  private static final NamedClusterConfigBuilder BUILDER = new NamedClusterConfigBuilder();\n  private static final String EMBEDDED_METASTORE_KEY_PROPERTY = \"embeddedMetaStoreKey\";\n  private final MetastoreLocator metastoreLocator;\n  private final NamedClusterService namedClusterService;\n\n  public NamedClusterConfigBuilder() {\n    this( null, null );\n  }\n\n  public NamedClusterConfigBuilder( MetastoreLocator metastoreLocator, NamedClusterService namedClusterService ) {\n    this.metastoreLocator = metastoreLocator;\n    this.namedClusterService = namedClusterService;\n  }\n\n  /**\n   * @return NamedClusterConfigBuilder instance\n   */\n  public static NamedClusterConfigBuilder getInstance() {\n    return BUILDER;\n  }\n\n  public static FileSystemConfigBuilder getInstance( MetastoreLocator 
metastoreLocator, NamedClusterService namedClusterService ) {\n    return new NamedClusterConfigBuilder( metastoreLocator, namedClusterService );\n  }\n\n  /**\n   * @return HDFSFileSystem\n   */\n  @Override\n  protected Class<? extends FileSystem> getConfigClass() {\n    return HDFSFileSystem.class;\n  }\n\n  public void snapshotNamedClusterToMetaStore( IMetaStore snapshotMetaStore ) throws MetaStoreException {\n    IMetaStore metaStore = metastoreLocator.getMetastore();\n    List<NamedCluster> ncList = namedClusterService.list( metaStore );\n    if ( ncList != null ) {\n      for ( NamedCluster nc : ncList ) {\n        namedClusterService.create( nc, snapshotMetaStore );\n      }\n    }\n  }\n\n  public void setEmbeddedMetastoreKey( final FileSystemOptions opts, final String embeddedMetaStoreKey ) {\n    setParam( opts, EMBEDDED_METASTORE_KEY_PROPERTY, embeddedMetaStoreKey );\n  }\n\n  public String getEmbeddedMetastoreKey( final FileSystemOptions opts ) {\n    return (String) getParam( opts, EMBEDDED_METASTORE_KEY_PROPERTY );\n  }\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/main/java/org/pentaho/big/data/impl/vfs/hdfs/nc/NamedClusterFileObject.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.vfs.hdfs.nc;\n\n\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.Selectors;\nimport org.apache.commons.vfs2.provider.AbstractFileName;\nimport org.pentaho.big.data.impl.vfs.hdfs.HDFSFileObject;\nimport org.pentaho.di.core.vfs.AliasedFileObject;\n\npublic class NamedClusterFileObject extends HDFSFileObject implements AliasedFileObject {\n\n  private final String realFileSystemURI;\n\n  public NamedClusterFileObject( final AbstractFileName name, final NamedClusterFileSystem fileSystem ) throws FileSystemException {\n    super( name, fileSystem );\n    realFileSystemURI = fileSystem.getRealFileSystemURI().toString();\n  }\n\n  @Override\n  public String getOriginalURIString() {\n    return realFileSystemURI + getName().getPath();\n  }\n\n  @Override\n  public String getAELSafeURIString() {\n    return getOriginalURIString();\n  }\n\n  @Override\n  public boolean delete() throws FileSystemException {\n    return delete( Selectors.SELECT_SELF_AND_CHILDREN ) > 0;\n  }\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/main/java/org/pentaho/big/data/impl/vfs/hdfs/nc/NamedClusterFileSystem.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.vfs.hdfs.nc;\n\nimport org.apache.commons.vfs2.FileName;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.apache.commons.vfs2.provider.AbstractFileName;\nimport org.pentaho.big.data.impl.vfs.hdfs.HDFSFileSystem;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystem;\n\nimport java.net.URI;\n\n\npublic class NamedClusterFileSystem extends HDFSFileSystem {\n\n  private final URI realFileSystemURI;\n\n  public NamedClusterFileSystem( final FileName rootName, final URI realFileSystemURI, final FileSystemOptions fileSystemOptions,\n                                 HadoopFileSystem hdfs ) {\n    super( rootName, fileSystemOptions, hdfs );\n    this.realFileSystemURI = realFileSystemURI;\n  }\n\n  @Override protected FileObject createFile( AbstractFileName name ) throws Exception {\n    return new NamedClusterFileObject( name, this );\n  }\n\n  public URI getRealFileSystemURI() {\n    return realFileSystemURI;\n  }\n\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/main/java/org/pentaho/big/data/impl/vfs/hdfs/nc/NamedClusterProvider.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.impl.vfs.hdfs.nc;\n\nimport org.apache.commons.vfs2.FileName;\nimport org.apache.commons.vfs2.FileSystem;\nimport org.apache.commons.vfs2.FileSystemConfigBuilder;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.apache.commons.vfs2.impl.DefaultFileSystemManager;\nimport org.apache.commons.vfs2.provider.FileNameParser;\nimport org.apache.commons.vfs2.provider.GenericFileName;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.big.data.impl.vfs.hdfs.HDFSFileProvider;\nimport org.pentaho.di.core.osgi.api.VfsEmbeddedFileSystemCloser;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\n\nimport java.net.URI;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.HashSet;\nimport java.util.Map;\nimport java.util.Set;\n\n/**\n * Created by dstepanov on 11/05/17.\n */\npublic class NamedClusterProvider extends HDFSFileProvider implements VfsEmbeddedFileSystemCloser {\n\n  private Map<String, Set<FileSystem>> cacheEntries =\n    
Collections.synchronizedMap( new HashMap<>() );\n\n  public NamedClusterProvider( HadoopFileSystemLocator hadoopFileSystemLocator,\n                               NamedClusterService namedClusterService,\n                               FileNameParser fileNameParser,\n                               String[] schemes,\n                               MetastoreLocator metaStore ) throws FileSystemException {\n    this(\n        hadoopFileSystemLocator,\n        namedClusterService,\n        (DefaultFileSystemManager) KettleVFS.getInstance().getFileSystemManager(),\n        fileNameParser,\n        schemes,\n        metaStore );\n  }\n\n  public NamedClusterProvider( HadoopFileSystemLocator hadoopFileSystemLocator,\n                               String schema,\n                               FileNameParser fileNameParser ) throws FileSystemException {\n    this( hadoopFileSystemLocator,\n        NamedClusterManager.getInstance(),\n        fileNameParser,\n        schema );\n  }\n\n  public NamedClusterProvider( HadoopFileSystemLocator hadoopFileSystemLocator,\n                               NamedClusterService namedClusterService,\n                               FileNameParser fileNameParser,\n                               String schema ) throws FileSystemException {\n    this(\n        hadoopFileSystemLocator,\n        namedClusterService,\n        (DefaultFileSystemManager) KettleVFS.getInstance().getFileSystemManager(),\n        fileNameParser,\n        new String[] { schema },\n        null );\n  }\n\n  public NamedClusterProvider( HadoopFileSystemLocator hadoopFileSystemLocator,\n                               NamedClusterService namedClusterService,\n                               DefaultFileSystemManager fileSystemManager,\n                               FileNameParser fileNameParser,\n                               String[] schemes,\n                               MetastoreLocator metaStore ) throws FileSystemException {\n    super( 
hadoopFileSystemLocator, namedClusterService, fileSystemManager, fileNameParser, schemes, metaStore );\n  }\n\n\n  @Override\n  protected FileSystem doCreateFileSystem( FileName name, FileSystemOptions fileSystemOptions )\n    throws FileSystemException {\n    GenericFileName genericFileName = (GenericFileName) name.getRoot();\n    String clusterName = genericFileName.getHostName();\n    String path = genericFileName.getPath();\n    NamedCluster namedCluster = getNamedClusterByName( clusterName, fileSystemOptions );\n    try {\n      if ( namedCluster == null ) {\n        namedCluster = namedClusterService.getClusterTemplate();\n      }\n      String generatedUrl = namedCluster\n        .processURLsubstitution( path == null ? \"\" : path,\n          getMetastore( clusterName, fileSystemOptions ), new Variables() );\n      URI uri = URI.create( generatedUrl );\n\n      return new NamedClusterFileSystem( name, uri, fileSystemOptions,\n        hadoopFileSystemLocator.getHadoopFilesystem( namedCluster, uri ) );\n    } catch ( ClusterInitializationException e ) {\n      throw new FileSystemException( e );\n    }\n  }\n\n  @Override\n  public FileSystemConfigBuilder getConfigBuilder() {\n    return NamedClusterConfigBuilder.getInstance( getMetastoreLocator(), namedClusterService );\n  }\n\n  /**\n   * Package visibility for test purposes only.\n   * @param clusterNameToResolve name of the named cluster to resolve\n   * @param fileSystemOptions the FileSystemOptions for the file system in play\n   * @return the named cluster from the metastore, or null if it is not found\n   * @throws FileSystemException if the named cluster cannot be read from the metastore\n   */\n  NamedCluster getNamedClusterByName( String clusterNameToResolve, FileSystemOptions fileSystemOptions )\n    throws FileSystemException {\n    IMetaStore metaStore = getMetastore( clusterNameToResolve, fileSystemOptions );\n    NamedCluster namedCluster = null;\n    try {\n      namedCluster = namedClusterService.read( clusterNameToResolve, metaStore );\n    } catch ( 
MetaStoreException e ) {\n      throw new FileSystemException( e );\n    }\n    return namedCluster;\n  }\n\n  protected synchronized FileSystem getFileSystem( final FileName rootName, final FileSystemOptions fileSystemOptions )\n    throws FileSystemException {\n    FileSystem fs = findFileSystem( rootName, fileSystemOptions );\n    if ( fs == null ) {\n      //  Need to create the file system, and cache it\n      fs = doCreateFileSystem( rootName, fileSystemOptions );\n      addCacheEntry( rootName, fs );\n    }\n    return fs;\n  }\n\n  private String getFileSystemKey( String rootName, FileSystemOptions fileSystemOptions ) {\n    return getEmbeddedMetastoreKey( fileSystemOptions ) == null ? rootName\n      : rootName + getEmbeddedMetastoreKey( fileSystemOptions );\n  }\n\n  private String getEmbeddedMetastoreKey( FileSystemOptions fileSystemOptions ) {\n    return ( (NamedClusterConfigBuilder) getConfigBuilder() ).getEmbeddedMetastoreKey( fileSystemOptions );\n  }\n\n  private IMetaStore getMetastore( String clusterNameToResolve, FileSystemOptions fileSystemOptions ) {\n    String embeddedMetastoreKey = getEmbeddedMetastoreKey( fileSystemOptions );\n    IMetaStore metaStore = ( embeddedMetastoreKey != null ) ? 
getMetastoreLocator().getMetastore( embeddedMetastoreKey )\n      : getMetastoreLocator().getMetastore();\n    if ( metaStore != null ) {\n      try {\n        if ( namedClusterService.read( clusterNameToResolve, metaStore ) != null ) {\n          return metaStore; // The namedCluster agnostic metaStore has this namedCluster, return it.\n        }\n      } catch ( MetaStoreException e ) {\n        // fall through and return the embedded metastore\n      }\n      if ( getMetastoreLocator().getExplicitMetastore( embeddedMetastoreKey ) != null ) {\n        metaStore = getMetastoreLocator().getExplicitMetastore( embeddedMetastoreKey );\n      }\n    }\n    return metaStore;\n  }\n\n  private void addCacheEntry( FileName rootName, FileSystem fs ) throws FileSystemException {\n    addFileSystem( getFileSystemKey( rootName.toString(), fs.getFileSystemOptions() ), fs );\n    String embeddedMetastoreKey = getEmbeddedMetastoreKey( fs.getFileSystemOptions() );\n    Set<FileSystem> fsSet = cacheEntries.get( embeddedMetastoreKey );\n    if ( fsSet == null ) {\n      fsSet = Collections.synchronizedSet( new HashSet<FileSystem>() );\n      cacheEntries.put( embeddedMetastoreKey, fsSet );\n    }\n    fsSet.add( fs );\n  }\n\n  public void closeFileSystem( String embeddedMetastoreKey ) {\n    IMetaStore defaultMetastore = getMetastoreLocator().getMetastore();\n    IMetaStore embeddedMetastore = getMetastoreLocator().getExplicitMetastore( embeddedMetastoreKey );\n    if ( cacheEntries.get( embeddedMetastoreKey ) != null ) {\n      for ( FileSystem fs : cacheEntries.get( embeddedMetastoreKey ) ) {\n        closeFileSystem( fs );\n      }\n    }\n    cacheEntries.remove( embeddedMetastoreKey );\n    namedClusterService.close( defaultMetastore );\n    if ( defaultMetastore != embeddedMetastore ) {\n      namedClusterService.close( embeddedMetastore );\n    }\n  }\n\n  protected synchronized FileSystem findFileSystem( final Comparable<?> key, final FileSystemOptions fileSystemProps ) 
{\n    String editedKey = getFileSystemKey( key.toString(), fileSystemProps );\n    return super.findFileSystem( editedKey, fileSystemProps );\n  }\n\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/main/resources/OSGI-INF/blueprint/blueprint.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\n           xmlns:pen=\"http://www.pentaho.com/xml/schemas/pentaho-blueprint\"\n           xsi:schemaLocation=\"http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\">\n  <bean id=\"hdfsFileNameParser\" class=\"org.pentaho.big.data.impl.vfs.hdfs.HDFSFileNameParser\" scope=\"singleton\" factory-method=\"getInstance\"/>\n  <bean id=\"maprfsFileNameParser\" class=\"org.pentaho.big.data.impl.vfs.hdfs.MapRFileNameParser\" scope=\"singleton\" factory-method=\"getInstance\"/>\n  <bean id=\"azurefsFileNameParser\" class=\"org.pentaho.big.data.impl.vfs.hdfs.AzureHdInsightsFileNameParser\" scope=\"singleton\" factory-method=\"getInstance\"/>\n  <bean class=\"org.pentaho.big.data.impl.vfs.hdfs.HDFSFileProvider\" scope=\"singleton\">\n    <argument ref=\"hadoopFileSystemService\"/>\n    <argument ref=\"namedClusterService\"/>\n    <argument ref=\"hdfsFileNameParser\" />\n    <argument value=\"hdfs\" type=\"java.lang.String\"/>\n  </bean>\n\n  <bean class=\"org.pentaho.big.data.impl.vfs.hdfs.HDFSFileProvider\" scope=\"singleton\">\n    <argument ref=\"hadoopFileSystemService\"/>\n    <argument ref=\"namedClusterService\"/>\n    <argument ref=\"maprfsFileNameParser\" />\n    <argument value=\"maprfs\" type=\"java.lang.String\"/>\n  </bean>\n\n  <bean class=\"org.pentaho.big.data.impl.vfs.hdfs.HDFSFileProvider\" scope=\"singleton\">\n    <argument ref=\"hadoopFileSystemService\"/>\n    <argument ref=\"namedClusterService\"/>\n    <argument ref=\"maprfsFileNameParser\" />\n    <argument value=\"escalefs\" type=\"java.lang.String\"/>\n  </bean>\n\n  <bean class=\"org.pentaho.big.data.impl.vfs.hdfs.HDFSFileProvider\" scope=\"singleton\">\n    <argument ref=\"hadoopFileSystemService\"/>\n    <argument ref=\"namedClusterService\"/>\n    <argument 
ref=\"azurefsFileNameParser\" />\n    <argument value=\"wasb\" type=\"java.lang.String\"/>\n  </bean>\n\n  <bean class=\"org.pentaho.big.data.impl.vfs.hdfs.HDFSFileProvider\" scope=\"singleton\">\n    <argument ref=\"hadoopFileSystemService\"/>\n    <argument ref=\"namedClusterService\"/>\n    <argument ref=\"azurefsFileNameParser\" />\n    <argument value=\"wasbs\" type=\"java.lang.String\"/>\n  </bean>\n\n  <bean class=\"org.pentaho.big.data.impl.vfs.hdfs.nc.NamedClusterProvider\" scope=\"singleton\">\n    <argument ref=\"hadoopFileSystemService\"/>\n    <argument ref=\"namedClusterService\"/>\n    <argument ref=\"hdfsFileNameParser\" />\n    <argument value=\"hc\" type=\"java.lang.String\"/>\n  </bean>\n\n  <bean class=\"org.pentaho.big.data.impl.vfs.hdfs.HDFSFileProvider\" scope=\"singleton\">\n    <argument ref=\"hadoopFileSystemService\"/>\n    <argument ref=\"namedClusterService\"/>\n    <argument ref=\"azurefsFileNameParser\" />\n    <argument value=\"abfs\" type=\"java.lang.String\"/>\n  </bean>\n\n  <reference id=\"hadoopFileSystemService\" interface=\"org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator\"/>\n  <reference id=\"namedClusterService\" interface=\"org.pentaho.hadoop.shim.api.cluster.NamedClusterService\"/>\n</blueprint>"
  },
  {
    "path": "impl/vfs-hdfs/src/test/java/org/pentaho/big/data/impl/vfs/hdfs/AzureFileNameParserTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\npackage org.pentaho.big.data.impl.vfs.hdfs;\n\nimport org.apache.commons.vfs2.FileName;\nimport org.apache.commons.vfs2.FileSystemManager;\nimport org.apache.commons.vfs2.VFS;\nimport org.apache.commons.vfs2.provider.FileNameParser;\nimport org.apache.commons.vfs2.provider.UriParser;\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.MockedStatic;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.mockito.stubbing.Answer;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.anyInt;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\n@RunWith(MockitoJUnitRunner.class)\npublic class AzureFileNameParserTest {\n\n  private static final String BASE_PATH = \"//\";\n  private static final String WASB_PREFIX = \"wasb\";\n  private static final String WASB_BASE_URI = WASB_PREFIX + \":\" + BASE_PATH;\n  private static final String ABFS_PREFIX = \"abfs\";\n  private static final String ABFS_BASE_URI = ABFS_PREFIX + \":\" + BASE_PATH;\n\n  private FileSystemManager fsm;\n\n  private MockedStatic<VFS> vfsMockedStatic;\n  private MockedStatic<UriParser> uriParserMockedStatic;\n\n  @Before\n  public void setUp() {\n    vfsMockedStatic = Mockito.mockStatic( VFS.class );\n    uriParserMockedStatic = 
Mockito.mockStatic( UriParser.class );\n    uriParserMockedStatic.when( () -> UriParser.encode( anyString(), any( char[].class ) ) ).thenCallRealMethod();\n    uriParserMockedStatic.when( () -> UriParser.decode( anyString() ) ).thenCallRealMethod();\n    uriParserMockedStatic.when( () -> UriParser.appendEncoded( any( StringBuilder.class ), anyString(), any( char[].class ) ) ).thenCallRealMethod();\n    uriParserMockedStatic.when( () -> UriParser.canonicalizePath( any( StringBuilder.class ), anyInt(), anyInt(), any( FileNameParser.class ) ) ).thenCallRealMethod();\n\n    fsm = mock( FileSystemManager.class );\n    vfsMockedStatic.when( VFS::getManager ).thenReturn( fsm );\n  }\n\n  @After\n  public void cleanup() {\n    vfsMockedStatic.close();\n    uriParserMockedStatic.close();\n    Mockito.validateMockitoUsage();\n  }\n\n  @Test\n  public void testDefaultPort() {\n    assertEquals( -1, AzureHdInsightsFileNameParser.getInstance().getDefaultPort() );\n  }\n\n  @Test\n  public void rootPathNoClusterNameWasb() throws Exception {\n    final String FILEPATH = \"/\";\n    final String URI = WASB_BASE_URI + FILEPATH;\n\n    buildExtractSchemeMocks( WASB_PREFIX, URI, BASE_PATH + FILEPATH );\n\n    FileNameParser parser = AzureHdInsightsFileNameParser.getInstance();\n    FileName name = parser.parseUri( null, null, URI );\n\n    assertEquals( URI, name.getURI() );\n    assertEquals( WASB_PREFIX, name.getScheme() );\n  }\n\n  @Test\n  public void withPathWasb() throws Exception {\n    final String FILEPATH = \"/my/file/path\";\n    final String URI = WASB_BASE_URI + FILEPATH;\n\n    buildExtractSchemeMocks( WASB_PREFIX, URI, BASE_PATH + FILEPATH );\n\n    FileNameParser parser = AzureHdInsightsFileNameParser.getInstance();\n    FileName name = parser.parseUri( null, null, URI );\n\n    assertEquals( URI, name.getURI() );\n    assertEquals( WASB_PREFIX, name.getScheme() );\n    assertEquals( FILEPATH, name.getPath() );\n  }\n\n  @Test\n  public void 
withPathAndClusterNameWasb() throws Exception {\n    final String HOST = \"cluster2\";\n    final String FILEPATH = \"/my/file/path\";\n    final String URI = WASB_BASE_URI + HOST + FILEPATH;\n\n    buildExtractSchemeMocks( WASB_PREFIX, URI, BASE_PATH + HOST + FILEPATH );\n\n    FileNameParser parser = AzureHdInsightsFileNameParser.getInstance();\n    FileName name = parser.parseUri( null, null, URI );\n\n    assertEquals( URI, name.getURI() );\n    assertEquals( WASB_PREFIX, name.getScheme() );\n    assertTrue( name.getURI().startsWith( WASB_PREFIX + \":\" + BASE_PATH + HOST ) );\n    assertEquals( FILEPATH, name.getPath() );\n  }\n\n  @Test\n  public void rootPathNoClusterNameAbfs() throws Exception {\n    final String FILEPATH = \"/\";\n    final String URI = ABFS_BASE_URI + FILEPATH;\n\n    buildExtractSchemeMocks( ABFS_PREFIX, URI, BASE_PATH + FILEPATH );\n\n    FileNameParser parser = AzureHdInsightsFileNameParser.getInstance();\n    FileName name = parser.parseUri( null, null, URI );\n\n    assertEquals( URI, name.getURI() );\n    assertEquals( ABFS_PREFIX, name.getScheme() );\n  }\n\n  @Test\n  public void withPathAbfs() throws Exception {\n    final String FILEPATH = \"/my/file/path\";\n    final String URI = ABFS_BASE_URI + FILEPATH;\n\n    buildExtractSchemeMocks( ABFS_PREFIX, URI, BASE_PATH + FILEPATH );\n\n    FileNameParser parser = AzureHdInsightsFileNameParser.getInstance();\n    FileName name = parser.parseUri( null, null, URI );\n\n    assertEquals( URI, name.getURI() );\n    assertEquals( ABFS_PREFIX, name.getScheme() );\n    assertEquals( FILEPATH, name.getPath() );\n  }\n\n  @Test\n  public void withPathAndClusterNameAbfs() throws Exception {\n    final String HOST = \"cluster2\";\n    final String FILEPATH = \"/my/file/path\";\n    final String URI = ABFS_BASE_URI + HOST + FILEPATH;\n\n    buildExtractSchemeMocks( ABFS_PREFIX, URI, BASE_PATH + HOST + FILEPATH );\n\n    FileNameParser parser = AzureHdInsightsFileNameParser.getInstance();\n    
FileName name = parser.parseUri( null, null, URI );\n\n    assertEquals( URI, name.getURI() );\n    assertEquals( ABFS_PREFIX, name.getScheme() );\n    assertTrue( name.getURI().startsWith( ABFS_PREFIX + \":\" + BASE_PATH + HOST ) );\n    assertEquals( FILEPATH, name.getPath() );\n  }\n\n  private Answer buildSchemeAnswer( String prefix, String buildPath ) {\n    return invocation -> {\n      Object[] args = invocation.getArguments();\n      ( ( StringBuilder ) args[2] ).append( buildPath );\n      return prefix;\n    };\n  }\n\n  private void buildExtractSchemeMocks( String prefix, String fullPath, String pathWithoutPrefix ) {\n    String[] schemes = {\"wasb\", \"abfs\"};\n    when( fsm.getSchemes() ).thenReturn( schemes );\n    uriParserMockedStatic.when( () -> UriParser.extractScheme( eq( schemes ), eq( fullPath ), any( StringBuilder.class ) ) )\n      .thenAnswer( buildSchemeAnswer( prefix, pathWithoutPrefix ) );\n  }\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/test/java/org/pentaho/big/data/impl/vfs/hdfs/HDFSFileNameParserTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.vfs.hdfs;\n\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileSystemManager;\nimport org.apache.commons.vfs2.VFS;\nimport org.apache.commons.vfs2.provider.FileNameParser;\nimport org.apache.commons.vfs2.provider.URLFileName;\nimport org.apache.commons.vfs2.provider.URLFileNameParser;\nimport org.apache.commons.vfs2.provider.UriParser;\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Rule;\nimport org.junit.Test;\nimport org.junit.rules.ExpectedException;\nimport org.junit.runner.RunWith;\nimport org.mockito.MockedStatic;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.mockito.stubbing.Answer;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.ArgumentMatchers.anyInt;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 8/7/15.\n */\n@RunWith(MockitoJUnitRunner.class)\npublic class HDFSFileNameParserTest {\n  private static final String PREFIX = \"hdfs\";\n  private static final String BASE_PATH = \"//\";\n  private static final String BASE_URI = PREFIX + \":\" + BASE_PATH;\n\n  private FileSystemManager fsm;\n  private MockedStatic<VFS> vfsMockedStatic;\n  private MockedStatic<UriParser> uriParserMockedStatic;\n\n  @Rule\n  public final ExpectedException exception = 
ExpectedException.none();\n\n  @Before\n  public void setUp() {\n    vfsMockedStatic = Mockito.mockStatic( VFS.class );\n    uriParserMockedStatic = Mockito.mockStatic( UriParser.class );\n    uriParserMockedStatic.when( () -> UriParser.encode( anyString(), any( char[].class ) ) ).thenCallRealMethod();\n    uriParserMockedStatic.when( () -> UriParser.decode( anyString() ) ).thenCallRealMethod();\n    uriParserMockedStatic.when( () -> UriParser.appendEncoded( any( StringBuilder.class ), anyString(), any( char[].class ) ) ).thenCallRealMethod();\n    uriParserMockedStatic.when( () -> UriParser.canonicalizePath( any( StringBuilder.class ), anyInt(), anyInt(), any( FileNameParser.class ) ) ).thenCallRealMethod();\n    uriParserMockedStatic.when( () -> UriParser.extractQueryString( any( StringBuilder.class ) ) ).thenCallRealMethod();\n    uriParserMockedStatic.when( () -> UriParser.fixSeparators( any( StringBuilder.class ) ) ).thenCallRealMethod();\n    uriParserMockedStatic.when( () -> UriParser.extractScheme( anyString(), any( StringBuilder.class ) ) ).thenCallRealMethod();\n\n    fsm = mock( FileSystemManager.class );\n    vfsMockedStatic.when( VFS::getManager ).thenReturn( fsm );\n  }\n\n  @After\n  public void cleanup() {\n    vfsMockedStatic.close();\n    uriParserMockedStatic.close();\n    Mockito.validateMockitoUsage();\n  }\n\n  @Test\n  public void testDefaultPort() {\n    assertEquals( -1, HDFSFileNameParser.getInstance().getDefaultPort() );\n  }\n\n  @Test\n  public void testParseUriNullInput() throws Exception {\n    final String FILEPATH = \"test\";\n    final String URI = BASE_URI + FILEPATH;\n\n    buildExtractSchemeMocks( PREFIX, URI, BASE_PATH + FILEPATH );\n\n    HDFSFileNameParser.getInstance().parseUri( null, null, URI );\n  }\n\n  @Test\n  public void testParseUriMixedCase() throws Exception {\n    final String FILEPATH = \"testUpperCaseHost\";\n    final String URI = BASE_URI + FILEPATH;\n\n    buildExtractSchemeMocks( PREFIX, URI, BASE_PATH + 
FILEPATH );\n\n    URLFileName urlFileName =\n      ( URLFileName ) HDFSFileNameParser.getInstance().parseUri( null, null, URI );\n    assertEquals( \"testUpperCaseHost\", urlFileName.getHostName() );\n  }\n\n  @Test\n  public void testParseUriMixedCaseLongName() throws Exception {\n    final String FILEPATH = \"testUpperCaseHost/long/test/name\";\n    final String URI = BASE_URI + FILEPATH;\n\n    buildExtractSchemeMocks( PREFIX, URI, BASE_PATH + FILEPATH );\n\n    URLFileName urlFileName =\n      ( URLFileName ) HDFSFileNameParser.getInstance().parseUri( null, null, URI );\n    assertEquals( \"testUpperCaseHost\", urlFileName.getHostName() );\n  }\n\n  @Test\n  public void testParseUriThrowExceptionNoProtocol() throws Exception {\n    final String FILEPATH = \"testUpperCaseHost/long/test/name\";\n    exception.expect( FileSystemException.class );\n    buildExtractSchemeMocks( null, FILEPATH, FILEPATH );\n    HDFSFileNameParser.getInstance().parseUri( null, null, \"testUpperCaseHost/long/test/name\" );\n  }\n\n  @Test\n  public void testParseUriUserNameFilePath() throws Exception {\n    final String FILEPATH = \"root:password@testUpperCaseHost:8080/long/test/name\";\n    final String URI = BASE_URI + FILEPATH;\n\n    buildExtractSchemeMocks( PREFIX, URI, BASE_PATH + FILEPATH );\n\n    URLFileName hdfsFileName =\n      ( URLFileName ) HDFSFileNameParser.getInstance()\n        .parseUri( null, null, URI );\n    URLFileName urlFileName = ( URLFileName ) new URLFileNameParser( 7000 ).parseUri( null, null, URI );\n    assertEquals( 8080, hdfsFileName.getPort() );\n    assertEquals( \"root\", hdfsFileName.getUserName() );\n    assertEquals( \"/long/test/name\", hdfsFileName.getPath() );\n    assertEquals( \"password\", hdfsFileName.getPassword() );\n    assertEquals( urlFileName.getType(), hdfsFileName.getType() );\n    assertEquals( urlFileName.getQueryString(), hdfsFileName.getQueryString() );\n  }\n\n  private Answer buildSchemeAnswer( String prefix, String buildPath 
) {\n    return invocation -> {\n      Object[] args = invocation.getArguments();\n      ( ( StringBuilder ) args[2] ).append( buildPath );\n      return prefix;\n    };\n  }\n\n  private void buildExtractSchemeMocks( String prefix, String fullPath, String pathWithoutPrefix ) {\n    String[] schemes = {\"hdfs\"};\n    when( fsm.getSchemes() ).thenReturn( schemes );\n    uriParserMockedStatic.when( () -> UriParser.extractScheme( eq( schemes ), eq( fullPath ), any( StringBuilder.class ) ) )\n      .thenAnswer( buildSchemeAnswer( prefix, pathWithoutPrefix ) );\n  }\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/test/java/org/pentaho/big/data/impl/vfs/hdfs/HDFSFileObjectTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.vfs.hdfs;\n\nimport org.apache.commons.vfs2.FileName;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileType;\nimport org.apache.commons.vfs2.provider.AbstractFileName;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.ArgumentCaptor;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileStatus;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystem;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemPath;\n\n\nimport java.io.InputStream;\nimport java.io.OutputStream;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 8/7/15.\n */\npublic class HDFSFileObjectTest {\n  private AbstractFileName abstractFileName;\n  private HDFSFileSystem hdfsFileSystem;\n  private HadoopFileSystem hadoopFileSystem;\n  private HDFSFileObject hdfsFileObject;\n  private HadoopFileSystemPath hadoopFileSystemPath;\n\n  @Before\n  public void setup() throws FileSystemException {\n    abstractFileName = mock( AbstractFileName.class );\n    hadoopFileSystem = mock( HadoopFileSystem.class );\n    hdfsFileSystem =\n      new HDFSFileSystem( mock( AbstractFileName.class ), null, hadoopFileSystem );\n    hdfsFileObject = new HDFSFileObject( abstractFileName, hdfsFileSystem );\n    String path = 
\"fake-path\";\n    hadoopFileSystemPath = mock( HadoopFileSystemPath.class );\n    when( abstractFileName.getPath() ).thenReturn( path );\n    when( hadoopFileSystem.getPath( path ) ).thenReturn( hadoopFileSystemPath );\n  }\n\n  @Test\n  public void testGetContentSize() throws Exception {\n    long len = 321L;\n    HadoopFileStatus hadoopFileStatus = mock( HadoopFileStatus.class );\n    when( hadoopFileSystem.getFileStatus( hadoopFileSystemPath ) ).thenReturn( hadoopFileStatus );\n    when( hadoopFileStatus.getLen() ).thenReturn( len );\n    assertEquals( len, hdfsFileObject.doGetContentSize() );\n  }\n\n  @Test\n  public void testDoGetOutputStreamAppend() throws Exception {\n    OutputStream outputStream = mock( OutputStream.class );\n    when( hadoopFileSystem.append( hadoopFileSystemPath ) ).thenReturn( outputStream );\n    assertEquals( outputStream, hdfsFileObject.doGetOutputStream( true ) );\n  }\n\n  @Test\n  public void testDoGetOutputStreamCreate() throws Exception {\n    OutputStream outputStream = mock( OutputStream.class );\n    when( hadoopFileSystem.create( hadoopFileSystemPath ) ).thenReturn( outputStream );\n    assertEquals( outputStream, hdfsFileObject.doGetOutputStream( false ) );\n  }\n\n  @Test\n  public void testDoGetInputStream() throws Exception {\n    InputStream inputStream = mock( InputStream.class );\n    when( hadoopFileSystem.open( hadoopFileSystemPath ) ).thenReturn( inputStream );\n    assertEquals( inputStream, hdfsFileObject.doGetInputStream() );\n  }\n\n  @Test\n  public void testDoGetTypeFile() throws Exception {\n    HadoopFileStatus hadoopFileStatus = mock( HadoopFileStatus.class );\n    when( hadoopFileSystem.getFileStatus( hadoopFileSystemPath ) ).thenReturn( hadoopFileStatus );\n    when( hadoopFileStatus.isDir() ).thenReturn( false );\n    assertEquals( FileType.FILE, hdfsFileObject.doGetType() );\n  }\n\n  @Test\n  public void testDoGetTypeFolder() throws Exception {\n    HadoopFileStatus hadoopFileStatus = mock( 
HadoopFileStatus.class );\n    when( hadoopFileSystem.getFileStatus( hadoopFileSystemPath ) ).thenReturn( hadoopFileStatus );\n    when( hadoopFileStatus.isDir() ).thenReturn( true );\n    assertEquals( FileType.FOLDER, hdfsFileObject.doGetType() );\n  }\n\n  @Test\n  public void testDoGetTypeImaginary() throws Exception {\n    assertEquals( FileType.IMAGINARY, hdfsFileObject.doGetType() );\n  }\n\n  @Test\n  public void testDoCreateFolder() throws Exception {\n    hdfsFileObject.doCreateFolder();\n    verify( hadoopFileSystem ).mkdirs( hadoopFileSystemPath );\n  }\n\n  @Test\n  public void testDoRename() throws Exception {\n    FileObject fileObject = mock( FileObject.class );\n    FileName fileName = mock( FileName.class );\n    when( fileObject.getName() ).thenReturn( fileName );\n    String path2 = \"fake-path-2\";\n    when( fileName.getPath() ).thenReturn( path2 );\n    HadoopFileSystemPath newPath = mock( HadoopFileSystemPath.class );\n    when( hadoopFileSystem.getPath( path2 ) ).thenReturn( newPath );\n    hdfsFileObject.doRename( fileObject );\n    verify( hadoopFileSystem ).rename( hadoopFileSystemPath, newPath );\n  }\n\n  @Test\n  public void testDoGetLastModifiedTime() throws Exception {\n    long modificationTime = 8988L;\n    HadoopFileStatus hadoopFileStatus = mock( HadoopFileStatus.class );\n    when( hadoopFileSystem.getFileStatus( hadoopFileSystemPath ) ).thenReturn( hadoopFileStatus );\n    when( hadoopFileStatus.getModificationTime() ).thenReturn( modificationTime );\n    assertEquals( modificationTime, hdfsFileObject.doGetLastModifiedTime() );\n  }\n\n  @Test\n  public void testDoSetLastModifiedTime() throws Exception {\n    long modtime = 48933L;\n    long start = System.currentTimeMillis();\n    assertTrue( hdfsFileObject.doSetLastModifiedTime( modtime ) );\n    ArgumentCaptor<Long> longArgumentCaptor = ArgumentCaptor.forClass( Long.class );\n    verify( hadoopFileSystem ).setTimes( eq( hadoopFileSystemPath ), eq( modtime ), 
longArgumentCaptor.capture() );\n    Long accessTime = longArgumentCaptor.getValue();\n    assertTrue( start <= accessTime );\n    assertTrue( accessTime <= System.currentTimeMillis() );\n  }\n\n  @Test\n  public void testDoListChildren() throws Exception {\n    String childPathName = \"fake-path-child\";\n    testDoListChildrenInternal( childPathName );\n  }\n\n  @Test\n  public void testDoListChildrenWithSpaces() throws Exception {\n    String childPathName = \"fake path child with spaces\";\n    testDoListChildrenInternal( childPathName );\n  }\n\n  private void testDoListChildrenInternal( String childPathName ) throws Exception {\n    HadoopFileStatus hadoopFileStatus = mock( HadoopFileStatus.class );\n    HadoopFileStatus[] hadoopFileStatuses = {\n      hadoopFileStatus\n    };\n    HadoopFileSystemPath childPath = mock( HadoopFileSystemPath.class );\n    when( hadoopFileStatus.getPath() ).thenReturn( childPath );\n    when( childPath.getName() ).thenReturn( childPathName );\n    when( hadoopFileSystem.listStatus( hadoopFileSystemPath ) ).thenReturn( hadoopFileStatuses );\n    String[] children = hdfsFileObject.doListChildren();\n    assertEquals( 1, children.length );\n    assertEquals( childPathName, children[ 0 ] );\n  }\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/test/java/org/pentaho/big/data/impl/vfs/hdfs/HDFSFileProviderTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.vfs.hdfs;\n\nimport org.apache.commons.vfs2.FileName;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.impl.DefaultFileSystemManager;\nimport org.apache.commons.vfs2.provider.FileNameParser;\nimport org.apache.commons.vfs2.provider.GenericFileName;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.ArgumentCaptor;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\n\nimport java.net.URI;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 8/7/15.\n */\npublic class HDFSFileProviderTest {\n  private HadoopFileSystemLocator hadoopFileSystemLocator;\n  private NamedClusterService namedClusterService;\n  private DefaultFileSystemManager defaultFileSystemManager;\n  private HDFSFileProvider hdfsFileProvider;\n  private NamedCluster namedCluster;\n  private MetastoreLocator metaStoreLocator;\n  private FileNameParser fileNameParser;\n\n  @Before\n  public void setup() throws FileSystemException {\n    hadoopFileSystemLocator = mock( HadoopFileSystemLocator.class 
);\n    namedClusterService = mock( NamedClusterService.class );\n    namedCluster = mock( NamedCluster.class );\n    when( namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n    defaultFileSystemManager = mock( DefaultFileSystemManager.class );\n    metaStoreLocator = mock( MetastoreLocator.class );\n    fileNameParser = mock( FileNameParser.class );\n    hdfsFileProvider = new HDFSFileProvider(\n      hadoopFileSystemLocator, namedClusterService, defaultFileSystemManager,\n      fileNameParser,  new String[] { HDFSFileProvider.SCHEME, HDFSFileProvider.MAPRFS }, metaStoreLocator );\n    ArgumentCaptor<String[]> argumentCaptor = ArgumentCaptor.forClass( String[].class );\n    verify( defaultFileSystemManager )\n      .addProvider( argumentCaptor.capture(), eq( hdfsFileProvider ) );\n    String[] schemes = argumentCaptor.getValue();\n    assertEquals( 2, schemes.length );\n    assertEquals( HDFSFileProvider.SCHEME, schemes[ 0 ] );\n    assertEquals( HDFSFileProvider.MAPRFS, schemes[ 1 ] );\n  }\n\n  @Test\n  public void testDoCreateFileSystemNoPort() throws FileSystemException, ClusterInitializationException {\n    String testHostname = \"testHostname\";\n    FileName fileName = mock( FileName.class );\n    GenericFileName genericFileName = mock( GenericFileName.class );\n    when( fileName.getURI() ).thenReturn( \"\" );\n    when( fileName.getRoot() ).thenReturn( genericFileName );\n    when( fileName.getScheme() ).thenReturn( HDFSFileProvider.MAPRFS );\n    when( genericFileName.getHostName() ).thenReturn( testHostname );\n    when( genericFileName.getPort() ).thenReturn( -1 );\n    assertTrue( hdfsFileProvider.doCreateFileSystem( fileName, null ) instanceof HDFSFileSystem );\n    verify( hadoopFileSystemLocator ).getHadoopFilesystem( namedCluster, URI.create( \"\" ) );\n    verify( namedClusterService ).updateNamedClusterTemplate( testHostname, -1, true );\n  }\n\n  @Test\n  public void testGetCapabilities() {\n    assertEquals( 
HDFSFileProvider.capabilities, hdfsFileProvider.getCapabilities() );\n  }\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/test/java/org/pentaho/big/data/impl/vfs/hdfs/HDFSFileSystemTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.vfs.hdfs;\n\nimport org.apache.commons.vfs2.Capability;\nimport org.apache.commons.vfs2.FileName;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.provider.AbstractFileName;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystem;\n\nimport java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.Collections;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 8/7/15.\n */\npublic class HDFSFileSystemTest {\n  private FileName rootName;\n  private HadoopFileSystem hadoopFileSystem;\n  private HDFSFileSystem hdfsFileSystem;\n\n  @Before\n  public void setup() {\n    rootName = mock( FileName.class );\n    hadoopFileSystem = mock( HadoopFileSystem.class );\n    hdfsFileSystem = new HDFSFileSystem( rootName, null, hadoopFileSystem );\n  }\n\n  @Test\n  public void testAddCapabilities() {\n    Collection caps = mock( Collection.class );\n    hdfsFileSystem.addCapabilities( caps );\n    verify( caps ).addAll( HDFSFileProvider.capabilities );\n  }\n\n  @Test\n  public void testAddAppendCapabilities() {\n    Collection caps = new ArrayList(  );\n    when( hadoopFileSystem.getProperty( 
eq( \"dfs.support.append\" ), anyString() ) ).thenReturn( \"false\" );\n    hdfsFileSystem.addCapabilities( caps );\n    Collection res = new ArrayList( HDFSFileProvider.capabilities );\n    // JUnit's assertArrayEquals takes the expected array first, then the actual\n    assertArrayEquals( Collections.unmodifiableCollection( res ).toArray(), caps.toArray() );\n    caps = new ArrayList(  );\n    when( hadoopFileSystem.getProperty( eq( \"dfs.support.append\" ), anyString() ) ).thenReturn( \"true\" );\n    hdfsFileSystem.addCapabilities( caps );\n    res.add( Capability.APPEND_CONTENT );\n    assertArrayEquals( Collections.unmodifiableCollection( res ).toArray(), caps.toArray() );\n  }\n\n  @Test\n  public void testCreateFile() throws Exception {\n    assertTrue( hdfsFileSystem.createFile( mock( AbstractFileName.class ) ) instanceof HDFSFileObject );\n  }\n\n  @Test\n  public void testGetHDFSFileSystem() throws FileSystemException {\n    assertEquals( hadoopFileSystem, hdfsFileSystem.getHDFSFileSystem() );\n  }\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/test/java/org/pentaho/big/data/impl/vfs/hdfs/MapRFileNameParserTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\npackage org.pentaho.big.data.impl.vfs.hdfs;\n\nimport org.apache.commons.vfs2.FileName;\nimport org.apache.commons.vfs2.FileSystemManager;\nimport org.apache.commons.vfs2.VFS;\nimport org.apache.commons.vfs2.provider.FileNameParser;\nimport org.apache.commons.vfs2.provider.UriParser;\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.MockedStatic;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.mockito.stubbing.Answer;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\n@RunWith(MockitoJUnitRunner.class)\npublic class MapRFileNameParserTest {\n\n  private static final String PREFIX = \"maprfs\";\n  private static final String BASE_PATH = \"//\";\n  private static final String BASE_URI = PREFIX + \":\" + BASE_PATH;\n\n  private FileSystemManager fsm;\n  private MockedStatic<VFS> vfsMockedStatic;\n  private MockedStatic<UriParser> uriParserMockedStatic;\n\n  @Before\n  public void setUp() {\n    vfsMockedStatic = Mockito.mockStatic( VFS.class );\n    uriParserMockedStatic = Mockito.mockStatic( UriParser.class );\n    uriParserMockedStatic.when( () -> UriParser.encode( anyString(), any( char[].class ) ) ).thenCallRealMethod();\n    uriParserMockedStatic.when( () -> 
UriParser.decode( anyString() ) ).thenCallRealMethod();\n    uriParserMockedStatic.when( () -> UriParser.appendEncoded( any( StringBuilder.class ), anyString(), any( char[].class ) ) ).thenCallRealMethod();\n\n    fsm = mock( FileSystemManager.class );\n    vfsMockedStatic.when( VFS::getManager ).thenReturn( fsm );\n  }\n\n  @After\n  public void cleanup() {\n    vfsMockedStatic.close();\n    uriParserMockedStatic.close();\n    Mockito.validateMockitoUsage();\n  }\n\n  @Test\n  public void testDefaultPort() {\n    assertEquals( -1, MapRFileNameParser.getInstance().getDefaultPort() );\n  }\n\n  @Test\n  public void rootPathNoClusterName() throws Exception {\n    final String FILEPATH = \"/\";\n    final String URI = BASE_URI + FILEPATH;\n\n    buildExtractSchemeMocks( PREFIX, URI, BASE_PATH + FILEPATH );\n\n    FileNameParser parser = MapRFileNameParser.getInstance();\n    FileName name = parser.parseUri( null, null, URI );\n\n    assertEquals( URI, name.getURI() );\n    assertEquals( PREFIX, name.getScheme() );\n  }\n\n  @Test\n  public void withPath() throws Exception {\n    final String FILEPATH = \"/my/file/path\";\n    final String URI = BASE_URI + FILEPATH;\n\n    buildExtractSchemeMocks( PREFIX, URI, BASE_PATH + FILEPATH );\n\n    FileNameParser parser = MapRFileNameParser.getInstance();\n    FileName name = parser.parseUri( null, null, URI );\n\n    assertEquals( URI, name.getURI() );\n    assertEquals( PREFIX, name.getScheme() );\n    assertEquals( FILEPATH, name.getPath() );\n  }\n\n  @Test\n  public void withPathAndClusterName() throws Exception {\n    final String HOST = \"cluster2\";\n    final String FILEPATH = \"/my/file/path\";\n    final String URI = BASE_URI + HOST + FILEPATH;\n\n    buildExtractSchemeMocks( PREFIX, URI, BASE_PATH + HOST + FILEPATH );\n\n    FileNameParser parser = MapRFileNameParser.getInstance();\n    FileName name = parser.parseUri( null, null, URI );\n\n    assertEquals( URI, name.getURI() );\n    assertEquals( PREFIX, 
name.getScheme() );\n    assertTrue( name.getURI().startsWith( PREFIX + \":\" + BASE_PATH + HOST ) );\n    assertEquals( FILEPATH, name.getPath() );\n  }\n\n  private Answer buildSchemeAnswer( String prefix, String buildPath ) {\n    return invocation -> {\n      Object[] args = invocation.getArguments();\n      ( ( StringBuilder ) args[2] ).append( buildPath );\n      return prefix;\n    };\n  }\n\n  private void buildExtractSchemeMocks( String prefix, String fullPath, String pathWithoutPrefix ) {\n    String[] schemes = {\"maprfs\"};\n    when( fsm.getSchemes() ).thenReturn( schemes );\n    uriParserMockedStatic.when( () -> UriParser.extractScheme( eq( schemes ), eq( fullPath ), any( StringBuilder.class ) ) )\n      .thenAnswer( buildSchemeAnswer( prefix, pathWithoutPrefix ) );\n  }\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/test/java/org/pentaho/big/data/impl/vfs/hdfs/nc/NamedClusterConfigBuilderTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.vfs.hdfs.nc;\n\nimport org.apache.commons.vfs2.FileSystemConfigBuilder;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\n\nimport java.util.Arrays;\n\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.never;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\npublic class NamedClusterConfigBuilderTest {\n\n  private NamedClusterService namedClusterService = mock( NamedClusterService.class );\n\n  private MetastoreLocator metastoreLocator = mock( MetastoreLocator.class );\n\n  private IMetaStore metastore = mock( IMetaStore.class );\n\n  private NamedCluster namedCluster = mock( NamedCluster.class );\n\n  @Before\n  public void setUp() throws MetaStoreException {\n    when( metastoreLocator.getMetastore() ).thenReturn( metastore );\n  }\n\n  @Test\n  public void testSnapshotNamedClusterToMetaStore() throws MetaStoreException {\n    when( namedClusterService.list( eq( metastore ) ) ).thenReturn( Arrays.asList( namedCluster ) );\n\n    NamedClusterConfigBuilder builder = new NamedClusterConfigBuilder( metastoreLocator, namedClusterService );\n    
builder.snapshotNamedClusterToMetaStore( metastore );\n\n    verify( namedClusterService ).create( eq( namedCluster ), eq( metastore ) );\n  }\n\n  @Test\n  public void testSnapshotNamedClusterToMetaStore_staticInit() throws MetaStoreException {\n    when( namedClusterService.list( eq( metastore ) ) ).thenReturn( Arrays.asList( namedCluster ) );\n\n    FileSystemConfigBuilder builder = NamedClusterConfigBuilder.getInstance( metastoreLocator, namedClusterService );\n    assertTrue( builder instanceof NamedClusterConfigBuilder );\n    NamedClusterConfigBuilder ncbuilder = (NamedClusterConfigBuilder) builder;\n    ncbuilder.snapshotNamedClusterToMetaStore( metastore );\n\n    verify( namedClusterService ).create( eq( namedCluster ), eq( metastore ) );\n  }\n\n  @Test\n  public void testSnapshotNamedClusterToMetaStore_MetastoreDoesNotHaveNC() throws MetaStoreException {\n    when( namedClusterService.list( eq( metastore ) ) ).thenReturn( null );\n\n    NamedClusterConfigBuilder builder = new NamedClusterConfigBuilder( metastoreLocator, namedClusterService );\n    builder.snapshotNamedClusterToMetaStore( metastore );\n\n    verify( namedClusterService, never() ).create( eq( namedCluster ), eq( metastore ) );\n  }\n\n  @Test\n  public void testSnapshotNamedClusterToMetaStore_staticInit_MetastoreDoesNotHaveNC() throws MetaStoreException {\n    when( namedClusterService.list( eq( metastore ) ) ).thenReturn( null );\n\n    FileSystemConfigBuilder builder = NamedClusterConfigBuilder.getInstance( metastoreLocator, namedClusterService );\n    assertTrue( builder instanceof NamedClusterConfigBuilder );\n    NamedClusterConfigBuilder ncbuilder = (NamedClusterConfigBuilder) builder;\n    ncbuilder.snapshotNamedClusterToMetaStore( metastore );\n\n    verify( namedClusterService, never() ).create( eq( namedCluster ), eq( metastore ) );\n  }\n\n}\n"
  },
  {
    "path": "impl/vfs-hdfs/src/test/java/org/pentaho/big/data/impl/vfs/hdfs/nc/NamedClusterProviderTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.impl.vfs.hdfs.nc;\n\nimport org.apache.commons.vfs2.FileSystem;\nimport org.apache.commons.vfs2.FileSystemConfigBuilder;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.impl.DefaultFileSystemManager;\nimport org.apache.commons.vfs2.provider.FileNameParser;\nimport org.apache.commons.vfs2.provider.url.UrlFileName;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.AdditionalMatchers;\nimport org.pentaho.big.data.impl.vfs.hdfs.HDFSFileSystem;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystem;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\n\nimport java.net.URI;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.ArgumentMatchers.isNull;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.never;\nimport 
static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\npublic class NamedClusterProviderTest {\n\n  private final NamedClusterService ncService = mock( NamedClusterService.class );\n\n  private final MetastoreLocator metastoreLocator = mock( MetastoreLocator.class );\n\n  private final IMetaStore metastore = mock( IMetaStore.class );\n\n  private final NamedCluster nc = mock( NamedCluster.class );\n\n  private final NamedCluster ncTemplate = mock( NamedCluster.class );\n\n  private final HadoopFileSystemLocator hdfsLocator = mock( HadoopFileSystemLocator.class );\n\n  private final FileNameParser fileNameParser = mock( FileNameParser.class );\n\n  private final DefaultFileSystemManager fileSystemManager = mock( DefaultFileSystemManager.class );\n\n  private final HadoopFileSystem hfs = mock( HadoopFileSystem.class );\n\n  private final String[] scheme = new String[] { \"test\" };\n\n  private final String ncName = \"ncName\";\n  private final String path = \"/samplePath\";\n\n  @Before\n  public void setUp() throws MetaStoreException, ClusterInitializationException {\n    when( ncService.read( eq( ncName ), eq( metastore ) ) ).thenReturn( nc );\n    when( ncService.getClusterTemplate() ).thenReturn( ncTemplate );\n    when( ncTemplate.processURLsubstitution( anyString(), AdditionalMatchers.or( any( IMetaStore.class ), isNull() ), any( Variables.class ) ) ).thenReturn( \"nc://\" + ncName + path );\n    when( nc.processURLsubstitution( anyString(), AdditionalMatchers.or( any( IMetaStore.class ), isNull() ), any( Variables.class ) ) ).thenReturn( \"nc://\" + ncName + path );\n    when( hdfsLocator.getHadoopFilesystem( any( NamedCluster.class ), any( URI.class ) ) ).thenReturn( hfs );\n  }\n\n  @Test\n  public void testGetNamedClusterByName_metastoreExist() throws FileSystemException, MetaStoreException {\n    when( metastoreLocator.getMetastore() ).thenReturn( metastore );\n    String ncName = 
\"ncName\";\n    NamedClusterProvider provider = new NamedClusterProvider( hdfsLocator, ncService, fileSystemManager, fileNameParser, scheme, metastoreLocator );\n    assertEquals( nc, provider.getNamedClusterByName( ncName, null ) );\n    verify( ncService, times( 2 ) ).read( eq( ncName ), eq( metastore ) );\n  }\n\n  @Test\n  public void testGetNamedClusterByName_metastoreNotExist() throws FileSystemException, MetaStoreException {\n    String ncName = \"ncName\";\n    NamedClusterProvider provider = new NamedClusterProvider( hdfsLocator, ncService, fileSystemManager, fileNameParser, scheme, metastoreLocator );\n    // should be null because we do not have metastore\n    assertNull( provider.getNamedClusterByName( ncName, null ) );\n    verify( ncService, never() ).read( eq( ncName ), eq( metastore ) );\n  }\n\n  @Test\n  public void testGetConfigBuilder() throws FileSystemException {\n    NamedClusterProvider provider = new NamedClusterProvider( hdfsLocator, ncService, fileSystemManager, fileNameParser, scheme, metastoreLocator );\n    FileSystemConfigBuilder builder = provider.getConfigBuilder();\n    assertNotNull( builder );\n    assertTrue( builder instanceof NamedClusterConfigBuilder );\n  }\n\n  @Test\n  public void testDoCreateFileSystem() throws FileSystemException, ClusterInitializationException {\n    when( metastoreLocator.getMetastore() ).thenReturn( metastore );\n\n    UrlFileName name = new UrlFileName( \"hc\", ncName, 0, 0, null, null, path, null, null );\n    NamedClusterProvider provider = new NamedClusterProvider( hdfsLocator, ncService, fileSystemManager, fileNameParser, scheme, metastoreLocator );\n    FileSystem fs = provider.doCreateFileSystem( name, null );\n    assertTrue( fs instanceof HDFSFileSystem );\n\n    HDFSFileSystem hdfsFS = (HDFSFileSystem) fs;\n    assertEquals( hfs, hdfsFS.getHDFSFileSystem() );\n\n    verify( nc ).processURLsubstitution( anyString(), eq( metastore ), any( Variables.class ) );\n    verify( hdfsLocator 
).getHadoopFilesystem( eq( nc ), any( URI.class ) );\n  }\n\n  @Test\n  public void testDoCreateFileSystem_NCTemplate() throws FileSystemException, MetaStoreException, ClusterInitializationException {\n    UrlFileName name = new UrlFileName( \"hc\", ncName, 0, 0, null, null, path, null, null );\n    NamedClusterProvider provider = new NamedClusterProvider( hdfsLocator, ncService, fileSystemManager, fileNameParser, scheme, metastoreLocator );\n    FileSystem fs = provider.doCreateFileSystem( name, null );\n    assertTrue( fs instanceof HDFSFileSystem );\n\n    HDFSFileSystem hdfsFS = (HDFSFileSystem) fs;\n    assertEquals( hfs, hdfsFS.getHDFSFileSystem() );\n\n    verify( ncService, never() ).read( eq( ncName ), eq( metastore ) );\n    verify( hdfsLocator ).getHadoopFilesystem( eq( ncTemplate ), any( URI.class ) );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/browse/pom.xml",
"content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <parent>\n    <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <modelVersion>4.0.0</modelVersion>\n\n  <artifactId>pentaho-big-data-kettle-plugins-browse</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>jar</packaging>\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-cluster</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>commons-io</groupId>\n      <artifactId>commons-io</artifactId>\n      <version>${commons-io.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.di.plugins</groupId>\n      <artifactId>pentaho-metastore-locator-api</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.di.plugins</groupId>\n      <artifactId>file-open-save-new-api</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.logging.log4j</groupId>\n      <artifactId>log4j-api</artifactId>\n      <version>${log4j.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/browse/src/main/java/org/pentaho/big/data/impl/browse/NamedClusterProvider.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.browse;\n\nimport org.apache.commons.compress.utils.IOUtils;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileType;\nimport org.apache.commons.vfs2.Selectors;\nimport org.pentaho.big.data.impl.browse.model.NamedClusterDirectory;\nimport org.pentaho.big.data.impl.browse.model.NamedClusterFile;\nimport org.pentaho.big.data.impl.browse.model.NamedClusterTree;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.plugins.fileopensave.api.overwrite.OverwriteStatus;\nimport org.pentaho.di.plugins.fileopensave.api.providers.BaseFileProvider;\nimport org.pentaho.di.plugins.fileopensave.api.providers.File;\nimport org.pentaho.di.plugins.fileopensave.api.providers.Tree;\nimport org.pentaho.di.plugins.fileopensave.api.providers.Utils;\nimport org.pentaho.di.plugins.fileopensave.api.providers.exception.FileException;\nimport org.pentaho.di.plugins.fileopensave.api.providers.exception.FileNotFoundException;\nimport org.pentaho.di.plugins.fileopensave.api.providers.exception.ProviderServiceInterface;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport 
org.pentaho.metastore.locator.api.MetastoreLocator;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.List;\n\npublic class NamedClusterProvider extends BaseFileProvider<NamedClusterFile> {\n\n  public static final String NAME = \"Hadoop Clusters\";\n  public static final String TYPE = \"clusters\";\n  public static final String SCHEME = \"hc\";\n\n  private NamedClusterService namedClusterManager;\n  private MetastoreLocator metastoreLocator;\n  private Logger logger = LogManager.getLogger( NamedClusterProvider.class );\n  private boolean initialized = false;\n\n  public NamedClusterProvider() {\n    this(\n      NamedClusterManager.getInstance()\n    );\n    lazilyInitialize();\n  }\n\n  public NamedClusterProvider( NamedClusterService namedClusterManager ) {\n    this.namedClusterManager = namedClusterManager;\n  }\n\n  private void lazilyInitialize() {\n    try {\n      Collection<MetastoreLocator> metastoreLocators = PluginServiceLoader.loadServices( MetastoreLocator.class );\n      this.metastoreLocator = metastoreLocators.stream().findFirst().get();\n    } catch ( Exception e ) {\n      logger.error( \"Error getting MetastoreLocator\", e );\n    }\n    try {\n      Collection<ProviderServiceInterface> providerServiceInterfaces = PluginServiceLoader.loadServices( ProviderServiceInterface.class );\n      providerServiceInterfaces.stream().findFirst().get().addProviderService( this );\n      initialized = true;\n    } catch ( Exception e ) {\n      logger.error( \"Error registering Hadoop Clusters file provider\", e );\n    }\n  }\n\n  private void ensureInitialized() {\n    if ( !initialized ) {\n      lazilyInitialize();\n    }\n  }\n\n  @Override\n  public String getName() {\n    return NAME;\n  
}\n\n  @Override\n  public String getType() {\n    return TYPE;\n  }\n\n  @Override\n  public Class<NamedClusterFile> getFileClass() {\n    return NamedClusterFile.class;\n  }\n\n  @Override\n  public boolean isAvailable() {\n    return true;\n  }\n\n  @Override\n  public Tree getTree( Bowl bowl ) {\n    ensureInitialized();\n    NamedClusterTree namedClusterTree = new NamedClusterTree( NAME );\n\n    try {\n      List<String> names = namedClusterManager.listNames( metastoreLocator.getMetastore() );\n      names.forEach( name -> {\n        NamedClusterDirectory namedClusterDirectory = new NamedClusterDirectory();\n        namedClusterDirectory.setName( name );\n        namedClusterDirectory.setPath( SCHEME + \"://\" + name );\n        namedClusterDirectory.setRoot( NAME );\n        namedClusterDirectory.setHasChildren( true );\n        namedClusterDirectory.setCanDelete( false );\n        namedClusterTree.addChild( namedClusterDirectory );\n      } );\n    } catch ( MetaStoreException me ) {\n      // ignored\n    }\n\n    return namedClusterTree;\n  }\n\n  @Override\n  public List<NamedClusterFile> getFiles( Bowl bowl, NamedClusterFile file, String filters,\n                                          VariableSpace space ) throws FileException {\n    ensureInitialized();\n    FileObject fileObject;\n    try {\n      fileObject = KettleVFS.getInstance( bowl ).getFileObject( file.getPath() );\n      if ( !fileObject.exists() ) {\n        throw new FileNotFoundException( file.getPath(), TYPE );\n      }\n    } catch ( KettleFileException | FileSystemException e ) {\n      throw new FileNotFoundException( file.getPath(), TYPE );\n    }\n    return populateChildren( file, fileObject, filters );\n  }\n\n  /**\n   * Check if a file object has children\n   *\n   * @param fileObject the file object to check\n   * @return {@code true} if the file object is non-null and its type can contain children\n   */\n  private boolean hasChildren( FileObject fileObject ) {\n    try {\n      return fileObject != null && fileObject.getType().hasChildren();\n    } catch ( 
FileSystemException e ) {\n      return false;\n    }\n  }\n\n  /**\n   * Get the children of a file object; if an error occurs, return an empty array\n   *\n   * @param fileObject the file object whose children should be listed\n   * @return the children, or an empty array if none are available\n   */\n  private FileObject[] getChildren( FileObject fileObject ) {\n    try {\n      return fileObject != null ? fileObject.getChildren() : new FileObject[] {};\n    } catch ( FileSystemException e ) {\n      return new FileObject[] {};\n    }\n  }\n\n  /**\n   * Populate Named Cluster file objects from named cluster FileObject types\n   *\n   * @param parent the parent file whose path is used for the created entries\n   * @param fileObject the file object whose children are converted\n   * @param filters file name filters applied to non-directory children\n   * @return the list of converted children\n   */\n  private List<NamedClusterFile> populateChildren( NamedClusterFile parent, FileObject fileObject, String filters ) {\n    List<NamedClusterFile> files = new ArrayList<>();\n    if ( fileObject != null && hasChildren( fileObject ) ) {\n      FileObject[] children = getChildren( fileObject );\n      for ( FileObject child : children ) {\n        if ( hasChildren( child ) ) {\n          files.add( NamedClusterDirectory.create( parent.getPath(), child ) );\n        } else {\n          if ( child != null && Utils.matches( child.getName().getBaseName(), filters ) ) {\n            files.add( NamedClusterFile.create( parent.getPath(), child ) );\n          }\n        }\n      }\n    }\n    return files;\n  }\n\n  @Override\n  public List<NamedClusterFile> delete( Bowl bowl, List<NamedClusterFile> files, VariableSpace space )\n    throws FileException {\n    ensureInitialized();\n    List<NamedClusterFile> deletedFiles = new ArrayList<>();\n    for ( NamedClusterFile file : files ) {\n      try {\n        FileObject fileObject = KettleVFS.getInstance( bowl ).getFileObject( file.getPath() );\n        if ( fileObject.delete() ) {\n          deletedFiles.add( file );\n        }\n      } catch ( KettleFileException | FileSystemException kfe ) {\n        // Ignore; don't add\n      }\n    }\n    return deletedFiles;\n  }\n\n  @Override\n  public NamedClusterFile add( 
Bowl bowl, NamedClusterFile folder, VariableSpace space ) throws FileException {\n    ensureInitialized();\n    try {\n      FileObject fileObject = KettleVFS.getInstance( bowl ).getFileObject( folder.getPath() );\n      fileObject.createFolder();\n      String parent = folder.getPath().substring( 0, folder.getPath().length() - 1 );\n      return NamedClusterFile.create( parent, fileObject );\n    } catch ( KettleFileException | FileSystemException ignored ) {\n      // Ignored\n    }\n    return null;\n  }\n\n  @Override\n  public NamedClusterFile getFile( Bowl bowl, NamedClusterFile file, VariableSpace space ) {\n    ensureInitialized();\n    try {\n      FileObject fileObject = KettleVFS.getInstance( bowl ).getFileObject( file.getPath() );\n      if ( fileObject.getType().equals( FileType.FOLDER ) ) {\n        return NamedClusterDirectory.create( null, fileObject );\n      } else {\n        return NamedClusterFile.create( null, fileObject );\n      }\n    } catch ( KettleFileException | FileSystemException e ) {\n      // File does not exist\n    }\n    return null;\n  }\n\n  @Override\n  public boolean fileExists( Bowl bowl, NamedClusterFile dir, String path, VariableSpace space ) throws FileException {\n    ensureInitialized();\n    path = sanitizeName( bowl, dir, path );\n    try {\n      FileObject fileObject = KettleVFS.getInstance( bowl ).getFileObject( path );\n      return fileObject.exists();\n    } catch ( KettleFileException | FileSystemException e ) {\n      throw new FileException();\n    }\n  }\n\n  @Override\n  public String getNewName( Bowl bowl, NamedClusterFile destDir, String newPath, VariableSpace space )\n    throws FileException {\n    ensureInitialized();\n    String extension = Utils.getExtension( newPath );\n    String parent = Utils.getParent( newPath, \"/\" );\n    String name = Utils.getName( newPath, \"/\" ).replace( \".\" + extension, \"\" );\n    int i = 1;\n    String testName = sanitizeName( bowl, destDir, newPath );\n    try {\n 
     while ( KettleVFS.getInstance( bowl ).getFileObject( testName ).exists() ) {\n        if ( Utils.isValidExtension( extension ) ) {\n          testName = sanitizeName( bowl, destDir, parent + name + \" \" + i + \".\" + extension );\n        } else {\n          testName = sanitizeName( bowl, destDir, newPath + \" \" + i );\n        }\n        i++;\n      }\n    } catch ( KettleFileException | FileSystemException e ) {\n      return testName;\n    }\n    return testName;\n  }\n\n  @Override\n  public boolean isSame( Bowl bowl, File file1, File file2 ) {\n    return file1 instanceof NamedClusterFile && file2 instanceof NamedClusterFile;\n  }\n\n  @Override\n  public NamedClusterFile rename( Bowl bowl, NamedClusterFile file, String newPath, OverwriteStatus overwriteStatus,\n    VariableSpace space ) throws FileException {\n    return doMove( bowl, file, newPath, overwriteStatus );\n  }\n\n  @Override\n  public NamedClusterFile copy( Bowl bowl, NamedClusterFile file, String toPath, OverwriteStatus overwriteStatus,\n    VariableSpace space ) throws FileException {\n    ensureInitialized();\n    try {\n      FileObject fileObject = KettleVFS.getInstance( bowl ).getFileObject( file.getPath() );\n      FileObject copyObject = KettleVFS.getInstance( bowl ).getFileObject( toPath );\n      copyObject.copyFrom( fileObject, Selectors.SELECT_ALL );\n      if ( file instanceof NamedClusterDirectory ) {\n        return NamedClusterDirectory.create( copyObject.getParent().getPublicURIString(), fileObject );\n      } else {\n        return NamedClusterFile.create( copyObject.getParent().getPublicURIString(), fileObject );\n      }\n    } catch ( KettleFileException | FileSystemException e ) {\n      throw new FileException();\n    }\n  }\n\n  @Override\n  public NamedClusterFile move( Bowl bowl, NamedClusterFile namedClusterFile, String s, OverwriteStatus overwriteStatus,\n    VariableSpace space ) throws FileException {\n    return null;\n  }\n\n  private NamedClusterFile 
doMove( Bowl bowl, NamedClusterFile file, String newPath, OverwriteStatus overwriteStatus ) {\n    ensureInitialized();\n    try {\n      FileObject fileObject = KettleVFS.getInstance( bowl ).getFileObject( file.getPath() );\n      FileObject renameObject = KettleVFS.getInstance( bowl ).getFileObject( newPath );\n      if ( renameObject.exists() ) {\n        overwriteStatus.promptOverwriteIfNecessary( file.getPath(), file.getType() );\n        if ( overwriteStatus.isOverwrite() ) {\n          renameObject.delete();\n        } else if ( overwriteStatus.isCancel() || overwriteStatus.isSkip() ) {\n          return null;\n        } else if ( overwriteStatus.isRename() ) {\n          NamedClusterDirectory namedClusterDir =\n            NamedClusterDirectory.create( renameObject.getParent().getPath().toString(), renameObject );\n          newPath = getNewName( bowl, namedClusterDir, newPath, new Variables() );\n          renameObject = KettleVFS.getInstance( bowl ).getFileObject( newPath );\n        }\n      }\n\n      fileObject.moveTo( renameObject );\n      if ( file instanceof NamedClusterDirectory ) {\n        return NamedClusterDirectory.create( renameObject.getParent().getPublicURIString(), renameObject );\n      } else {\n        return NamedClusterFile.create( renameObject.getParent().getPublicURIString(), renameObject );\n      }\n    } catch ( KettleFileException | FileSystemException | FileException e ) {\n      return null;\n    }\n  }\n\n  @Override\n  public InputStream readFile( Bowl bowl, NamedClusterFile file, VariableSpace space ) throws FileException {\n    ensureInitialized();\n    try {\n      FileObject fileObject = KettleVFS.getInstance( bowl ).getFileObject( file.getPath() );\n      return fileObject.getContent().getInputStream();\n    } catch ( KettleFileException | FileSystemException e ) {\n      return null;\n    }\n  }\n\n  @Override\n  public NamedClusterFile writeFile( Bowl bowl, InputStream inputStream, NamedClusterFile destDir, String 
path,\n                                     OverwriteStatus overwriteStatus, VariableSpace space ) throws FileException {\n    ensureInitialized();\n    FileObject fileObject = null;\n    try {\n      fileObject = KettleVFS.getInstance( bowl ).getFileObject( path );\n    } catch ( KettleFileException ke ) {\n      throw new FileException();\n    }\n    if ( fileObject != null ) {\n      try ( OutputStream outputStream = fileObject.getContent().getOutputStream() ) {\n        IOUtils.copy( inputStream, outputStream );\n        outputStream.flush();\n        return NamedClusterFile.create( destDir.getPath(), fileObject );\n      } catch ( IOException e ) {\n        return null;\n      }\n    }\n    return null;\n  }\n\n  @Override\n  public NamedClusterFile getParent( Bowl bowl, NamedClusterFile file ) {\n    NamedClusterFile vfsFile = new NamedClusterFile();\n    vfsFile.setPath( file.getParent() );\n    return vfsFile;\n  }\n\n  @Override\n  public void clearProviderCache() {\n    // Nothing to clear\n  }\n\n  @Override\n  public File getFile( Bowl bowl, String path, boolean isDirectory ) {\n    ensureInitialized();\n    FileObject fileObject = null;\n    try {\n      fileObject = KettleVFS.getInstance( bowl ).getFileObject( path );\n      if ( isDirectory ) {\n        if ( fileObject.exists() && !fileObject.getType().equals( FileType.FOLDER ) ) {\n          throwIllegalArgumentException( path, \"is not a directory\" );\n        }\n        return NamedClusterDirectory.create( null, fileObject );\n      } else {\n        if ( fileObject.exists() && !fileObject.getType().equals( FileType.FILE ) ) {\n          throwIllegalArgumentException( path, \"is a directory\" );\n        }\n        return NamedClusterFile.create( null, fileObject );\n      }\n    } catch ( KettleFileException | FileSystemException e ) {\n      throwIllegalArgumentException( path, \"could not create a VFSFile object\" );\n    }\n    return null; // Unreachable, but the compiler requires a return here\n  
}\n\n  private void throwIllegalArgumentException( String path, String message ) {\n    throw new IllegalArgumentException( \"\\\"\" + path + \"\\\" \" + message );\n  }\n\n  @Override\n  public NamedClusterFile createDirectory( Bowl bowl, String parentPath, NamedClusterFile file,\n    String newDirectoryName ) {\n    ensureInitialized();\n    NamedClusterDirectory namedClusterDir = null;\n    try {\n      FileObject fileObject = KettleVFS.getInstance( bowl ).getFileObject( parentPath + \"/\" + newDirectoryName );\n      namedClusterDir = NamedClusterDirectory.create( null, fileObject );\n      add( bowl, namedClusterDir, null );\n    } catch ( KettleFileException | FileException e ) {\n      logger.error( \"Error creating directory \" + newDirectoryName, e );\n      return null;\n    }\n    return namedClusterDir;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/browse/src/main/java/org/pentaho/big/data/impl/browse/model/NamedClusterDirectory.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.browse.model;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.pentaho.big.data.impl.browse.NamedClusterProvider;\nimport org.pentaho.di.plugins.fileopensave.api.providers.Directory;\nimport org.pentaho.di.plugins.fileopensave.api.providers.EntityType;\n\nimport java.util.ArrayList;\nimport java.util.Date;\nimport java.util.List;\n\npublic class NamedClusterDirectory extends NamedClusterFile implements Directory {\n  private boolean hasChildren;\n  private boolean canAddChildren;\n  private List<NamedClusterFile> children = new ArrayList<>();\n\n  public static final String DIRECTORY = \"folder\";\n\n  @Override public String getType() {\n    return DIRECTORY;\n  }\n\n  public boolean hasChildren() {\n    return hasChildren;\n  }\n\n  public void setHasChildren( boolean hasChildren ) {\n    this.hasChildren = hasChildren;\n  }\n\n  public List<NamedClusterFile> getChildren() {\n    return children;\n  }\n\n  public void setChildren( List<NamedClusterFile> children ) {\n    this.children = children;\n  }\n\n  public void addChild( NamedClusterFile file ) {\n    this.children.add( file );\n  }\n\n  public boolean isHasChildren() {\n    return hasChildren;\n  }\n\n  public void setCanAddChildren( boolean canAddChildren ) {\n    this.canAddChildren = canAddChildren;\n  }\n\n  @Override public boolean isCanAddChildren() {\n    return this.canAddChildren;\n  }\n\n  public static NamedClusterDirectory create( String parent, FileObject fileObject ) {\n    
NamedClusterDirectory namedClusterDirectory = new NamedClusterDirectory();\n    namedClusterDirectory.setName( fileObject.getName().getBaseName() );\n    namedClusterDirectory.setPath( fileObject.getName().getFriendlyURI() );\n    namedClusterDirectory.setParent( parent );\n    namedClusterDirectory.setRoot( NamedClusterProvider.NAME );\n    namedClusterDirectory.setCanEdit( true );\n    namedClusterDirectory.setHasChildren( true );\n    namedClusterDirectory.setCanAddChildren( true );\n    try {\n      namedClusterDirectory.setDate( new Date( fileObject.getContent().getLastModifiedTime() ) );\n    } catch ( FileSystemException e ) {\n      namedClusterDirectory.setDate( new Date() );\n    }\n    return namedClusterDirectory;\n  }\n\n  @Override\n  public EntityType getEntityType() {\n    return EntityType.NAMED_CLUSTER_DIRECTORY;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/browse/src/main/java/org/pentaho/big/data/impl/browse/model/NamedClusterFile.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.browse.model;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.pentaho.big.data.impl.browse.NamedClusterProvider;\nimport org.pentaho.di.plugins.fileopensave.api.providers.BaseEntity;\nimport org.pentaho.di.plugins.fileopensave.api.providers.EntityType;\nimport org.pentaho.di.plugins.fileopensave.api.providers.File;\n\nimport java.util.Date;\n\npublic class NamedClusterFile extends BaseEntity implements File {\n  public static final String TYPE = \"file\";\n\n  public NamedClusterFile() {\n    // Needed for JSON marshalling\n  }\n\n  @Override public String getType() {\n    return TYPE;\n  }\n\n  @Override public String getProvider() {\n    return NamedClusterProvider.TYPE;\n  }\n\n  public static NamedClusterFile create( String parent, FileObject fileObject ) {\n    NamedClusterFile namedClusterFile = new NamedClusterFile();\n    namedClusterFile.setName( fileObject.getName().getBaseName() );\n    namedClusterFile.setPath( fileObject.getName().getFriendlyURI() );\n    namedClusterFile.setParent( parent );\n    namedClusterFile.setRoot( NamedClusterProvider.NAME );\n    namedClusterFile.setCanEdit( true );\n    try {\n      namedClusterFile.setDate( new Date( fileObject.getContent().getLastModifiedTime() ) );\n    } catch ( FileSystemException ignored ) {\n      namedClusterFile.setDate( new Date() );\n    }\n    return namedClusterFile;\n  }\n\n  @Override public boolean equals( Object obj ) {\n    // If the 
object is compared with itself then return true\n    if ( obj == this ) {\n      return true;\n    }\n\n    if ( !( obj instanceof NamedClusterFile ) ) {\n      return false;\n    }\n\n    NamedClusterFile compare = (NamedClusterFile) obj;\n    // This comparison depends on `getProvider()` to always return a hardcoded value\n    return compare.getProvider().equals( getProvider() )\n      && StringUtils.equals( compare.getPath(), getPath() );\n  }\n\n  @Override\n  public EntityType getEntityType() {\n    return EntityType.NAMED_CLUSTER_FILE;\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/browse/src/main/java/org/pentaho/big/data/impl/browse/model/NamedClusterTree.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.impl.browse.model;\n\nimport org.pentaho.big.data.impl.browse.NamedClusterProvider;\nimport org.pentaho.di.plugins.fileopensave.api.providers.Tree;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\npublic class NamedClusterTree implements Tree<NamedClusterFile> {\n\n  private static final int ORDER = 4;\n  private String name;\n\n  private List<NamedClusterFile> namedClusters = new ArrayList<>();\n\n  public NamedClusterTree( String name ) {\n    this.name = name;\n  }\n\n  @Override public String getName() {\n    return name;\n  }\n\n  @Override public List<NamedClusterFile> getChildren() {\n    return namedClusters;\n  }\n\n  @Override public void addChild( NamedClusterFile namedClusterFile ) {\n    namedClusters.add( namedClusterFile );\n  }\n\n  @Override public boolean isCanAddChildren() {\n    return false;\n  }\n\n  @Override public int getOrder() {\n    return ORDER;\n  }\n\n  @Override public String getProvider() {\n    return NamedClusterProvider.TYPE;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/browse/src/main/resources/OSGI-INF/blueprint/blueprint.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\n           xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n           xsi:schemaLocation=\"\n            http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\">\n\n\n  <bean id=\"NamedClusterProvider\" scope=\"singleton\" class=\"org.pentaho.big.data.impl.browse.NamedClusterProvider\">\n    <argument ref=\"namedClusterManager\" />\n  </bean>\n\n  <service id=\"NamedClusterProviderService\" ref=\"NamedClusterProvider\" interface=\"org.pentaho.di.plugins.fileopensave.api.providers.FileProvider\"/>\n  <reference id=\"namedClusterManager\" interface=\"org.pentaho.hadoop.shim.api.cluster.NamedClusterService\"/>\n\n</blueprint>"
  },
  {
    "path": "kettle-plugins/common/job/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-common</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-kettle-plugins-common-job</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>jar</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-ui-swt</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.core</groupId>\n      <artifactId>commands</artifactId>\n      <version>3.3.0-I20070605-0010</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-all</artifactId>\n      <version>${dependency.mockito.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.logging.log4j</groupId>\n      <artifactId>log4j-core</artifactId>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n  
  </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-common-ui</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.h2database</groupId>\n      <artifactId>h2</artifactId>\n      <version>${h2.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/common/job/src/main/java/org/pentaho/big/data/kettle/plugins/job/AbstractJobEntry.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.job;\n\nimport org.pentaho.di.cluster.SlaveServer;\nimport org.pentaho.di.core.Result;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.job.entry.JobEntryBase;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.w3c.dom.Node;\n\nimport java.util.List;\n\n/**\n * User: RFellows Date: 6/5/12\n */\npublic abstract class AbstractJobEntry<T extends BlockableJobConfig> extends JobEntryBase implements Cloneable,\n    JobEntryInterface {\n\n  protected T jobConfig = null;\n\n  public AbstractJobEntry() {\n    this( null );\n  }\n\n  protected AbstractJobEntry( LogChannelInterface logChannelInterface ) {\n    if ( logChannelInterface != null ) {\n      this.log = logChannelInterface;\n    }\n    jobConfig = createJobConfig();\n  }\n\n  public T getJobConfig() {\n    jobConfig.setJobEntryName( getName() );\n    return jobConfig;\n  }\n\n  public void setJobConfig( T jobConfig ) {\n    this.jobConfig = jobConfig;\n    setName( jobConfig.getJobEntryName() );\n  }\n\n  /**\n   * @return {@code true} if this job entry yields a success or failure result\n   */\n  @Override\n  public boolean evaluates() {\n    return true;\n  }\n\n  /**\n   * @return {@code true} if this job entry 
supports an unconditional hop from it\n   */\n  @Override\n  public boolean isUnconditional() {\n    return true;\n  }\n\n  /**\n   * @return a portion of XML describing the current state of this job entry\n   */\n  @Override\n  public String getXML() {\n    StringBuffer buffer = new StringBuffer( 1024 );\n    buffer.append( super.getXML() );\n    JobEntrySerializationHelper.write( getJobConfig(), 1, buffer );\n    return buffer.toString();\n  }\n\n  /**\n   * Set the state of this job entry from an XML document node containing a previous state.\n   * \n   * @param node\n   * @param databaseMetas\n   * @param slaveServers\n   * @param repository\n   * @throws KettleXMLException\n   */\n  @Override\n  public void loadXML( Node node, List<DatabaseMeta> databaseMetas, List<SlaveServer> slaveServers,\n      Repository repository ) throws KettleXMLException {\n    super.loadXML( node, databaseMetas, slaveServers );\n    T loaded = createJobConfig();\n    JobEntrySerializationHelper.read( loaded, node );\n    setJobConfig( loaded );\n  }\n\n  /**\n   * Load the state of this job entry from a repository.\n   *\n   * @param rep\n   * @param id_jobentry\n   * @param databases\n   * @param slaveServers\n   * @throws KettleException\n   */\n  @Override\n  public void loadRep( Repository rep, ObjectId id_jobentry, List<DatabaseMeta> databases,\n      List<SlaveServer> slaveServers ) throws KettleException {\n    super.loadRep( rep, id_jobentry, databases, slaveServers );\n    T loaded = createJobConfig();\n    JobEntrySerializationHelper.loadRep( loaded, rep, id_jobentry, databases, slaveServers );\n    setJobConfig( loaded );\n  }\n\n  /**\n   * Save the state of this job entry to a repository.\n   * \n   * @param rep\n   * @param id_job\n   * @throws KettleException\n   */\n  @Override\n  public void saveRep( Repository rep, ObjectId id_job ) throws KettleException {\n    JobEntrySerializationHelper.saveRep( getJobConfig(), rep, id_job, getObjectId() );\n  }\n\n  
@Override\n  public Result execute( Result result, int i ) throws KettleException {\n    if ( !isValid( getJobConfig() ) ) {\n      setJobResultFailed( result );\n      return result;\n    }\n    final Result jobResult = result;\n    result.setResult( true );\n\n    Thread t = new Thread( getExecutionRunnable( jobResult ) );\n\n    t.setUncaughtExceptionHandler( new Thread.UncaughtExceptionHandler() {\n      @Override\n      public void uncaughtException( Thread t, Throwable e ) {\n        handleUncaughtThreadException( t, e, jobResult );\n      }\n    } );\n\n    t.start();\n\n    if ( JobEntryUtils.asBoolean( getJobConfig().getBlockingExecution(), variables ) ) {\n      while ( !parentJob.isStopped() && t.isAlive() ) {\n        try {\n          t.join( JobEntryUtils.asLong( getJobConfig().getBlockingPollingInterval(), variables ) );\n        } catch ( InterruptedException ex ) {\n          // ignore\n          break;\n        }\n      }\n      // If the parent job is stopped and the thread is still running make sure to interrupt it\n      if ( t.isAlive() ) {\n        t.interrupt();\n        setJobResultFailed( result );\n      }\n      // Wait for thread to die so we get the proper return status set in jobResult before returning\n      try {\n        t.join( 10 * 1000 ); // Don't wait for more than 10 seconds in case the thread is really blocked\n      } catch ( InterruptedException e ) {\n        // ignore\n      }\n    }\n\n    return result;\n  }\n\n  /**\n   * Flag a job result as failed\n   * \n   * @param jobResult\n   */\n  public void setJobResultFailed( Result jobResult ) {\n    jobResult.setNrErrors( 1 );\n    jobResult.setResult( false );\n  }\n\n  /**\n   * Determine if the configuration provided is valid. 
This will validate all options in one pass.\n   * \n   * @param config\n   *          Configuration to validate\n   * @return {@code true} if the configuration contains valid values for all options we use directly in this job entry.\n   */\n  public boolean isValid( T config ) {\n    List<String> warnings = getValidationWarnings( config );\n    for ( String warning : warnings ) {\n      logError( warning );\n    }\n    return warnings.isEmpty();\n  }\n\n  public VariableSpace getVariableSpace() {\n    // These variables must be set on this job entry prior to retrieval.\n    // Today this happens as part of job execution via the Kettle job execution engine\n    // or in the controller's open() method.\n    return variables;\n  }\n\n  /**\n   * Creates a job configuration\n   * \n   * @return\n   */\n  protected abstract T createJobConfig();\n\n  // /**\n  // * Ensures that the configuration is valid for execution\n  // * @param config\n  // * @return\n  // */\n  // protected abstract boolean isValid(T config);\n\n  /**\n   * Validate any configuration option we use directly that could be invalid at runtime.\n   * \n   * @param config\n   *          Configuration to validate\n   * @return List of warning messages for any invalid configuration options we use directly in this job entry.\n   */\n  public abstract List<String> getValidationWarnings( T config );\n\n  /**\n   * Get the {@link Runnable} that does the execution of the job\n   * \n   * @param jobResult\n   *          Job result for the execution to use\n   * @return\n   * @throws KettleException\n   *           error obtaining execution runnable\n   */\n  protected abstract Runnable getExecutionRunnable( final Result jobResult ) throws KettleException;\n\n  /**\n   * Handle any clean up required when our execution thread encounters an unexpected {@link Exception}.\n   * \n   * @param t\n   *          Thread that encountered the uncaught exception\n   * @param e\n   *          Exception that was encountered\n  
 * @param jobResult\n   *          Job result for the execution that spawned the thread\n   */\n  protected abstract void handleUncaughtThreadException( Thread t, Throwable e, Result jobResult );\n}\n"
  },
  {
    "path": "kettle-plugins/common/job/src/main/java/org/pentaho/big/data/kettle/plugins/job/AbstractJobEntryController.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.job;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.eclipse.jface.dialogs.MessageDialog;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.plugins.common.ui.VfsFileChooserHelper;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.plugins.JobEntryPluginType;\nimport org.pentaho.di.core.plugins.PluginInterface;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.di.ui.core.gui.WindowProperty;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.util.HelpUtils;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.binding.Binding;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.ui.xul.containers.XulDeck;\nimport org.pentaho.ui.xul.containers.XulDialog;\nimport org.pentaho.ui.xul.impl.AbstractXulEventHandler;\nimport org.pentaho.ui.xul.stereotype.Bindable;\nimport 
org.pentaho.ui.xul.swt.tags.SwtDialog;\nimport com.google.common.annotations.VisibleForTesting;\n\nimport java.lang.reflect.InvocationTargetException;\nimport java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.List;\n\n/**\n * User: RFellows Date: 6/6/12\n */\npublic abstract class AbstractJobEntryController<C extends BlockableJobConfig, E extends AbstractJobEntry<C>> extends\n    AbstractXulEventHandler {\n\n  public static final String[] DEFAULT_FILE_FILTERS = new String[] { \"*.*\" };\n\n  // Generically typed fields\n  protected C config; // BlockableJobConfig\n  protected E jobEntry; // AbstractJobEntry<BlockableJobConfig>\n\n  // common fields\n  protected XulDomContainer container;\n  protected BindingFactory bindingFactory;\n  protected List<Binding> bindings;\n  protected JobMeta jobMeta;\n\n  protected JobEntryMode jobEntryMode = JobEntryMode.QUICK_SETUP;\n\n  @SuppressWarnings( \"unchecked\" )\n  public AbstractJobEntryController( JobMeta jobMeta, XulDomContainer container, E jobEntry,\n                                     BindingFactory bindingFactory ) {\n    super();\n    this.jobMeta = jobMeta;\n    this.jobEntry = jobEntry;\n    this.container = container;\n    this.config = (C) jobEntry.getJobConfig().clone();\n    this.bindingFactory = bindingFactory;\n  }\n\n  /**\n   * @return the simple name for this controller. 
This controller can be referenced by this name in the XUL document.\n   */\n  @Override\n  public String getName() {\n    return \"controller\";\n  }\n\n  /**\n   * Opens the dialog\n   * \n   * @return\n   */\n  public JobEntryInterface open() {\n    XulDialog dialog = (XulDialog) container.getDocumentRoot().getElementById( getDialogElementId() );\n    // Update the Variable Space so the latest variables are available when the dialog is shown (for test evaluation)\n    jobEntry.copyVariablesFrom( jobMeta );\n    dialog.show();\n    return jobEntry;\n  }\n\n  /**\n   * Initialize the dialog by loading model data, creating bindings and firing an initial sync (\n   * {@link Binding#fireSourceChanged()} ).\n   *\n   * @throws XulException\n   *\n   * @throws InvocationTargetException\n   *\n   */\n  public void init() throws XulException, InvocationTargetException {\n    bindings = new ArrayList<Binding>();\n\n    // override hook\n    beforeInit();\n\n    try {\n\n      createBindings( config, container, bindingFactory, bindings );\n      syncModel();\n\n      for ( Binding binding : bindings ) {\n        binding.fireSourceChanged();\n      }\n    } finally {\n      // override hook\n      afterInit();\n    }\n\n  }\n\n  /**\n   * Accept and apply the changes made in the dialog. 
Also, close the dialog\n   */\n  @Bindable\n  public void accept() {\n    jobEntry.setJobConfig( config );\n    jobEntry.setChanged();\n    cancel();\n  }\n\n  /**\n   * Close the dialog without saving any changes\n   */\n  @Bindable\n  public void cancel() {\n    removeBindings();\n    XulDialog xulDialog = getDialog();\n\n    Shell shell = (Shell) xulDialog.getRootObject();\n    if ( !shell.isDisposed() ) {\n      WindowProperty winprop = new WindowProperty( shell );\n      PropsUI.getInstance().setScreen( winprop );\n      ( (Composite) xulDialog.getManagedObject() ).dispose();\n      shell.dispose();\n    }\n  }\n\n  /**\n   * Call help dialog\n   */\n  public void help() {\n    XulDialog xulDialog = getDialog();\n    Shell shell = (Shell) xulDialog.getRootObject();\n    HelpUtils.openHelpDialog( shell, getPlugin() );\n  }\n\n  /**\n   * Find a plugin for a corresponding job entry\n   */\n  protected PluginInterface getPlugin() {\n    return PluginRegistry.getInstance().findPluginWithName( JobEntryPluginType.class, jobEntry.getName() );\n  }\n\n  /**\n   * Remove and destroy all bindings from {@link #bindings}.\n   */\n  protected void removeBindings() {\n    if ( bindings == null ) {\n      return;\n    }\n    for ( Binding binding : bindings ) {\n      binding.destroyBindings();\n    }\n    bindings.clear();\n  }\n\n  /**\n   * Look up the dialog reference from the document.\n   *\n   * @return The dialog element referred to by {@link #getDialogElementId()}\n   */\n  protected SwtDialog getDialog() {\n    return (SwtDialog) getXulDomContainer().getDocumentRoot().getElementById( getDialogElementId() );\n  }\n\n  /**\n   * @return the job entry this controller will modify configuration for\n   */\n  @VisibleForTesting\n  public E getJobEntry() {\n    return jobEntry;\n  }\n\n  /**\n   * Override this to execute some code prior to the init function running\n   */\n  protected void beforeInit() {\n    return;\n  }\n\n  /**\n   * Override this to execute some code 
after the init function is complete\n   */\n  protected void afterInit() {\n    return;\n  }\n\n  protected boolean showConfirmationDialog( String title, String message ) {\n    return MessageDialog.openConfirm( getShell(), title, message );\n  }\n\n  /**\n   * Show an information dialog with the title and message provided.\n   *\n   * @param title\n   *          Dialog window title\n   * @param message\n   *          Dialog message\n   */\n  protected void showInfoDialog( String title, String message ) {\n    MessageBox mb = new MessageBox( getShell(), SWT.OK | SWT.ICON_INFORMATION );\n    mb.setText( title );\n    mb.setMessage( message );\n    mb.open();\n  }\n\n  /**\n   * Show an error dialog with the title and message provided.\n   *\n   * @param title\n   *          Dialog window title\n   * @param message\n   *          Dialog message\n   */\n  protected void showErrorDialog( String title, String message ) {\n    MessageBox mb = new MessageBox( getShell(), SWT.OK | SWT.ICON_ERROR );\n    mb.setText( title );\n    mb.setMessage( message );\n    mb.open();\n  }\n\n  /**\n   * Show an error dialog with the title, message, and toggle button to see the entire stacktrace produced by {@code t}.\n   *\n   * @param title\n   *          Dialog window title\n   * @param message\n   *          Dialog message\n   * @param t\n   *          Cause for this error\n   */\n  protected void showErrorDialog( String title, String message, Throwable t ) {\n    new ErrorDialog( getShell(), title, message, t );\n  }\n\n  /**\n   * @return the shell for the currently visible dialog. 
This will be used to display additional dialogs/popups.\n   */\n  protected Shell getShell() {\n    return getDialog().getShell();\n  }\n\n  /**\n   * Browse for a file or directory with the VFS Browser.\n   *\n   * @param root\n   *          Root object\n   * @param initial\n   *          Initial file or folder the browser should open to\n   * @param dialogMode\n   *          Mode to open dialog in: e.g.\n   *          {@link org.pentaho.vfs.ui.VfsFileChooserDialog#VFS_DIALOG_OPEN_FILE_OR_DIRECTORY}\n   * @param schemeRestriction\n   *          Scheme to limit the user to browsing from\n   * @param defaultScheme\n   *          Scheme to select by default in the selection dropdown\n   * @return The selected file object, {@code null} if no object is selected\n   * @throws KettleFileException\n   *           Error accessing the root file using the initial file, when {@code root} is not provided\n   */\n  protected FileObject browseVfs( FileObject root, FileObject initial, int dialogMode, String schemeRestriction,\n      String defaultScheme, boolean showFileScheme ) throws KettleFileException {\n    String[] schemeRestrictions = new String[1];\n    schemeRestrictions[0] = schemeRestriction;\n    return browseVfs( root, initial, dialogMode, schemeRestrictions, showFileScheme, defaultScheme );\n  }\n\n  protected FileObject browseVfs( FileObject root, FileObject initial, int dialogMode, String[] schemeRestrictions,\n      boolean showFileScheme, String defaultScheme ) throws KettleFileException {\n    return browseVfs( root, initial, dialogMode, schemeRestrictions, showFileScheme, defaultScheme, null );\n  }\n\n  protected FileObject browseVfs( FileObject root, FileObject initial, int dialogMode, String[] schemeRestrictions,\n      boolean showFileScheme, String defaultScheme, NamedCluster namedCluster ) throws KettleFileException {\n    return browseVfs( root, initial, dialogMode, schemeRestrictions, showFileScheme, defaultScheme, namedCluster, true, true );\n  
}\n\n  protected FileObject browseVfs( FileObject root, FileObject initial, int dialogMode, String[] schemeRestrictions,\n      boolean showFileScheme, String defaultScheme, NamedCluster namedCluster, boolean showLocation ) throws KettleFileException {\n    return browseVfs( root, initial, dialogMode, schemeRestrictions,  showFileScheme, defaultScheme, namedCluster, showLocation, true );\n  }\n\n  protected FileObject browseVfs( FileObject root, FileObject initial, int dialogMode, String[] schemeRestrictions,\n      boolean showFileScheme, String defaultScheme, NamedCluster namedCluster, boolean showLocation, boolean showCustomUI ) throws KettleFileException {\n    Spoon spoon = Spoon.getInstance();\n    if ( initial == null ) {\n      initial = KettleVFS.getInstance( spoon.getExecutionBowl() )\n        .getFileObject( Spoon.getInstance().getLastFileOpened() );\n    }\n    if ( root == null ) {\n      try {\n        root = initial.getFileSystem().getRoot();\n      } catch ( FileSystemException e ) {\n        throw new KettleFileException( e );\n      }\n    }\n\n    VfsFileChooserHelper fileChooserHelper =\n        new VfsFileChooserHelper( getShell(), Spoon.getInstance().getVfsFileChooserDialog( root, initial ), jobEntry );\n    fileChooserHelper.setDefaultScheme( defaultScheme );\n    fileChooserHelper.setSchemeRestrictions( schemeRestrictions );\n    fileChooserHelper.setShowFileScheme( showFileScheme );\n    if ( namedCluster != null ) {\n      fileChooserHelper.setNamedCluster( namedCluster );\n    }\n    try {\n      return fileChooserHelper.browse(\n          getFileFilters(), getFileFilterNames(), initial.getName().getURI(), dialogMode, showLocation, showCustomUI );\n    } catch ( KettleException e ) {\n      throw new KettleFileException( e );\n    } catch ( FileSystemException e ) {\n      throw new KettleFileException( e );\n    }\n  }\n\n  protected String[] getFileFilters() {\n    return DEFAULT_FILE_FILTERS;\n  }\n\n  /**\n   * Used by browseVfs 
method as names corresponding to the file filters. Override if {@code getFileFilters} is\n   * overridden.\n   * \n   * @return\n   */\n  protected String[] getFileFilterNames() {\n    return new String[] { BaseMessages.getString( getClass(), \"System.FileType.AllFiles\" ) };\n  }\n\n  /**\n   * @return the current configuration object. This configuration may be discarded if the dialog is canceled.\n   */\n  public C getConfig() {\n    return config;\n  }\n\n  public void setConfig( C config ) {\n    this.config = config;\n  }\n\n  /**\n   * @return the job meta for the job entry we're editing\n   */\n  public JobMeta getJobMeta() {\n    return jobMeta;\n  }\n\n  public void setJobMeta( JobMeta jobMeta ) {\n    this.jobMeta = jobMeta;\n  }\n\n  public JobEntryMode getJobEntryMode() {\n    return jobEntryMode;\n  }\n\n  /**\n   * Toggle between Advanced and Basic configuration modes\n   */\n  public void toggleMode() {\n    JobEntryMode mode =\n        ( jobEntryMode == JobEntryMode.ADVANCED_LIST ? JobEntryMode.QUICK_SETUP : JobEntryMode.ADVANCED_LIST );\n    setMode( mode );\n  }\n\n  protected void setMode( JobEntryMode mode ) {\n\n    // if switching from Advanced to Quick mode, warn the user that any changes made in Advanced mode will be lost\n    if ( this.jobEntryMode == JobEntryMode.ADVANCED_LIST && mode == JobEntryMode.QUICK_SETUP ) {\n      boolean confirmed =\n          showConfirmationDialog( BaseMessages.getString( AbstractJobEntryController.class,\n              \"JobExecutor.Confirm.Toggle.Quick.Mode.Title\" ), BaseMessages.getString( AbstractJobEntryController.class,\n              \"JobExecutor.Confirm.Toggle.Quick.Mode.Message\" ) );\n      if ( !confirmed ) {\n        return;\n      }\n    }\n\n    JobEntryMode opposite = mode == JobEntryMode.QUICK_SETUP ? 
JobEntryMode.ADVANCED_LIST : JobEntryMode.QUICK_SETUP;\n    this.jobEntryMode = mode;\n    XulDeck deck = (XulDeck) getXulDomContainer().getDocumentRoot().getElementById( getModeDeckElementId() );\n\n    deck.setSelectedIndex( mode == JobEntryMode.QUICK_SETUP ? 0 : 1 );\n    // Synchronize the model every time we swap modes so the UI is always up to date. This is required since we don't\n    // set argument item values directly or listen for their changes\n    syncModel();\n\n    // Swap the label on the button\n    setModeToggleLabel( opposite );\n  }\n\n  /**\n   * The mode deck element defined in your xul. Override this to customize the element id\n   * \n   * @return\n   */\n  protected String getModeDeckElementId() {\n    return \"modeDeck\";\n  }\n\n  // //////////////////\n  // abstract methods\n  // //////////////////\n  protected abstract void syncModel();\n\n  protected abstract void createBindings( C config, XulDomContainer container, BindingFactory bindingFactory,\n      Collection<Binding> bindings );\n\n  protected abstract String getDialogElementId();\n\n  protected abstract void setModeToggleLabel( JobEntryMode mode );\n\n}\n"
  },
  {
    "path": "kettle-plugins/common/job/src/main/java/org/pentaho/big/data/kettle/plugins/job/BlockableJobConfig.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.job;\n\nimport org.pentaho.ui.xul.XulEventSource;\n\nimport java.beans.PropertyChangeListener;\nimport java.beans.PropertyChangeSupport;\n\n/**\n * User: RFellows Date: 6/5/12\n */\npublic class BlockableJobConfig implements XulEventSource, Cloneable {\n  protected transient PropertyChangeSupport pcs = new PropertyChangeSupport( this );\n  protected String jobEntryName = null;\n  protected String blockingPollingInterval = String.valueOf( 300 );\n  protected String blockingExecution = Boolean.TRUE.toString();\n\n  public static final String JOB_ENTRY_NAME = \"jobEntryName\";\n  public static final String BLOCKING_EXECUTION = \"blockingExecution\";\n  public static final String BLOCKING_POLLING_INTERVAL = \"blockingPollingInterval\";\n\n  public String getJobEntryName() {\n    return jobEntryName;\n  }\n\n  public void setJobEntryName( String jobEntryName ) {\n    String old = this.jobEntryName;\n    this.jobEntryName = jobEntryName;\n    pcs.firePropertyChange( JOB_ENTRY_NAME, old, this.jobEntryName );\n  }\n\n  public String getBlockingPollingInterval() {\n    return blockingPollingInterval;\n  }\n\n  public void setBlockingPollingInterval( String blockingPollingInterval ) {\n    String old = this.blockingPollingInterval;\n    this.blockingPollingInterval = blockingPollingInterval;\n    pcs.firePropertyChange( BLOCKING_POLLING_INTERVAL, old, this.blockingPollingInterval );\n  }\n\n  public String getBlockingExecution() {\n    return blockingExecution;\n  }\n\n  public void setBlockingExecution( String 
blockingExecution ) {\n    String old = this.blockingExecution;\n    this.blockingExecution = blockingExecution;\n    pcs.firePropertyChange( BLOCKING_EXECUTION, old, this.blockingExecution );\n  }\n\n  /**\n   * @see {@link PropertyChangeSupport#addPropertyChangeListener(PropertyChangeListener)}\n   */\n  public void addPropertyChangeListener( PropertyChangeListener l ) {\n    pcs.addPropertyChangeListener( l );\n  }\n\n  /**\n   * @see {@link PropertyChangeSupport#addPropertyChangeListener(String, PropertyChangeListener)}\n   */\n  public void addPropertyChangeListener( String propertyName, PropertyChangeListener l ) {\n    pcs.addPropertyChangeListener( propertyName, l );\n  }\n\n  /**\n   * @see {@link PropertyChangeSupport#removePropertyChangeListener(PropertyChangeListener)}\n   */\n  public void removePropertyChangeListener( PropertyChangeListener l ) {\n    pcs.removePropertyChangeListener( l );\n  }\n\n  /**\n   * @see {@link PropertyChangeSupport#removePropertyChangeListener(String, PropertyChangeListener)}\n   */\n  public void removePropertyChangeListener( String propertyName, PropertyChangeListener l ) {\n    pcs.removePropertyChangeListener( propertyName, l );\n  }\n\n  @Override\n  public Object clone() {\n    try {\n      return super.clone();\n    } catch ( CloneNotSupportedException e ) {\n      throw new RuntimeException( e );\n    }\n  }\n\n  @Override\n  public boolean equals( Object o ) {\n    if ( this == o ) {\n      return true;\n    }\n    if ( o == null || getClass() != o.getClass() ) {\n      return false;\n    }\n    BlockableJobConfig that = (BlockableJobConfig) o;\n\n    if ( blockingExecution != null ? !blockingExecution.equals( that.blockingExecution )\n        : that.blockingExecution != null ) {\n      return false;\n    }\n    if ( blockingPollingInterval != null ? 
!blockingPollingInterval.equals( that.blockingPollingInterval )\n        : that.blockingPollingInterval != null ) {\n      return false;\n    }\n    if ( jobEntryName != null ? !jobEntryName.equals( that.jobEntryName ) : that.jobEntryName != null ) {\n      return false;\n    }\n    return true;\n  }\n\n  @Override\n  public int hashCode() {\n    int result = jobEntryName != null ? jobEntryName.hashCode() : 0;\n    result = 31 * result + ( blockingPollingInterval != null ? blockingPollingInterval.hashCode() : 0 );\n    result = 31 * result + ( blockingExecution != null ? blockingExecution.hashCode() : 0 );\n    return result;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/common/job/src/main/java/org/pentaho/big/data/kettle/plugins/job/JobEntryMode.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.job;\n\n/**\n * Represents visible states of the UI and the execution mode.\n * \n * User: RFellows Date: 6/11/12\n */\npublic enum JobEntryMode {\n  QUICK_SETUP, ADVANCED_LIST, ADVANCED_COMMAND_LINE\n}\n"
  },
  {
    "path": "kettle-plugins/common/job/src/main/java/org/pentaho/big/data/kettle/plugins/job/JobEntrySerializationHelper.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.job;\n\nimport org.pentaho.di.cluster.SlaveServer;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.core.xml.XMLParserFactoryProducer;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.w3c.dom.Document;\nimport org.w3c.dom.Node;\nimport org.w3c.dom.NodeList;\nimport org.xml.sax.SAXException;\n\nimport javax.xml.parsers.DocumentBuilderFactory;\nimport javax.xml.parsers.ParserConfigurationException;\nimport java.io.ByteArrayInputStream;\nimport java.io.IOException;\nimport java.io.Serializable;\nimport java.lang.reflect.Array;\nimport java.lang.reflect.Constructor;\nimport java.lang.reflect.Field;\nimport java.lang.reflect.Modifier;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collection;\nimport java.util.List;\n\npublic class JobEntrySerializationHelper implements Serializable {\n  private static final long serialVersionUID = -3924431164206698711L;\n\n  private static final String INDENT_STRING = \"    \";\n\n  /**\n   * This method will perform the work that used to be done by hand in each kettle input meta for: readData(Node node).\n   * We handle all primitive types, complex user types, arrays, lists and any number of nested object levels, via\n   * recursion of this method.\n   * \n   * @param object\n   *          The object 
to be persisted\n   * @param node\n   *          The node to 'attach' our XML to\n   */\n  public static void read( Object object, Node node ) {\n    // get this class's declared fields, public, private, protected, package, everything, but not super\n    Field[] declaredFields = getAllDeclaredFields( object.getClass() );\n    for ( Field field : declaredFields ) {\n\n      // ignore fields which are final, static or transient\n      if ( Modifier.isFinal( field.getModifiers() ) || Modifier.isStatic( field.getModifiers() )\n          || Modifier.isTransient( field.getModifiers() ) ) {\n        continue;\n      }\n\n      // if the field is not accessible (private), we'll open it up so we can operate on it\n      boolean accessible = field.isAccessible();\n      if ( !accessible ) {\n        field.setAccessible( true );\n      }\n      try {\n        // check if we're going to try to read an array\n        if ( field.getType().isArray() ) {\n          try {\n            // get the node (if available) for the field\n            Node fieldNode = XMLHandler.getSubNode( node, field.getName() );\n            if ( fieldNode == null ) {\n              // doesn't exist (this is possible if fields were empty/null when persisted)\n              continue;\n            }\n            // get the Java classname for the array elements\n            String fieldClassName = XMLHandler.getTagAttribute( fieldNode, \"class\" );\n            Class<?> clazz = null;\n            // primitive types require special handling\n            if ( fieldClassName.equals( \"boolean\" ) ) {\n              clazz = boolean.class;\n            } else if ( fieldClassName.equals( \"int\" ) ) {\n              clazz = int.class;\n            } else if ( fieldClassName.equals( \"float\" ) ) {\n              clazz = float.class;\n            } else if ( fieldClassName.equals( \"double\" ) ) {\n              clazz = double.class;\n            } else if ( fieldClassName.equals( \"long\" ) ) {\n              
clazz = long.class;\n            } else {\n              // normal, non primitive array class\n              clazz = Class.forName( fieldClassName );\n            }\n            // get the child nodes for the field\n            NodeList childrenNodes = fieldNode.getChildNodes();\n\n            // create a new, appropriately sized array\n            int arrayLength = 0;\n            for ( int i = 0; i < childrenNodes.getLength(); i++ ) {\n              Node child = childrenNodes.item( i );\n              // ignore TEXT_NODE, they'll cause us to have a larger count than reality, even if they are empty\n              if ( child.getNodeType() != Node.TEXT_NODE ) {\n                arrayLength++;\n              }\n            }\n            // create a new instance of our array\n            Object array = Array.newInstance( clazz, arrayLength );\n            // set the new array on the field (on object, passed in)\n            field.set( object, array );\n\n            int arrayIndex = 0;\n            for ( int i = 0; i < childrenNodes.getLength(); i++ ) {\n              Node child = childrenNodes.item( i );\n              if ( child.getNodeType() == Node.TEXT_NODE ) {\n                continue;\n              }\n\n              // roll through all of our array elements setting them as encountered\n              if ( String.class.isAssignableFrom( clazz ) || Number.class.isAssignableFrom( clazz ) ) {\n                Constructor<?> constructor = clazz.getConstructor( String.class );\n                Object instance = constructor.newInstance( XMLHandler.getTagAttribute( child, \"value\" ) );\n                Array.set( array, arrayIndex++, instance );\n              } else if ( Boolean.class.isAssignableFrom( clazz ) || boolean.class.isAssignableFrom( clazz ) ) {\n                Object value = Boolean.valueOf( XMLHandler.getTagAttribute( child, \"value\" ) );\n                Array.set( array, arrayIndex++, value );\n              } else if ( 
Integer.class.isAssignableFrom( clazz ) || int.class.isAssignableFrom( clazz ) ) {\n                Object value = Integer.valueOf( XMLHandler.getTagAttribute( child, \"value\" ) );\n                Array.set( array, arrayIndex++, value );\n              } else if ( Float.class.isAssignableFrom( clazz ) || float.class.isAssignableFrom( clazz ) ) {\n                Object value = Float.valueOf( XMLHandler.getTagAttribute( child, \"value\" ) );\n                Array.set( array, arrayIndex++, value );\n              } else if ( Double.class.isAssignableFrom( clazz ) || double.class.isAssignableFrom( clazz ) ) {\n                Object value = Double.valueOf( XMLHandler.getTagAttribute( child, \"value\" ) );\n                Array.set( array, arrayIndex++, value );\n              } else if ( Long.class.isAssignableFrom( clazz ) || long.class.isAssignableFrom( clazz ) ) {\n                Object value = Long.valueOf( XMLHandler.getTagAttribute( child, \"value\" ) );\n                Array.set( array, arrayIndex++, value );\n              } else {\n                // create an instance of 'fieldClassName'\n                Object instance = clazz.newInstance();\n                // add the instance to the array\n                Array.set( array, arrayIndex++, instance );\n                // read child, the same way as the parent\n                read( instance, child );\n              }\n            }\n          } catch ( Throwable t ) {\n            t.printStackTrace();\n            // TODO: log this\n          }\n        } else if ( Collection.class.isAssignableFrom( field.getType() ) ) {\n          // handle collections\n          try {\n            // get the node (if available) for the field\n            Node fieldNode = XMLHandler.getSubNode( node, field.getName() );\n            if ( fieldNode == null ) {\n              // doesn't exist (this is possible if fields were empty/null when persisted)\n              continue;\n            }\n            // get the Java 
classname for the collection elements\n            String fieldClassName = XMLHandler.getTagAttribute( fieldNode, \"class\" );\n            fieldClassName = upgradeName( fieldClassName );\n            Class<?> clazz = Class.forName( fieldClassName );\n\n            // create a new, empty collection instance of the declared field type\n            @SuppressWarnings( \"unchecked\" )\n            Collection<Object> collection = (Collection<Object>) field.getType().newInstance();\n            field.set( object, collection );\n\n            // iterate over all of the child elements and add them one by one as encountered\n            NodeList childrenNodes = fieldNode.getChildNodes();\n            for ( int i = 0; i < childrenNodes.getLength(); i++ ) {\n              Node child = childrenNodes.item( i );\n              if ( child.getNodeType() == Node.TEXT_NODE ) {\n                continue;\n              }\n\n              // create an instance of 'fieldClassName'\n              if ( String.class.isAssignableFrom( clazz ) || Number.class.isAssignableFrom( clazz )\n                  || Boolean.class.isAssignableFrom( clazz ) ) {\n                Constructor<?> constructor = clazz.getConstructor( String.class );\n                Object instance = constructor.newInstance( XMLHandler.getTagAttribute( child, \"value\" ) );\n                collection.add( instance );\n              } else {\n                // read child, the same way as the parent\n                Object instance = clazz.newInstance();\n                // add the instance to the collection\n                collection.add( instance );\n                read( instance, child );\n              }\n            }\n          } catch ( Throwable t ) {\n            t.printStackTrace();\n            // TODO: log this\n          }\n        } else {\n          // we're handling a regular field (not an array or list)\n          try {\n            String value = XMLHandler.getTagValue( node, field.getName() );\n            
if ( value == null ) {\n              continue;\n            }\n\n            if ( field.isAnnotationPresent( Password.class ) ) {\n              value = Encr.decryptPasswordOptionallyEncrypted( value );\n            }\n\n            // System.out.println(\"Setting \" + field.getName() + \"(\" + field.getType().getSimpleName() + \") = \" + value\n            // + \" on: \" + object.getClass().getName());\n            if ( field.getType().isPrimitive() && \"\".equals( value ) ) {\n              // skip setting of primitives if the persisted value was empty\n              continue;\n            } else if ( \"\".equals( value ) ) {\n              field.set( object, value );\n            } else if ( field.getType().isPrimitive() ) {\n              // special primitive handling\n              if ( double.class.isAssignableFrom( field.getType() ) ) {\n                field.set( object, Double.parseDouble( value ) );\n              } else if ( float.class.isAssignableFrom( field.getType() ) ) {\n                field.set( object, Float.parseFloat( value ) );\n              } else if ( long.class.isAssignableFrom( field.getType() ) ) {\n                field.set( object, Long.parseLong( value ) );\n              } else if ( int.class.isAssignableFrom( field.getType() ) ) {\n                field.set( object, Integer.parseInt( value ) );\n              } else if ( byte.class.isAssignableFrom( field.getType() ) ) {\n                // a primitive byte field needs a parsed byte; setting the String's byte[] would throw IllegalArgumentException\n                field.set( object, Byte.parseByte( value ) );\n              } else if ( boolean.class.isAssignableFrom( field.getType() ) ) {\n                field.set( object, \"true\".equalsIgnoreCase( value ) );\n              }\n            } else if ( String.class.isAssignableFrom( field.getType() )\n                || Number.class.isAssignableFrom( field.getType() )\n                || Boolean.class.isAssignableFrom( field.getType() ) ) {\n              Constructor<?> constructor = field.getType().getConstructor( String.class );\n              Object instance = 
constructor.newInstance( value );\n              field.set( object, instance );\n            } else {\n              // we don't know what we're handling, but we'll give it a shot\n              Node fieldNode = XMLHandler.getSubNode( node, field.getName() );\n              if ( fieldNode == null ) {\n                // doesn't exist (this is possible if fields were empty/null when persisted)\n                continue;\n              }\n              // get the Java classname for the array elements\n              String fieldClassName = XMLHandler.getTagAttribute( fieldNode, \"class\" );\n              Class<?> clazz = Class.forName( fieldClassName );\n              Object instance = clazz.newInstance();\n              field.set( object, instance );\n              read( instance, fieldNode );\n            }\n          } catch ( Throwable t ) {\n            // TODO: log this\n            t.printStackTrace();\n          }\n        }\n      } finally {\n        if ( !accessible ) {\n          field.setAccessible( false );\n        }\n      }\n    }\n  }\n\n  private static String upgradeName( String fieldClassName ) {\n    if ( fieldClassName.equals( \"org.pentaho.di.job.PropertyEntry\" ) ) {\n      return \"org.pentaho.big.data.kettle.plugins.job.PropertyEntry\";\n    }\n    return fieldClassName;\n  }\n\n  /**\n   * This method will perform the work that used to be done by hand in each kettle input meta for: getXML(). 
We handle\n   * all primitive types, complex user types, arrays, lists and any number of nested object levels, via recursion of\n   * this method.\n   * \n   * @param object\n   *          The object to be persisted\n   * @param indentLevel\n   *          The indentation level at which to write this object's XML\n   * @param buffer\n   *          The buffer to append the generated XML to\n   */\n  public static void write( Object object, int indentLevel, StringBuffer buffer ) {\n\n    // don't even attempt to persist a null object\n    if ( object == null ) {\n      return;\n    }\n\n    // get this class's declared fields (public, private, protected, package), including superclass fields\n    Field[] declaredFields = getAllDeclaredFields( object.getClass() );\n    for ( Field field : declaredFields ) {\n\n      // ignore fields which are final, static or transient\n      if ( Modifier.isFinal( field.getModifiers() ) || Modifier.isStatic( field.getModifiers() )\n          || Modifier.isTransient( field.getModifiers() ) ) {\n        continue;\n      }\n\n      // if the field is not accessible (private), we'll open it up so we can operate on it\n      boolean accessible = field.isAccessible();\n      if ( !accessible ) {\n        field.setAccessible( true );\n      }\n\n      try {\n        Object fieldValue = field.get( object );\n        // no value? null? 
skip it!\n        if ( fieldValue == null || \"\".equals( fieldValue ) ) {\n          continue;\n        }\n\n        if ( field.isAnnotationPresent( Password.class ) && String.class.isAssignableFrom( field.getType() ) ) {\n          fieldValue = Encr.encryptPasswordIfNotUsingVariables( String.class.cast( fieldValue ) );\n        }\n\n        if ( field.getType().isPrimitive() || String.class.isAssignableFrom( field.getType() )\n            || Number.class.isAssignableFrom( field.getType() ) || Boolean.class.isAssignableFrom( field.getType() ) ) {\n          indent( buffer, indentLevel );\n          buffer.append( XMLHandler.addTagValue( field.getName(), fieldValue.toString() ) );\n        } else if ( field.getType().isArray() ) {\n          // write array values\n          int length = Array.getLength( fieldValue );\n\n          // open node (add class name attribute)\n          indent( buffer, indentLevel );\n          buffer.append(\n              \"<\" + field.getName() + \" class=\\\"\" + fieldValue.getClass().getComponentType().getName() + \"\\\">\" )\n              .append( Const.CR );\n\n          for ( int i = 0; i < length; i++ ) {\n            Object childObject = Array.get( fieldValue, i );\n            // handle all strings/numbers\n            if ( String.class.isAssignableFrom( childObject.getClass() )\n                || Number.class.isAssignableFrom( childObject.getClass() ) ) {\n              indent( buffer, indentLevel + 1 );\n              buffer.append( \"<\" ).append( fieldValue.getClass().getComponentType().getSimpleName() );\n              buffer.append( \" value=\\\"\" + childObject.toString() + \"\\\"/>\" ).append( Const.CR );\n            } else if ( Boolean.class.isAssignableFrom( childObject.getClass() )\n                || boolean.class.isAssignableFrom( childObject.getClass() ) ) {\n              // handle booleans (special case)\n              indent( buffer, indentLevel + 1 );\n              buffer.append( \"<\" ).append( 
fieldValue.getClass().getComponentType().getSimpleName() );\n              buffer.append( \" value=\\\"\" + childObject.toString() + \"\\\"/>\" ).append( Const.CR );\n            } else {\n              // array element is a user defined/complex type, recurse into it\n              indent( buffer, indentLevel + 1 );\n              buffer.append( \"<\" + fieldValue.getClass().getComponentType().getSimpleName() + \">\" ).append( Const.CR );\n              write( childObject, indentLevel + 1, buffer );\n              indent( buffer, indentLevel + 1 );\n              buffer.append( \"</\" + fieldValue.getClass().getComponentType().getSimpleName() + \">\" ).append( Const.CR );\n            }\n          }\n          // close node\n          indent( buffer, indentLevel );\n          buffer.append( \"</\" + field.getName() + \">\" ).append( Const.CR );\n        } else if ( Collection.class.isAssignableFrom( field.getType() ) ) {\n          // write collection values\n          Collection<?> collection = (Collection<?>) fieldValue;\n          if ( collection.isEmpty() ) {\n            continue;\n          }\n          Class<?> listClass = collection.iterator().next().getClass();\n\n          // open node (add class name attribute)\n          indent( buffer, indentLevel );\n          buffer.append( \"<\" + field.getName() + \" class=\\\"\" + listClass.getName() + \"\\\">\" ).append( Const.CR );\n\n          for ( Object childObject : collection ) {\n            // handle all strings/numbers\n            if ( String.class.isAssignableFrom( childObject.getClass() )\n                || Number.class.isAssignableFrom( childObject.getClass() ) ) {\n              indent( buffer, indentLevel + 1 );\n              buffer.append( \"<\" ).append( listClass.getSimpleName() );\n              buffer.append( \" value=\\\"\" + childObject.toString() + \"\\\"/>\" ).append( Const.CR );\n            } else if ( Boolean.class.isAssignableFrom( childObject.getClass() )\n                || boolean.class.isAssignableFrom( 
childObject.getClass() ) ) {\n              // handle booleans (special case)\n              indent( buffer, indentLevel + 1 );\n              buffer.append( \"<\" ).append( listClass.getSimpleName() );\n              buffer.append( \" value=\\\"\" + childObject.toString() + \"\\\"/>\" ).append( Const.CR );\n            } else {\n              // collection element is a user defined/complex type, recurse into it\n              indent( buffer, indentLevel + 1 );\n              buffer.append( \"<\" + listClass.getSimpleName() + \">\" ).append( Const.CR );\n              write( childObject, indentLevel + 1, buffer );\n              indent( buffer, indentLevel + 1 );\n              buffer.append( \"</\" + listClass.getSimpleName() + \">\" ).append( Const.CR );\n            }\n          }\n          // close node\n          indent( buffer, indentLevel );\n          buffer.append( \"</\" + field.getName() + \">\" ).append( Const.CR );\n        } else {\n          // if we don't know what it is, treat it like a first-class citizen and try to write it out\n          // open node (add class name attribute)\n          indent( buffer, indentLevel );\n          buffer.append( \"<\" + field.getName() + \" class=\\\"\" + fieldValue.getClass().getName() + \"\\\">\" ).append(\n              Const.CR );\n          write( fieldValue, indentLevel + 1, buffer );\n          // close node\n          indent( buffer, indentLevel );\n          buffer.append( \"</\" + field.getName() + \">\" ).append( Const.CR );\n        }\n      } catch ( Throwable t ) {\n        t.printStackTrace();\n        // TODO: log this\n      } finally {\n        if ( !accessible ) {\n          field.setAccessible( false );\n        }\n      }\n    }\n\n  }\n\n  /**\n   * Get all declared fields of the provided class including any inherited class fields.\n   * \n   * @param aClass\n   *          Class to look up fields for\n   * @return All declared fields for the class provided\n   */\n  private static Field[] 
getAllDeclaredFields( Class<?> aClass ) {\n    List<Field> fields = new ArrayList<Field>();\n    while ( aClass != null ) {\n      fields.addAll( Arrays.asList( aClass.getDeclaredFields() ) );\n      aClass = aClass.getSuperclass();\n    }\n    return fields.toArray( new Field[0] );\n  }\n\n  /**\n   * Handle saving of the input (object) to the kettle repository using the simplest method available, by calling\n   * write and then saving the xml as an attribute.\n   * \n   * @param object\n   *          The object to serialize and save\n   * @param rep\n   *          The repository to save the attribute to\n   * @param id_job\n   *          The object id of the job\n   * @param id_jobentry\n   *          The object id of the job entry\n   * @throws KettleException\n   */\n  public static void saveRep( Object object, Repository rep, ObjectId id_job, ObjectId id_jobentry )\n    throws KettleException {\n    StringBuffer sb = new StringBuffer( 1024 );\n    sb.append( \"<job-xml>\" );\n    write( object, 0, sb );\n    sb.append( \"</job-xml>\" );\n    rep.saveJobEntryAttribute( id_job, id_jobentry, \"job-xml\", sb.toString() );\n  }\n\n  /**\n   * Handle reading of the input (object) from the kettle repository by getting the xml from the repository attribute\n   * string and then re-hydrating the object with our already existing read method.\n   * \n   * @param object\n   *          The object to re-hydrate\n   * @param rep\n   *          The repository to read the attribute from\n   * @param id_job\n   *          The object id of the job\n   * @param databases\n   *          Unused; kept for API compatibility\n   * @param slaveServers\n   *          Unused; kept for API compatibility\n   * @throws KettleException\n   */\n  public static void loadRep( Object object, Repository rep, ObjectId id_job, List<DatabaseMeta> databases,\n      List<SlaveServer> slaveServers ) throws KettleException {\n    try {\n      String xml = rep.getJobEntryAttributeString( id_job, \"job-xml\" );\n      ByteArrayInputStream bais = new ByteArrayInputStream( xml.getBytes() );\n      DocumentBuilderFactory factory = XMLParserFactoryProducer.createSecureDocBuilderFactory();\n      Document doc = factory.newDocumentBuilder().parse( bais );\n      read( object, doc.getDocumentElement() );\n    } catch ( ParserConfigurationException ex ) {\n      throw new KettleException( ex.getMessage(), ex );\n    } catch ( 
SAXException ex ) {\n      throw new KettleException( ex.getMessage(), ex );\n    } catch ( IOException ex ) {\n      throw new KettleException( ex.getMessage(), ex );\n    }\n  }\n\n  private static void indent( StringBuffer sb, int indentLevel ) {\n    for ( int i = 0; i < indentLevel; i++ ) {\n      sb.append( INDENT_STRING );\n    }\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/common/job/src/main/java/org/pentaho/big/data/kettle/plugins/job/JobEntryUtils.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\npackage org.pentaho.big.data.kettle.plugins.job;\n\nimport org.pentaho.di.core.variables.VariableSpace;\n\n/**\n * User: RFellows Date: 6/7/12\n */\npublic class JobEntryUtils {\n\n  /**\n   * Determine if the string equates to {@link Boolean#TRUE} after performing a variable substitution.\n   *\n   * @param s\n   *          String-encoded boolean value or variable expression\n   * @param variableSpace\n   *          Context for variables so we can substitute {@code s}\n   * @return the value returned by {@link Boolean#parseBoolean(String) Boolean.parseBoolean(s)} after substitution\n   */\n  public static boolean asBoolean( String s, VariableSpace variableSpace ) {\n    String value = variableSpace.environmentSubstitute( s );\n    return Boolean.parseBoolean( value );\n  }\n\n  /**\n   * Parse the string as a {@link Long} after variable substitution.\n   *\n   * @param s\n   *          String-encoded {@link Long} value or variable expression that should resolve to a {@link Long} value\n   * @param variableSpace\n   *          Context for variables so we can substitute {@code s}\n   * @return the value returned by {@link Long#valueOf(String, int) Long.valueOf(s, 10)} after substitution, or {@code null} if the substituted value is {@code null}\n   */\n  public static Long asLong( String s, VariableSpace variableSpace ) {\n    String value = variableSpace.environmentSubstitute( s );\n    return value == null ? null : Long.valueOf( value, 10 );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/common/job/src/main/java/org/pentaho/big/data/kettle/plugins/job/Password.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.job;\n\nimport java.lang.annotation.Documented;\nimport java.lang.annotation.ElementType;\nimport java.lang.annotation.Retention;\nimport java.lang.annotation.RetentionPolicy;\nimport java.lang.annotation.Target;\n\n\n/**\n * Denotes a field is a password and must be encrypted when serialized. This must be placed on a {@link String} field.\n */\n@Documented\n@Retention( RetentionPolicy.RUNTIME )\n@Target( ElementType.FIELD )\npublic @interface Password {\n}\n"
  },
  {
    "path": "kettle-plugins/common/job/src/main/java/org/pentaho/big/data/kettle/plugins/job/PropertyEntry.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.job;\n\nimport org.pentaho.ui.xul.XulEventSource;\n\nimport java.beans.PropertyChangeListener;\nimport java.util.Map;\n\n/**\n * User: RFellows Date: 6/18/12\n */\npublic class PropertyEntry implements Map.Entry<String, String>, XulEventSource {\n  private String key = null;\n  private String value = null;\n\n  public PropertyEntry() {\n    this( null, null );\n  }\n\n  public PropertyEntry( String key, String value ) {\n    this.key = key;\n    this.value = value;\n  }\n\n  @Override\n  public String getKey() {\n    return key;\n  }\n\n  public void setKey( String key ) {\n    this.key = key;\n  }\n\n  @Override\n  public String getValue() {\n    return value;\n  }\n\n  @Override\n  public String setValue( String value ) {\n    this.value = value;\n    return value;\n  }\n\n  @Override\n  public boolean equals( Object o ) {\n    if ( this == o ) {\n      return true;\n    }\n    if ( o == null || getClass() != o.getClass() ) {\n      return false;\n    }\n\n    PropertyEntry that = (PropertyEntry) o;\n\n    if ( key != null ? !key.equals( that.key ) : that.key != null ) {\n      return false;\n    }\n    if ( value != null ? !value.equals( that.value ) : that.value != null ) {\n      return false;\n    }\n    return true;\n  }\n\n  @Override\n  public int hashCode() {\n    int result = key != null ? key.hashCode() : 0;\n    result = 31 * result + ( value != null ? 
value.hashCode() : 0 );\n    return result;\n  }\n\n  @Override\n  public void addPropertyChangeListener( PropertyChangeListener propertyChangeListener ) {\n  }\n\n  @Override\n  public void removePropertyChangeListener( PropertyChangeListener propertyChangeListener ) {\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/common/job/src/main/resources/org/pentaho/big/data/kettle/plugins/job/messages/messages_en_US.properties",
    "content": "\nJobExecutor.Confirm.Toggle.Quick.Mode.Title=Confirm leaving Advanced Mode\nJobExecutor.Confirm.Toggle.Quick.Mode.Message=Any changes made in \"Advanced\" mode will be lost by switching to \"Quick Setup\" mode.\\nAre you sure you want to proceed?\n"
  },
  {
    "path": "kettle-plugins/common/job/src/test/java/org/pentaho/big/data/kettle/plugins/job/AbstractJobEntryTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.job;\n\nimport org.junit.Assert;\nimport org.junit.Test;\nimport org.pentaho.di.core.KettleEnvironment;\nimport org.pentaho.di.core.Result;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.job.Job;\nimport org.pentaho.di.job.entry.JobEntryCopy;\nimport org.pentaho.di.repository.LongObjectId;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.RepositoryMeta;\nimport org.pentaho.di.repository.kdr.KettleDatabaseRepository;\nimport org.pentaho.di.repository.kdr.KettleDatabaseRepositoryCreationHelper;\nimport org.pentaho.di.repository.kdr.KettleDatabaseRepositoryMeta;\nimport org.w3c.dom.Document;\n\nimport java.io.File;\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static junit.framework.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\n\n/**\n * User: RFellows Date: 6/5/12\n */\npublic class AbstractJobEntryTest {\n\n  class TestJobEntry extends AbstractJobEntry<BlockableJobConfig> {\n    private long waitTime = 0L;\n\n    TestJobEntry() {\n    }\n\n    TestJobEntry( long waitTime ) {\n      this.waitTime = waitTime;\n    }\n\n    @Override\n    protected BlockableJobConfig createJobConfig() {\n      return new BlockableJobConfig();\n    }\n\n    @Override\n    public List<String> getValidationWarnings( BlockableJobConfig config ) {\n      return new ArrayList<String>();\n    }\n\n    
@Override\n    protected Runnable getExecutionRunnable( Result jobResult ) {\n      return new Runnable() {\n        @Override\n        public void run() {\n          try {\n            Thread.sleep( waitTime );\n          } catch ( InterruptedException e ) {\n            throw new RuntimeException( e );\n          }\n        }\n      };\n    }\n\n    @Override\n    protected void handleUncaughtThreadException( Thread t, Throwable e, Result jobResult ) {\n      logError( \"Error executing Job\", e );\n      setJobResultFailed( jobResult );\n    }\n\n  };\n\n  @Test\n  public void testLoadXml() throws Exception {\n\n    TestJobEntry jobEntry = new TestJobEntry();\n    BlockableJobConfig jobConfig = new BlockableJobConfig();\n\n    jobConfig.setJobEntryName( \"Job Name\" );\n\n    jobEntry.setJobConfig( jobConfig );\n\n    JobEntryCopy jec = new JobEntryCopy( jobEntry );\n    jec.setLocation( 0, 0 );\n    String xml = jec.getXML();\n\n    Document d = XMLHandler.loadXMLString( xml );\n\n    TestJobEntry jobEntry2 = new TestJobEntry();\n    jobEntry2.loadXML( d.getDocumentElement(), null, null, null );\n\n    BlockableJobConfig jobConfig2 = jobEntry2.getJobConfig();\n    assertEquals( jobConfig.getJobEntryName(), jobConfig2.getJobEntryName() );\n  }\n\n  @Test\n  public void testLoadRep() throws Exception {\n    TestJobEntry je = new TestJobEntry();\n    BlockableJobConfig config = new BlockableJobConfig();\n\n    config.setJobEntryName( \"testing\" );\n\n    je.setJobConfig( config );\n\n    KettleEnvironment.init();\n    String filename = File.createTempFile( getClass().getSimpleName() + \"-export-dbtest\", \"\" ).getAbsolutePath();\n\n    try {\n      DatabaseMeta databaseMeta = new DatabaseMeta( \"H2Repo\", \"H2\", \"JDBC\", null, filename, null, null, null );\n      RepositoryMeta repositoryMeta =\n          new KettleDatabaseRepositoryMeta( \"KettleDatabaseRepository\", \"H2Repo\", \"H2 Repository\", databaseMeta );\n      KettleDatabaseRepository repository = 
new KettleDatabaseRepository();\n      repository.init( repositoryMeta );\n      repository.connectionDelegate.connect( true, true );\n      KettleDatabaseRepositoryCreationHelper helper = new KettleDatabaseRepositoryCreationHelper( repository );\n      helper.createRepositorySchema( null, false, new ArrayList<String>(), false );\n      repository.disconnect();\n\n      // Test connecting...\n      //\n      repository.connect( \"admin\", \"admin\" );\n      assertTrue( repository.isConnected() );\n\n      // A job entry must have an ID if we're going to save it to a repository\n      je.setObjectId( new LongObjectId( 1 ) );\n      ObjectId id_job = new LongObjectId( 1 );\n\n      // Save the original job entry into the repository\n      je.saveRep( repository, id_job );\n\n      // Load it back into a new job entry\n      TestJobEntry je2 = new TestJobEntry();\n      je2.loadRep( repository, id_job, null, null );\n\n      // Make sure all settings we set are properly loaded\n      BlockableJobConfig config2 = je2.getJobConfig();\n      Assert.assertEquals( config.getJobEntryName(), config2.getJobEntryName() );\n    } finally {\n      // Delete test database\n      new File( filename + \".h2.db\" ).delete();\n      new File( filename + \".trace.db\" ).delete();\n    }\n  }\n\n  @Test\n  public void testEvaluates() throws Exception {\n    TestJobEntry jobEntry = new TestJobEntry();\n    assertTrue( jobEntry.evaluates() );\n  }\n\n  @Test\n  public void testIsUnconditional() throws Exception {\n    TestJobEntry jobEntry = new TestJobEntry();\n    assertTrue( jobEntry.isUnconditional() );\n  }\n\n  @Test\n  public void execute_blocking() throws KettleException {\n    final long waitTime = 1000;\n    TestJobEntry je = new TestJobEntry( waitTime );\n\n    je.setParentJob( new Job( \"test\", null, null ) );\n    Result result = new Result();\n    long start = System.currentTimeMillis();\n    je.execute( result, 0 );\n    long end = System.currentTimeMillis();\n    
assertTrue( \"Total runtime should be >= the wait time if we are blocking\", ( end - start ) >= waitTime );\n\n    Assert.assertEquals( 0, result.getNrErrors() );\n    assertTrue( result.getResult() );\n  }\n\n  @Test\n  public void execute_nonblocking() throws KettleException {\n    final long waitTime = 1000;\n    TestJobEntry je = new TestJobEntry( waitTime );\n\n    je.setParentJob( new Job( \"test\", null, null ) );\n    je.getJobConfig().setBlockingExecution( \"false\" );\n    Result result = new Result();\n    long start = System.currentTimeMillis();\n    je.execute( result, 0 );\n    long end = System.currentTimeMillis();\n    assertTrue( \"Total runtime should be less than the wait time if we're not blocking\", ( end - start ) < waitTime );\n\n    Assert.assertEquals( 0, result.getNrErrors() );\n    assertTrue( result.getResult() );\n  }\n\n  @Test\n  public void execute_interrupted() throws KettleException {\n    final long waitTime = 1000 * 10;\n    final List<String> loggedErrors = new ArrayList<String>();\n    TestJobEntry je = new TestJobEntry( waitTime ) {\n      @Override\n      public void logError( String message, Throwable e ) {\n        loggedErrors.add( message );\n      }\n    };\n\n    final Job parentJob = new Job( \"test\", null, null );\n\n    Thread t = new Thread() {\n      @Override\n      public void run() {\n        try {\n          Thread.sleep( 1000 );\n        } catch ( InterruptedException e ) {\n          throw new RuntimeException( e );\n        }\n        parentJob.stopAll();\n      }\n    };\n\n    je.setParentJob( parentJob );\n    Result result = new Result();\n\n    // Start another thread to stop the parent job and unblock the job entry in 1 second\n    t.start();\n\n    long start = System.currentTimeMillis();\n    je.execute( result, 0 );\n    long end = System.currentTimeMillis();\n    assertTrue( \"Total runtime should be less than the wait time if we were properly interrupted\",\n        ( end - start ) < waitTime 
);\n\n    Assert.assertEquals( 1, result.getNrErrors() );\n    assertFalse( result.getResult() );\n\n    // Make sure when an uncaught exception occurs an error log message is generated\n    Assert.assertEquals( 1, loggedErrors.size() );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/common/job/src/test/java/org/pentaho/big/data/kettle/plugins/job/BlockableJobConfigTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.job;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.ArgumentCaptor;\nimport org.mockito.Captor;\nimport org.mockito.Mock;\nimport org.mockito.MockitoAnnotations;\n\nimport java.beans.PropertyChangeEvent;\nimport java.beans.PropertyChangeListener;\n\nimport static junit.framework.Assert.*;\nimport static org.mockito.Matchers.any;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\n\n/**\n * User: RFellows Date: 6/5/12\n */\npublic class BlockableJobConfigTest {\n\n  @Mock PropertyChangeListener listener;\n  @Captor ArgumentCaptor<PropertyChangeEvent> event;\n\n  @Before\n  public void init() {\n    MockitoAnnotations.initMocks( this );\n  }\n\n\n  @Test\n  public void testAddPropertyChangeListener() throws Exception {\n    BlockableJobConfig config = new BlockableJobConfig();\n\n    // make sure it is capturing property change events\n    config.addPropertyChangeListener( listener );\n    config.setJobEntryName( \"jobName1\" );\n    verify( listener, times( 1 ) ).propertyChange( any( PropertyChangeEvent.class ) );\n    verify( listener ).propertyChange( event.capture() );\n    assertEquals( config.getJobEntryName(), event.getValue().getNewValue() );\n\n    // remove the listener & verify that it isn't receiving events anymore\n    config.removePropertyChangeListener( listener );\n    config.setJobEntryName( \"jobName2\" );\n    verify( listener, times( 1 ) ).propertyChange( any( PropertyChangeEvent.class ) ); // still 1, from the previous call\n  }\n\n  
@Test\n  public void testAddPropertyChangeListener_propertyName() throws Exception {\n    BlockableJobConfig config = new BlockableJobConfig();\n\n    // dummy property name, should not indicate any captured prop change\n    config.addPropertyChangeListener( \"dummy\", listener );\n    config.setJobEntryName( \"jobName0\" );\n    verify( listener, times( 0 ) ).propertyChange( any( PropertyChangeEvent.class ) );\n    config.removePropertyChangeListener( \"dummy\", listener );\n\n    // make sure it is capturing property change events\n    config.addPropertyChangeListener( BlockableJobConfig.JOB_ENTRY_NAME, listener );\n    config.setJobEntryName( \"jobName1\" );\n    verify( listener, times( 1 ) ).propertyChange( any( PropertyChangeEvent.class ) );\n    verify( listener ).propertyChange( event.capture() );\n    assertEquals( config.getJobEntryName(), event.getValue().getNewValue() );\n\n    // remove the listener & verify that it isn't receiving events anymore\n    config.removePropertyChangeListener( BlockableJobConfig.JOB_ENTRY_NAME, listener );\n    config.setJobEntryName( \"jobName2\" );\n    verify( listener, times( 1 ) ).propertyChange( any( PropertyChangeEvent.class ) ); // still 1, from the previous call\n  }\n\n  @Test\n  public void testGetterAndSetter() throws Exception {\n    BlockableJobConfig config = new BlockableJobConfig();\n    assertNull( config.getJobEntryName() );\n\n    config.setJobEntryName( \"jobName\" );\n    assertEquals( \"jobName\", config.getJobEntryName() );\n  }\n\n  @Test\n  public void testClone() throws Exception {\n    BlockableJobConfig configOrig = new BlockableJobConfig();\n    configOrig.setJobEntryName( \"Test\" );\n    BlockableJobConfig configCloned = (BlockableJobConfig) configOrig.clone();\n\n    assertNotSame( configOrig, configCloned );\n    assertEquals( configOrig, configCloned );\n\n    configOrig.setJobEntryName( \"New Name\" );\n    assertFalse( configOrig.getJobEntryName().equals( configCloned.getJobEntryName() ) 
);\n\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/common/job/src/test/java/org/pentaho/big/data/kettle/plugins/job/JobEntryUtilsTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\npackage org.pentaho.big.data.kettle.plugins.job;\n\nimport org.junit.Test;\n\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport static org.junit.Assert.*;\n\n/**\n * User: RFellows Date: 6/7/12\n */\npublic class JobEntryUtilsTest {\n\n    @Test\n    public void asBoolean() {\n        VariableSpace variableSpace = new Variables();\n\n        assertFalse( JobEntryUtils.asBoolean( \"not-true\", variableSpace ) );\n        assertFalse( JobEntryUtils.asBoolean( Boolean.FALSE.toString(), variableSpace ) );\n        assertTrue( JobEntryUtils.asBoolean( Boolean.TRUE.toString(), variableSpace ) );\n\n        // No variable set, should attempt convert ${booleanValue} as is\n        assertFalse( JobEntryUtils.asBoolean( \"${booleanValue}\", variableSpace ) );\n\n        variableSpace.setVariable( \"booleanValue\", Boolean.TRUE.toString() );\n        assertTrue( JobEntryUtils.asBoolean( \"${booleanValue}\", variableSpace ) );\n\n        variableSpace.setVariable( \"booleanValue\", Boolean.FALSE.toString() );\n        assertFalse( JobEntryUtils.asBoolean( \"${booleanValue}\", variableSpace ) );\n    }\n\n    @Test\n    public void asLong() {\n        VariableSpace variableSpace = new Variables();\n\n        assertNull( JobEntryUtils.asLong( null, variableSpace ) );\n        assertEquals( Long.valueOf( \"10\", 10 ), JobEntryUtils.asLong( \"10\", variableSpace ) );\n\n        variableSpace.setVariable( \"long\", \"150\" );\n        assertEquals( Long.valueOf( \"150\", 10 ), JobEntryUtils.asLong( \"${long}\", 
variableSpace ) );\n\n        try {\n            JobEntryUtils.asLong( \"NaN\", variableSpace );\n            fail( \"expected number format exception\" );\n        } catch ( NumberFormatException ex ) {\n            // we're good\n        }\n    }\n}\n"
  },
  {
    "path": "kettle-plugins/common/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-kettle-plugins-common</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <modules>\n    <module>ui</module>\n    <module>job</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/common/ui/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-common</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-kettle-plugins-common-ui</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>jar</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n    <platform.version>11.1.0.0-SNAPSHOT</platform.version>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-api-runtimeTest</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.google.guava</groupId>\n      <artifactId>guava</artifactId>\n      <version>${guava.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-ui-swt</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-platform-core</artifactId>\n      <version>${platform.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api-core</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      
<version>${pentaho-hadoop-shims.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.core</groupId>\n      <artifactId>commands</artifactId>\n      <version>3.3.0-I20070605-0010</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-all</artifactId>\n      <version>${dependency.mockito.revision}</version>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/java/org/pentaho/big/data/plugins/common/ui/ClusterTestDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.plugins.common.ui;\n\nimport org.apache.commons.lang.exception.ExceptionUtils;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.graphics.Rectangle;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Dialog;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Event;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.ProgressBar;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.core.gui.WindowProperty;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.trans.step.BaseStepDialog;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.runtime.test.RuntimeTest;\nimport org.pentaho.runtime.test.RuntimeTestProgressCallback;\nimport org.pentaho.runtime.test.RuntimeTestStatus;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.module.RuntimeTestModuleResults;\nimport org.pentaho.runtime.test.result.RuntimeTestResult;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\n\nimport 
java.util.Iterator;\n\n\n/**\n * Dialog for testing a Named Cluster\n *\n * @see <code>NamedCluster</code>\n */\npublic class ClusterTestDialog extends Dialog {\n\n  private static final Class<?> PKG = ClusterTestDialog.class;\n\n  private Shell shell;\n  private PropsUI props;\n\n  private final NamedCluster namedCluster;\n  private final RuntimeTester runtimeTester;\n\n  private RuntimeTestStatus runtimeTestStatus = null;\n\n  /**\n   * The log channel for this dialog.\n   */\n  protected LogChannelInterface log;\n\n  public static ClusterTestDialog create( Shell parent, NamedCluster namedCluster, RuntimeTester clusterTester )\n    throws KettleException {\n    return new ClusterTestDialog( parent, namedCluster, clusterTester );\n  }\n\n  public ClusterTestDialog( Shell parent, NamedCluster namedCluster, RuntimeTester runtimeTester )\n    throws KettleException {\n    super( parent );\n    this.namedCluster = namedCluster;\n    this.runtimeTester = runtimeTester;\n    props = getPropsUIInstance();\n    this.log = KettleLogStore.getLogChannelInterfaceFactory().create( namedCluster );\n  }\n\n  /**\n   * For testing\n   */\n  protected PropsUI getPropsUIInstance() {\n    return PropsUI.getInstance();\n  }\n\n  public RuntimeTestStatus open() {\n    Shell parent = getParent();\n    final Display display = parent.getDisplay();\n    shell = new Shell( parent, SWT.DIALOG_TRIM | SWT.CLOSE | SWT.MAX | SWT.MIN | SWT.ICON );\n    props.setLook( shell );\n    shell.setImage( GUIResource.getInstance().getImageSpoon() );\n\n    int margin = Const.FORM_MARGIN;\n\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = margin;\n    formLayout.marginHeight = margin;\n\n    final int shellWidth = 385;\n    final int shellHeight = 160;\n    shell.setSize( shellWidth, shellHeight );\n    shell.setMinimumSize( shellWidth, shellHeight );\n    shell.setText( BaseMessages.getString( PKG, \"ClusterTestDialog.Title\" ) );\n    shell.setLayout( formLayout );\n\n    
Label testingClusterLabel = new Label( shell, SWT.NONE );\n    testingClusterLabel.setText( BaseMessages.getString( PKG, \"ClusterTestDialog.ClusterTest.Label\" ) );\n    testingClusterLabel.setForeground( GUIResource.getInstance().getColorCrystalTextPentaho() );\n    FormData fd = new FormData();\n    fd.left = new FormAttachment( 0, margin );\n    fd.top = new FormAttachment( 0, margin );\n    testingClusterLabel.setLayoutData( fd );\n\n    final Label testLabel = new Label( shell, SWT.NONE );\n    testLabel.setText( \"Testing cluster...\" );\n    fd = new FormData();\n    fd.top = new FormAttachment( testingClusterLabel, 10 );\n    fd.right = new FormAttachment( 100, -margin );\n    fd.left = new FormAttachment( 0, margin );\n    testLabel.setLayoutData( fd );\n\n    final ProgressBar progressBar = new ProgressBar( shell, SWT.SMOOTH );\n    progressBar.setMinimum( 0 ); // Max tests will be set upon first return\n\n    fd = new FormData();\n    fd.top = new FormAttachment( testLabel, 10 );\n    fd.left = new FormAttachment( 0, margin );\n    fd.right = new FormAttachment( 100, -margin );\n    progressBar.setLayoutData( fd );\n\n    Button wCancel = new Button( shell, SWT.PUSH );\n    wCancel.setText( BaseMessages.getString( PKG, \"System.Button.Cancel\" ) );\n\n    wCancel.addListener( SWT.Selection, new Listener() {\n      public void handleEvent( Event e ) {\n        cancel();\n      }\n    } );\n\n    Button[] buttons = new Button[]{ wCancel };\n    BaseStepDialog.positionBottomRightButtons( shell, buttons, margin, null );\n    shell.setBackgroundMode( SWT.INHERIT_FORCE );\n\n    Rectangle shellBounds = Spoon.getInstance().getShell().getBounds();\n\n    shell.open();\n\n    shell.setLocation(\n      shellBounds.x + ( shellBounds.width - shellWidth ) / 2,\n      shellBounds.y + ( shellBounds.height - shellHeight ) / 2 );\n\n    // Start the cluster tests\n    runtimeTester.runtimeTest( namedCluster, new RuntimeTestProgressCallback() {\n\n      private int 
numTests = -1;\n\n      @Override\n      public void onProgress( final RuntimeTestStatus clusterTestStatus ) {\n        Runnable runnable = getRunnable( progressBar, clusterTestStatus, testLabel );\n        display.asyncExec( runnable );\n      }\n    } );\n\n    while ( !shell.isDisposed() ) {\n      if ( !display.readAndDispatch() ) {\n        display.sleep();\n      }\n    }\n    return runtimeTestStatus;\n  }\n\n  private void cancel() {\n    runtimeTestStatus = null;\n    dispose();\n  }\n\n  public void dispose() {\n    props.setScreen( new WindowProperty( shell ) );\n    shell.dispose();\n  }\n\n\n  /**\n   * For testing\n   */\n  Runnable getRunnable( final ProgressBar progressBar, final RuntimeTestStatus clusterTestStatus, final Label testLabel ) {\n    return new Runnable() {\n      private int numTests = -1;\n\n      @Override\n      public void run() {\n        if ( progressBar.isDisposed() ) {\n          return;\n        }\n\n        // Calculate the number of tests to be run (only the first time!)\n        if ( numTests == -1 ) {\n          numTests = clusterTestStatus.getTestsDone()\n                  + clusterTestStatus.getTestsOutstanding()\n                  + clusterTestStatus.getTestsRunning();\n\n          progressBar.setMaximum( numTests );\n        }\n\n        progressBar.setSelection( clusterTestStatus.getTestsDone() );\n\n        for ( RuntimeTestModuleResults results : clusterTestStatus.getModuleResults() ) {\n          Iterator<RuntimeTest> runningTests = results.getRunningTests().iterator();\n          if ( runningTests.hasNext() ) {\n            testLabel.setText( runningTests.next().getName() );\n          }\n        }\n\n        if ( clusterTestStatus.isDone() ) {\n          runtimeTestStatus = clusterTestStatus;\n          testLabel.setText( BaseMessages.getString( PKG, \"ClusterTestDialog.TestsFinished\" ) );\n          // Log all the executed tests at the end\n          for ( RuntimeTestModuleResults results : 
clusterTestStatus.getModuleResults() ) {\n            log.logBasic( BaseMessages.getString( PKG, \"ClusterTestDialog.ModuleTest\", results.getName() ) );\n            for ( RuntimeTestResult result : results.getRuntimeTestResults() ) {\n              String clusterTestName = result.getRuntimeTest().getName();\n              // If there are no entries, that means there was one test and it becomes the summary-level result\n              if ( result.getRuntimeTestResultEntries().isEmpty() ) {\n                RuntimeTestResultEntry entry = result.getOverallStatusEntry();\n                log.logBasic( BaseMessages.getString( PKG, \"ClusterTestDialog.TestResult\",\n                        clusterTestName,\n                        entry.getSeverity().toString(),\n                        entry.getDescription() ) );\n                log.logBasic( \"\\t\" + entry.getMessage() );\n\n                if ( entry.getException() != null ) {\n                  log.logBasic( ExceptionUtils.getStackTrace( entry.getException() ) );\n                }\n              } else {\n                for ( RuntimeTestResultEntry entry : result.getRuntimeTestResultEntries() ) {\n                  log.logBasic( BaseMessages.getString( PKG, \"ClusterTestDialog.TestResult\",\n                          clusterTestName,\n                          entry.getSeverity().toString(),\n                          entry.getDescription() ) );\n                  log.logBasic( \"\\t\" + entry.getMessage() );\n\n                  if ( entry.getException() != null ) {\n                    log.logBasic( ExceptionUtils.getStackTrace( entry.getException() ) );\n                  }\n                }\n              }\n            }\n          }\n          ClusterTestDialog.this.dispose();\n        }\n      }\n    };\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/java/org/pentaho/big/data/plugins/common/ui/ClusterTestResultsDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.plugins.common.ui;\n\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.ScrolledComposite;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.graphics.Rectangle;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Control;\nimport org.eclipse.swt.widgets.Dialog;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Event;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Link;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.LogChannel;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.core.gui.WindowProperty;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.trans.step.BaseStepDialog;\nimport org.pentaho.di.ui.util.HelpUtils;\nimport org.pentaho.runtime.test.RuntimeTestStatus;\nimport org.pentaho.runtime.test.action.RuntimeTestAction;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.runtime.test.module.RuntimeTestModuleResults;\nimport org.pentaho.runtime.test.result.RuntimeTestResult;\nimport 
org.pentaho.runtime.test.result.RuntimeTestResultEntry;\n\n/**\n * Dialog to display the results of running a suite of tests on a Named Cluster (and its shim/config)\n */\npublic class ClusterTestResultsDialog extends Dialog {\n\n  private static final Class<?> PKG = ClusterTestResultsDialog.class;\n\n  private Shell shell;\n  private PropsUI props;\n\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTestStatus clusterTestStatus;\n\n  /**\n   * The log channel for this dialog.\n   */\n  protected LogChannel log;\n\n  public ClusterTestResultsDialog( Shell parent, RuntimeTestActionService runtimeTestActionService,\n                                   RuntimeTestStatus clusterTestStatus )\n    throws KettleException {\n    super( parent );\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.clusterTestStatus = clusterTestStatus;\n    props = PropsUI.getInstance();\n    this.log = new LogChannel( clusterTestStatus );\n  }\n\n  public String open() {\n    Shell parent = getParent();\n    final Display display = parent.getDisplay();\n    shell = new Shell( parent, SWT.DIALOG_TRIM | SWT.RESIZE | SWT.CLOSE | SWT.MAX | SWT.MIN | SWT.ICON );\n    props.setLook( shell );\n    shell.setImage( GUIResource.getInstance().getImageSpoon() );\n\n    int margin = Const.FORM_MARGIN;\n\n    HelpUtils.createHelpButton( shell,\n        BaseMessages.getString( PKG, \"ClusterTestResultsDialog.Shell.Doc.Title\" ),\n            \"https://docs.pentaho.com/pdia-11.0-install/use-hadoop-with-pentaho/big-data-issues\",\n        BaseMessages.getString( PKG, \"ClusterTestResultsDialog.Shell.Doc.Header\" ) );\n\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = margin;\n    formLayout.marginHeight = margin;\n\n    final int shellWidth = 585;\n    final int shellHeight = 490;\n    shell.setSize( shellWidth, shellHeight );\n    shell.setMinimumSize( shellWidth, shellHeight );\n\n    shell.setText( 
BaseMessages.getString( PKG, \"ClusterTestResultsDialog.Title\" ) );\n    shell.setLayout( formLayout );\n    shell.setBackgroundMode( SWT.INHERIT_FORCE );\n\n    Label clusterResultsLabel = new Label( shell, SWT.NONE );\n    clusterResultsLabel.setText( BaseMessages.getString( PKG, \"ClusterTestResultsDialog.ClusterTestResults.Label\" ) );\n    clusterResultsLabel.setForeground( GUIResource.getInstance().getColorCrystalTextPentaho() );\n    FormData fd = new FormData();\n    fd.left = new FormAttachment( 0, margin );\n    fd.top = new FormAttachment( 0, margin );\n    fd.right = new FormAttachment( 100, -margin );\n    clusterResultsLabel.setLayoutData( fd );\n\n    final ScrolledComposite scrolledComposite = new ScrolledComposite( shell, SWT.V_SCROLL | SWT.BORDER );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, margin );\n    fd.right = new FormAttachment( 100, -margin );\n    fd.bottom = new FormAttachment( 100, -50 );\n    fd.top = new FormAttachment( clusterResultsLabel, margin );\n    scrolledComposite.setLayoutData( fd );\n\n    final Composite mainComposite = new Composite( scrolledComposite, SWT.NONE );\n    scrolledComposite.setContent( mainComposite );\n    scrolledComposite.setExpandHorizontal( true );\n    FormLayout layout = new FormLayout();\n    mainComposite.setLayout( layout );\n\n    ClassLoader myClassLoader = this.getClass().getClassLoader();\n\n    Label separator = null;\n\n    // Add the test results\n    for ( RuntimeTestModuleResults moduleResults : clusterTestStatus.getModuleResults() ) {\n      for ( RuntimeTestResult testResult : moduleResults.getRuntimeTestResults() ) {\n        RuntimeTestResultEntry summary = testResult.getOverallStatusEntry();\n        Label image = new Label( mainComposite, SWT.NONE );\n        switch ( summary.getSeverity() ) {\n\n          case DEBUG:\n          case INFO:\n            // The above are \"Test(s) passed\"\n            image.setImage(\n              
GUIResource.getInstance().getImage( \"ui/images/success_green.svg\", myClassLoader, 22, 22 ) );\n            break;\n          case WARNING:\n          case SKIPPED:\n            // The above are \"Test(s) finished with warnings\"\n            image.setImage(\n              GUIResource.getInstance().getImage( \"ui/images/warning_yellow.svg\", myClassLoader, 22, 22 ) );\n            break;\n          case ERROR:\n          case FATAL:\n            // The above are \"Test(s) failed\"\n            image.setImage(\n              GUIResource.getInstance().getImage( \"ui/images/error_red.svg\", myClassLoader, 22, 22 ) );\n            break;\n        }\n        FormData imageLayoutData = new FormData();\n        imageLayoutData.left = new FormAttachment( 0, margin );\n        if ( separator != null ) {\n          imageLayoutData.top = new FormAttachment( separator, margin );\n        } else {\n          imageLayoutData.top = new FormAttachment( 0, margin );\n        }\n        image.setLayoutData( imageLayoutData );\n\n        Label testName = new Label( mainComposite, SWT.NONE );\n        testName.setText( testResult.getRuntimeTest().getName() );\n        FormData layoutData = new FormData();\n        layoutData.left = new FormAttachment( image, margin );\n        layoutData.right = new FormAttachment( 100, -margin );\n        if ( separator != null ) {\n          layoutData.top = new FormAttachment( separator, margin );\n        } else {\n          layoutData.top = new FormAttachment( 0, margin );\n        }\n        testName.setLayoutData( layoutData );\n\n        // Add test description\n        Label description = new Label( mainComposite, SWT.WRAP );\n        description.setForeground( GUIResource.getInstance().getColorDarkGray() );\n        description.setText( summary.getDescription() );\n        layoutData = new FormData();\n        layoutData.left = new FormAttachment( image, margin );\n        layoutData.right = new FormAttachment( 100, -margin );\n        
layoutData.top = new FormAttachment( testName, margin );\n        description.setLayoutData( layoutData );\n\n        Control linkOrNot = description;\n        // Add action link(s)\n        final RuntimeTestAction runtimeTestAction = summary.getAction();\n        if ( runtimeTestAction != null ) {\n          Link link = new Link( mainComposite, SWT.NONE );\n          link.setText( \"<a>\" + runtimeTestAction.getName() + \"</a>\" );\n          link.setToolTipText( runtimeTestAction.getDescription() );\n          link.addSelectionListener( new SelectionAdapter() {\n            public void widgetSelected( SelectionEvent selectionEvent ) {\n              runtimeTestActionService.handle( runtimeTestAction );\n            }\n          } );\n          layoutData = new FormData();\n          layoutData.left = new FormAttachment( image, margin );\n          layoutData.right = new FormAttachment( 100, -margin );\n          layoutData.top = new FormAttachment( description, margin );\n          link.setLayoutData( layoutData );\n          linkOrNot = link;\n        }\n\n        // Add separator\n        separator = new Label( mainComposite, SWT.HORIZONTAL | SWT.SEPARATOR );\n        separator.setForeground( GUIResource.getInstance().getColorLightGray() );\n        layoutData = new FormData();\n        layoutData.left = new FormAttachment( 0, margin );\n        layoutData.right = new FormAttachment( 100, -margin );\n        layoutData.top = new FormAttachment( linkOrNot, margin );\n        separator.setLayoutData( layoutData );\n      }\n    }\n    mainComposite.setSize( mainComposite.computeSize( SWT.DEFAULT, SWT.DEFAULT ) );\n\n    Button wOk = new Button( shell, SWT.PUSH );\n    wOk.setText( BaseMessages.getString( PKG, \"System.Button.Close\" ) );\n\n    wOk.addListener( SWT.Selection, new Listener() {\n      public void handleEvent( Event e ) {\n        ok();\n      }\n    } );\n\n    Button[] buttons = new Button[]{ wOk };\n    BaseStepDialog.positionBottomRightButtons( 
shell, buttons, margin, null );\n\n    Rectangle shellBounds = Spoon.getInstance().getShell().getBounds();\n\n    shell.pack();\n    shell.open();\n\n    shell.setLocation(\n      shellBounds.x + ( shellBounds.width - shellWidth ) / 2,\n      shellBounds.y + ( shellBounds.height - shellHeight ) / 2 );\n\n    while ( !shell.isDisposed() ) {\n      if ( !display.readAndDispatch() ) {\n        display.sleep();\n      }\n    }\n    return null;\n  }\n\n  private void ok() {\n    dispose();\n  }\n\n  public void dispose() {\n    props.setScreen( new WindowProperty( shell ) );\n    shell.dispose();\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/java/org/pentaho/big/data/plugins/common/ui/CommonDialogFactory.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.plugins.common.ui;\n\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\n/**\n * Created by bryan on 10/19/15.\n */\npublic class CommonDialogFactory {\n  public void createErrorDialog( Shell parent, String title, String message, Exception exception ) {\n    new ErrorDialog( parent, title, message, exception );\n  }\n\n  public NamedClusterDialogImpl createNamedClusterDialog( Shell parent, NamedClusterService namedClusterService,\n                                                          RuntimeTestActionService runtimeTestActionService,\n                                                          RuntimeTester runtimeTester,\n                                                          NamedCluster namedCluster ) {\n    return new NamedClusterDialogImpl( parent, namedClusterService, runtimeTestActionService, runtimeTester,\n      namedCluster );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/java/org/pentaho/big/data/plugins/common/ui/HadoopClusterDelegateImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.plugins.common.ui;\n\nimport org.apache.commons.io.FileUtils;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.metastore.SuppliedMetaStore;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.spoon.delegates.SpoonDelegate;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.metastore.stores.delegate.DelegatingMetaStore;\nimport org.pentaho.metastore.stores.xml.XmlMetaStore;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\nimport java.nio.file.StandardCopyOption;\n\npublic class HadoopClusterDelegateImpl extends SpoonDelegate {\n  public static final String SPOON_DIALOG_ERROR_DELETING_NAMED_CLUSTER_TITLE =\n    \"Spoon.Dialog.ErrorDeletingNamedCluster.Title\";\n  public static final String SPOON_DIALOG_ERROR_DELETING_NAMED_CLUSTER_MESSAGE =\n    \"Spoon.Dialog.ErrorDeletingNamedCluster.Message\";\n  public static final String SPOON_VARIOUS_DUPE_NAME = \"Spoon.Various.DupeName\";\n  public static final String SPOON_DIALOG_ERROR_SAVING_NAMED_CLUSTER_TITLE =\n    
\"Spoon.Dialog.ErrorSavingNamedCluster.Title\";\n  public static final String SPOON_DIALOG_ERROR_SAVING_NAMED_CLUSTER_MESSAGE =\n    \"Spoon.Dialog.ErrorSavingNamedCluster.Message\";\n  public static Class<?> PKG = HadoopClusterDelegateImpl.class; // for i18n purposes, needed by Translator2!!\n  public static final String STRING_NAMED_CLUSTERS = BaseMessages.getString( PKG, \"NamedClusterDialog.HadoopClusters\" );\n\n  public static final String SPOON_DIALOG_ERROR_ADDING_NEW_CONFIGURATION_FOR_CLUSTER_TITLE = \"Spoon.Dialog.ErrorAddingNewConfigurationForCluster.Title\";\n  public static final String SPOON_DIALOG_ERROR_ADDING_NEW_CONFIGURATION_FOR_CLUSTER_MESSAGE = \"Spoon.Dialog.ErrorAddingNewConfigurationForCluster.Message\";\n  public static final String SPOON_DIALOG_ERROR_RENAMING_PREVIOUS_CLUSTER_CONFIG_TITLE = \"Spoon.Dialog.ErrorRenamingPreviousClusterConfig.Title\";\n  public static final String SPOON_DIALOG_ERROR_RENAMING_PREVIOUS_CLUSTER_CONFIG_MESSAGE = \"Spoon.Dialog.ErrorRenamingPreviousClusterConfig.Message\";\n\n  private final NamedClusterService namedClusterService;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n  private final CommonDialogFactory commonDialogFactory;\n\n  public HadoopClusterDelegateImpl( Spoon spoon, NamedClusterService namedClusterService,\n                                    RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester ) {\n    this( spoon, namedClusterService, runtimeTestActionService, runtimeTester, new CommonDialogFactory() );\n  }\n\n  public HadoopClusterDelegateImpl( Spoon spoon, NamedClusterService namedClusterService,\n                                    RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester,\n                                    CommonDialogFactory commonDialogFactory ) {\n    super( spoon );\n    this.namedClusterService = namedClusterService;\n    this.runtimeTestActionService = 
runtimeTestActionService;\n    this.runtimeTester = runtimeTester;\n    this.commonDialogFactory = commonDialogFactory;\n  }\n\n  public void dupeNamedCluster( IMetaStore metaStore, NamedCluster nc, Shell shell ) {\n    if ( metaStore == null ) {\n      metaStore = spoon.getMetaStore();\n    }\n\n    if ( nc == null ) {\n      return;\n    }\n\n    NamedCluster newNamedCluster = nc.clone();\n\n    // The \"duplicate name\" string comes from Spoon, so use its class to get the resource\n    String duplicateName = BaseMessages.getString( Spoon.class, SPOON_VARIOUS_DUPE_NAME ) + nc.getName();\n    newNamedCluster.setName( duplicateName );\n\n    NamedClusterDialogImpl namedClusterDialogImpl = commonDialogFactory\n      .createNamedClusterDialog( shell, namedClusterService, runtimeTestActionService, runtimeTester, newNamedCluster );\n    namedClusterDialogImpl.setNewClusterCheck( true );\n\n    String newClusterName = namedClusterDialogImpl.open();\n    // Check if the process was cancelled\n    if ( newClusterName == null ) {\n      return;\n    }\n\n    try {\n      XmlMetaStore xmlMetaStore = getXmlMetastore( metaStore );\n\n      if ( xmlMetaStore != null ) {\n        if ( newNamedCluster.getName() != null ) {\n          delNamedCluster( metaStore, newNamedCluster );\n        }\n\n        File sourceClusterConfigDir = new File( getNamedClusterConfigsRootDir( xmlMetaStore ) + \"/\" + nc.getName() );\n        File newClusterConfigDir = new File( getNamedClusterConfigsRootDir( xmlMetaStore ) + \"/\" + newClusterName );\n        saveNamedCluster( metaStore, newNamedCluster );\n        FileUtils.copyDirectory( sourceClusterConfigDir, newClusterConfigDir );\n        if ( !nc.getShimIdentifier().equals( newNamedCluster.getShimIdentifier() ) ) {\n          addConfigProperties( newNamedCluster );\n        }\n      }\n    } catch ( Exception e ) {\n      commonDialogFactory.createErrorDialog( spoon.getShell(),\n        BaseMessages.getString( PKG, 
SPOON_DIALOG_ERROR_SAVING_NAMED_CLUSTER_TITLE ),\n        BaseMessages.getString( PKG, SPOON_DIALOG_ERROR_SAVING_NAMED_CLUSTER_MESSAGE, nc.getName() ), e );\n      spoon.refreshTree();\n      return;\n    }\n    spoon.refreshTree( STRING_NAMED_CLUSTERS );\n  }\n\n  public void delNamedCluster( IMetaStore metaStore, NamedCluster namedCluster ) {\n    if ( metaStore == null ) {\n      metaStore = spoon.getMetaStore();\n    }\n    deleteNamedCluster( metaStore, namedCluster );\n    spoon.refreshTree( STRING_NAMED_CLUSTERS );\n    spoon.setShellText();\n  }\n\n  private void backupAndAddShimConfiguration( IMetaStore metaStore, NamedCluster previousNamedCluster, NamedCluster selectedNamedCluster ) {\n    if ( metaStore == null ) {\n      metaStore = spoon.getMetaStore();\n    }\n\n    if ( previousNamedCluster == null ) {\n      return;\n    }\n\n    if ( selectedNamedCluster == null ) {\n      return;\n    }\n\n    XmlMetaStore xmlMetaStore;\n\n    try {\n      xmlMetaStore = getXmlMetastore( metaStore );\n    } catch ( MetaStoreException ex ) {\n      xmlMetaStore = null;\n    }\n\n    String previousNamedClusterName = previousNamedCluster.getName();\n    String selectedNamedClusterName = selectedNamedCluster.getName();\n\n    // get the previous shim configuration\n    File previousShimConfiguration = new File( getNamedClusterConfigsRootDir( xmlMetaStore ) + File.separator + previousNamedClusterName + File.separator + \"config.properties\" );\n    File previousShimConfigurationBackup = new File( getNamedClusterConfigsRootDir( xmlMetaStore ) + File.separator + previousNamedClusterName + File.separator + \"old-config.bak\" );\n\n    try {\n      // backup original configuration\n      Files.move( previousShimConfiguration.toPath(), previousShimConfigurationBackup.toPath(), java.nio.file.StandardCopyOption.REPLACE_EXISTING );\n    } catch ( IOException e ) {\n      String dialogTitle = BaseMessages.getString( PKG, 
SPOON_DIALOG_ERROR_RENAMING_PREVIOUS_CLUSTER_CONFIG_TITLE );\n      String dialogMessage = BaseMessages.getString( PKG, SPOON_DIALOG_ERROR_RENAMING_PREVIOUS_CLUSTER_CONFIG_MESSAGE );\n\n      commonDialogFactory.createErrorDialog( spoon.getShell(), dialogTitle, dialogMessage, e );\n\n      return;\n    }\n\n    try {\n      // create new configuration of driver being created\n      addConfigProperties( selectedNamedCluster );\n    } catch ( Exception e ) {\n      String dialogTitle = BaseMessages.getString( PKG, SPOON_DIALOG_ERROR_ADDING_NEW_CONFIGURATION_FOR_CLUSTER_TITLE );\n      String dialogMessage = java.text.MessageFormat.format( BaseMessages.getString( PKG, SPOON_DIALOG_ERROR_ADDING_NEW_CONFIGURATION_FOR_CLUSTER_MESSAGE ), selectedNamedClusterName );\n\n      commonDialogFactory.createErrorDialog( spoon.getShell(), dialogTitle, dialogMessage, e );\n    }\n  }\n\n  public String editNamedCluster( IMetaStore metaStore, NamedCluster namedCluster, Shell shell ) {\n    if ( metaStore == null ) {\n      metaStore = spoon.getMetaStore();\n    }\n\n    NamedClusterDialogImpl namedClusterDialogImpl = commonDialogFactory.createNamedClusterDialog( shell,\n      namedClusterService, runtimeTestActionService, runtimeTester, namedCluster.clone() );\n    namedClusterDialogImpl.setNewClusterCheck( false );\n\n    String result = namedClusterDialogImpl.open();\n\n    if ( result == null ) {\n      return null;\n    }\n\n    NamedCluster selectedNamedCluster = namedClusterDialogImpl.getNamedCluster();\n\n    // Create the new cluster\n    saveNamedCluster( metaStore, selectedNamedCluster );\n\n    String previousNamedClusterName = namedCluster.getName();\n    String selectedNamedClusterName = selectedNamedCluster.getName();\n\n    String previousShimIdentifier = namedCluster.getShimIdentifier();\n    String selectedShimIdentifier = selectedNamedCluster.getShimIdentifier();\n\n    // if the previous shim identifier differs from the selected shim identifier then backup the old 
config and add the new one\n    if ( !previousShimIdentifier.equals( selectedShimIdentifier ) ) {\n      backupAndAddShimConfiguration( metaStore, namedCluster, selectedNamedCluster );\n    }\n\n    // no name change so return the selected named cluster name\n    if ( previousNamedClusterName != null && previousNamedClusterName.equals( selectedNamedClusterName ) ) {\n      return selectedNamedClusterName;\n    }\n\n    XmlMetaStore xmlMetaStore;\n    try {\n      xmlMetaStore = getXmlMetastore( metaStore );\n    } catch ( MetaStoreException ex ) {\n      xmlMetaStore = null;\n    }\n\n    // Rename the configuration folder to the new name.\n    File source = new File( getNamedClusterConfigsRootDir( xmlMetaStore ) + File.separator + previousNamedClusterName );\n    File destination = new File( getNamedClusterConfigsRootDir( xmlMetaStore ) + File.separator + selectedNamedClusterName );\n\n    try {\n      FileUtils.copyDirectory( source, destination );\n    } catch ( IOException ex ) {\n\n    }\n\n    // Delete the old named cluster.\n    deleteNamedCluster( metaStore, namedCluster );\n\n    // If the user changed the shim, create a new config.properties file that corresponds to that shim.\n    String shimIdentifier = namedClusterDialogImpl.getNamedCluster().getShimIdentifier();\n    if ( !namedCluster.getShimIdentifier().equals( shimIdentifier ) ) {\n      try {\n        addConfigProperties( namedClusterDialogImpl.getNamedCluster() );\n      } catch ( Exception e ) {\n        // Do nothing.\n      }\n    }\n\n    spoon.refreshTree( STRING_NAMED_CLUSTERS );\n    if ( namedClusterDialogImpl.getNamedCluster() != null ) {\n      return namedClusterDialogImpl.getNamedCluster().getName();\n    }\n\n    return null;\n  }\n\n  private XmlMetaStore getXmlMetastore( IMetaStore metaStore ) throws MetaStoreException {\n    XmlMetaStore xmlMetaStore = null;\n\n    if ( metaStore instanceof DelegatingMetaStore ) {\n      IMetaStore activeMetastore = ( (DelegatingMetaStore) 
metaStore ).getActiveMetaStore();\n      if ( activeMetastore instanceof XmlMetaStore ) {\n        xmlMetaStore = (XmlMetaStore) activeMetastore;\n      }\n    } else if ( metaStore instanceof SuppliedMetaStore ) {\n        IMetaStore activeMetastore = ( (SuppliedMetaStore) metaStore ).getCurrentMetaStore();\n        if ( activeMetastore instanceof XmlMetaStore ) {\n          xmlMetaStore = (XmlMetaStore) activeMetastore;\n        }\n    } else if ( metaStore instanceof XmlMetaStore ) {\n      xmlMetaStore = (XmlMetaStore) metaStore;\n    }\n\n    return xmlMetaStore;\n  }\n\n  private String getNamedClusterConfigsRootDir( XmlMetaStore metaStore ) {\n    String configsFolder = null != spoon.getRepository() ? \"ServerConfigs\" : \"Configs\";\n    return System.getProperty( \"user.home\" ) + File.separator + \".pentaho\" + File.separator + \"metastore\"\n      + File.separator + \"pentaho\" + File.separator + \"NamedCluster\" + File.separator + configsFolder;\n  }\n\n  public String newNamedCluster( VariableSpace variableSpace, IMetaStore metaStore, Shell shell ) {\n    if ( metaStore == null ) {\n      metaStore = spoon.getMetaStore();\n    }\n\n    NamedCluster nc = namedClusterService.getClusterTemplate();\n\n    NamedClusterDialogImpl namedClusterDialogImpl = commonDialogFactory\n      .createNamedClusterDialog( shell, namedClusterService, runtimeTestActionService, runtimeTester, nc );\n    namedClusterDialogImpl.setNewClusterCheck( true );\n    String result = namedClusterDialogImpl.open();\n\n    if ( result != null ) {\n      if ( variableSpace != null ) {\n        nc.shareVariablesWith( (VariableSpace) variableSpace );\n      } else {\n        nc.initializeVariablesFrom( null );\n      }\n\n      try {\n        saveNamedCluster( metaStore, nc );\n        addConfigProperties( nc );\n      } catch ( Exception e ) {\n        commonDialogFactory.createErrorDialog( spoon.getShell(),\n          BaseMessages.getString( PKG, 
SPOON_DIALOG_ERROR_SAVING_NAMED_CLUSTER_TITLE ),\n          BaseMessages.getString( PKG, SPOON_DIALOG_ERROR_SAVING_NAMED_CLUSTER_MESSAGE, nc.getName() ), e );\n        spoon.refreshTree();\n        return nc.getName();\n      }\n\n      spoon.refreshTree( STRING_NAMED_CLUSTERS );\n      return nc.getName();\n    }\n    return null;\n  }\n\n  private void addConfigProperties( NamedCluster namedCluster ) throws Exception {\n    Path clusterConfigDirPath = Paths.get( getNamedClusterConfigsRootDir( null ) + \"/\" + namedCluster.getName() );\n    Path configPropertiesPath =\n      Paths.get( getNamedClusterConfigsRootDir( null ) + \"/\" + namedCluster.getName() + \"/\" + \"config.properties\" );\n    Files.createDirectories( clusterConfigDirPath );\n    String sampleConfigProperties = namedCluster.getShimIdentifier() + \"sampleconfig.properties\";\n    InputStream inputStream = this.getClass().getClassLoader().getResourceAsStream( sampleConfigProperties );\n    if ( inputStream != null ) {\n      Files.copy( inputStream, configPropertiesPath, StandardCopyOption.REPLACE_EXISTING );\n    }\n  }\n\n  private void deleteNamedCluster( IMetaStore metaStore, NamedCluster namedCluster ) {\n    try {\n      if ( namedClusterService.read( namedCluster.getName(), metaStore ) != null ) {\n        namedClusterService.delete( namedCluster.getName(), metaStore );\n        XmlMetaStore xmlMetaStore = getXmlMetastore( metaStore );\n        if ( xmlMetaStore != null ) {\n          String path = getNamedClusterConfigsRootDir( xmlMetaStore ) + \"/\" + namedCluster.getName();\n          try {\n            FileUtils.deleteDirectory( new File( path ) );\n          } catch ( IOException e ) {\n            // Do nothing. 
The config directory will be orphaned but functionality will not be impacted.\n          }\n        }\n      }\n    } catch ( MetaStoreException e ) {\n      commonDialogFactory.createErrorDialog( spoon.getShell(),\n        BaseMessages.getString( PKG, SPOON_DIALOG_ERROR_DELETING_NAMED_CLUSTER_TITLE ),\n        BaseMessages.getString( PKG, SPOON_DIALOG_ERROR_DELETING_NAMED_CLUSTER_MESSAGE, namedCluster.getName() ), e );\n    }\n  }\n\n  private void saveNamedCluster( IMetaStore metaStore, NamedCluster namedCluster ) {\n    try {\n      namedClusterService.create( namedCluster, metaStore );\n    } catch ( MetaStoreException e ) {\n      commonDialogFactory.createErrorDialog( spoon.getShell(),\n        BaseMessages.getString( PKG, SPOON_DIALOG_ERROR_SAVING_NAMED_CLUSTER_TITLE ),\n        BaseMessages.getString( PKG, SPOON_DIALOG_ERROR_SAVING_NAMED_CLUSTER_MESSAGE, namedCluster.getName() ), e );\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/java/org/pentaho/big/data/plugins/common/ui/NamedClusterComposite.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.plugins.common.ui;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\nimport java.util.Map;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.CCombo;\nimport org.eclipse.swt.custom.StackLayout;\nimport org.eclipse.swt.events.FocusEvent;\nimport org.eclipse.swt.events.FocusListener;\nimport org.eclipse.swt.events.ModifyEvent;\nimport org.eclipse.swt.events.ModifyListener;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.layout.GridData;\nimport org.eclipse.swt.layout.GridLayout;\nimport org.eclipse.swt.layout.RowLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Control;\nimport org.eclipse.swt.widgets.Group;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.hadoop.shim.api.core.ShimIdentifierInterface;\nimport org.pentaho.platform.engine.core.system.PentahoSystem;\n\npublic class NamedClusterComposite extends Composite {\n\n  private 
static final String NAMED_CLUSTER_DFS_SCHEME = \"named.cluster.dfs.scheme.\";\n\n  private static Class<?> PKG = NamedClusterComposite.class;\n\n  private PropsUI props;\n\n  private GridData gridData = new GridData();\n  private GridData numberGridData = new GridData();\n  private GridData labelGridData = new GridData();\n  private GridData userNameLabelGridData = new GridData();\n  private GridData userNameGridData = new GridData();\n  private GridData passwordLabelGridData = new GridData();\n  private GridData passwordGridData = new GridData();\n  private GridData portLabelGridData = new GridData();\n\n  private static final int ONE_COLUMN = 1;\n  private static final int TWO_COLUMNS = 2;\n\n  private static final int TEXT_FLAGS = SWT.SINGLE | SWT.LEFT | SWT.BORDER;\n  private static final int PASSWORD_FLAGS = TEXT_FLAGS | SWT.PASSWORD;\n\n  private static final String KETTLE_HADOOP_CLUSTER_GATEWAY_CONNECTION = \"KETTLE_HADOOP_CLUSTER_GATEWAY_CONNECTION\";\n\n  private Text nameOfNamedCluster;\n  private Composite compositeSwitcher;\n  private Composite gatewayComposite;\n  private Composite noGatewayComposite;\n\n  private Label jtHostLabel;\n  private TextVar jtHostNameText;\n  private Label jtPortLabel;\n  private TextVar jtPortText;\n  private Group hdfsGroup;\n  private Label hdfsHostLabel;\n  private TextVar hdfsHostText;\n  private Label hdfsPortLabel;\n  private TextVar hdfsPortText;\n  private Label hdfsUsernameLabel;\n  private TextVar hdfsUsernameText;\n  private Label hdfsPasswordLabel;\n  private TextVar hdfsPasswordText;\n\n  private ArrayList<String> schemeValues = new ArrayList<>();\n  private ArrayList<String> schemeNames = new ArrayList<>();\n\n  private StateChangeListener stateChangeListener;\n\n  private interface Callback {\n    public void invoke( NamedCluster nc, TextVar textVar, String value );\n  }\n\n  public NamedClusterComposite( Composite parent, NamedCluster namedCluster, PropsUI props,\n                                
NamedClusterService namedClusterService ) {\n    super( parent, SWT.NONE );\n    props.setLook( this );\n    this.props = props;\n\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = 0;\n    formLayout.marginHeight = 0;\n    setLayout( formLayout );\n\n    FormData fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    setLayoutData( fd );\n\n    gridData.widthHint = 270;\n    numberGridData.widthHint = 80;\n    labelGridData.widthHint = 270;\n    portLabelGridData.widthHint = 80;\n    userNameLabelGridData.widthHint = 165;\n    userNameGridData.widthHint = 165;\n    passwordLabelGridData.widthHint = 185;\n    passwordGridData.widthHint = 185;\n\n    //create head of form\n    Composite confUI = createHeadOfNamedClusterDialog( this, namedCluster );\n\n    // Create a horizontal separator\n    Label topSeparator = new Label( this, SWT.HORIZONTAL | SWT.SEPARATOR );\n    // Attach the separator to the name \n    topSeparator.setLayoutData( createFormDataAndAttachTopControl( confUI ) );\n\n    // create the composite to hold and switch between two subcomponent\n    compositeSwitcher = new Composite( this, SWT.NONE );\n    // attach to the separator\n    compositeSwitcher.setLayoutData( createFormDataAndAttachTopControl( topSeparator ) );\n    StackLayout compositeLayout = new StackLayout();\n    compositeSwitcher.setLayout( compositeLayout );\n\n    // Create a child composite to hold the gateway controls\n    gatewayComposite = new Composite( compositeSwitcher, SWT.NONE );\n    props.setLook( gatewayComposite );\n    GridLayout gatewayCompositeLayout = new GridLayout( ONE_COLUMN, false );\n    gatewayCompositeLayout.marginHeight = 0;\n    gatewayCompositeLayout.marginWidth = 0;\n    gatewayComposite.setLayout( gatewayCompositeLayout );\n    gatewayComposite.setSize( gatewayComposite.computeSize( SWT.DEFAULT, SWT.DEFAULT ) );\n    createGatewayGroup( gatewayComposite, namedCluster 
);\n\n    // Create a child composite to hold the non gateway controls\n    noGatewayComposite = new Composite( compositeSwitcher, SWT.NONE );\n    props.setLook( noGatewayComposite );\n    GridLayout gl = new GridLayout( ONE_COLUMN, false );\n    gl.marginHeight = 0;\n    gl.marginWidth = 0;\n    noGatewayComposite.setLayout( gl );\n    noGatewayComposite.setSize( noGatewayComposite.computeSize( SWT.DEFAULT, SWT.DEFAULT ) );\n\n    createStorageGroup( noGatewayComposite, namedCluster, namedClusterService );\n    createShimVendorGroup( noGatewayComposite, namedCluster, namedClusterService );\n    createHdfsGroup( noGatewayComposite, namedCluster );\n    createJobTrackerGroup( noGatewayComposite, namedCluster );\n    createZooKeeperGroup( noGatewayComposite, namedCluster );\n    createOozieGroup( noGatewayComposite, namedCluster );\n    createKafkaGroup( noGatewayComposite, namedCluster );\n    setHdfsAndJobTrackerState( namedCluster );\n\n    //choose the properly composite based on the cluster settings\n    compositeLayout.topControl = namedCluster.isUseGateway() ? 
gatewayComposite : noGatewayComposite;\n    compositeSwitcher.layout();\n    nameOfNamedCluster.forceFocus();\n  }\n\n  public void setStateChangeListener( StateChangeListener stateChangeListener ) {\n    this.stateChangeListener = stateChangeListener;\n  }\n\n  private FormData createFormDataAndAttachTopControl( Control topControl ) {\n    FormData fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( topControl, 10 );\n    return fd;\n  }\n\n  private Composite createHeadOfNamedClusterDialog( final Composite parentComposite, final NamedCluster namedCluster ) {\n    Composite mainRowComposite = new Composite( parentComposite, SWT.NONE );\n    GridLayout mainRowLayout = new GridLayout( ONE_COLUMN, false );\n    mainRowLayout.marginWidth = 0;\n    mainRowLayout.marginTop = -10;\n    mainRowComposite.setLayout( mainRowLayout );\n    props.setLook( mainRowComposite );\n\n    Composite nameUICluster = new Composite( mainRowComposite, SWT.NONE );\n    props.setLook( nameUICluster );\n    GridLayout nameUILayout = new GridLayout( ONE_COLUMN, false );\n    nameUILayout.marginWidth = 0;\n    nameUILayout.marginTop = 0;\n    nameUICluster.setLayout( nameUILayout );\n\n    createLabel( nameUICluster, BaseMessages.getString( PKG, \"NamedClusterDialog.NamedCluster.Name\" ), labelGridData );\n\n    nameOfNamedCluster = new Text( nameUICluster, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    nameOfNamedCluster.setText( String.valueOf( namedCluster.getName() ) );\n    nameOfNamedCluster.setLayoutData( gridData );\n    props.setLook( nameOfNamedCluster );\n    nameOfNamedCluster.addModifyListener( new ModifyListener() {\n      public void modifyText( final ModifyEvent modifyEvent ) {\n        namedCluster.setName( nameOfNamedCluster.getText() );\n        stateChanged();\n      }\n    } );\n\n    if ( shouldRenderGatewayCheckbox( namedCluster ) ) {\n      // Create gateway composite\n      
Composite gatewayUIComposite = new Composite( mainRowComposite, SWT.NONE );\n      GridLayout layout = new GridLayout( ONE_COLUMN, false );\n      layout.marginHeight = 0;\n      layout.marginWidth = 0;\n      gatewayUIComposite.setLayout( layout );\n      props.setLook( gatewayUIComposite );\n\n      // Create gateway check box\n      final Button gatewayButton = new Button( gatewayUIComposite, SWT.CHECK );\n      gatewayButton.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.NamedCluster.GatewayCheckBoxTitle\" ) );\n      props.setLook( gatewayButton );\n      gatewayButton.setSelection( namedCluster.isUseGateway() );\n      gatewayButton.addSelectionListener( new SelectionAdapter() {\n        public void widgetSelected( SelectionEvent e ) {\n          super.widgetSelected( e );\n          namedCluster.setUseGateway( gatewayButton.getSelection() );\n          StackLayout layout = (StackLayout) compositeSwitcher.getLayout();\n          layout.topControl = namedCluster.isUseGateway() ? gatewayComposite : noGatewayComposite;\n          compositeSwitcher.layout();\n          stateChanged();\n        }\n      } );\n    }\n    return mainRowComposite;\n  }\n\n  private Label createLabel( Composite parent, String text, GridData gd ) {\n    Label label = new Label( parent, SWT.NONE );\n    label.setText( text );\n    label.setLayoutData( gd );\n    props.setLook( label );\n    return label;\n  }\n\n  private TextVar createTextVar( final NamedCluster c, Composite parent, String val, GridData gd, int flags,\n                                 final Callback cb ) {\n    final TextVar textVar = new TextVar( c, parent, flags );\n    // SWT will typically not allow a null text\n    textVar.setText( StringUtils.isEmpty( val ) ? 
StringUtils.EMPTY : val );\n    textVar.setLayoutData( gd );\n    props.setLook( textVar );\n\n    textVar.addModifyListener( new ModifyListener() {\n      public void modifyText( final ModifyEvent modifyEvent ) {\n        cb.invoke( c, textVar, textVar.getText() );\n      }\n    } );\n\n    return textVar;\n  }\n\n  private Composite createGroup( Composite parent, String groupLabel ) {\n    Group group = new Group( parent, SWT.NONE );\n    group.setText( groupLabel );\n    group.setLayout( new RowLayout( SWT.VERTICAL ) );\n    props.setLook( group );\n    GridData groupGridData = new GridData();\n    groupGridData.grabExcessHorizontalSpace = true;\n    groupGridData.horizontalAlignment = SWT.FILL;\n\n    group.setLayoutData( groupGridData );\n    // property parent composite\n    Composite pp = new Composite( group, SWT.NONE );\n    props.setLook( pp );\n    GridLayout gridLayout = new GridLayout( ONE_COLUMN, false );\n    gridLayout.verticalSpacing = -10;\n    gridLayout.marginWidth = 0;\n\n    gridLayout.marginLeft = 5;\n    gridLayout.marginRight = 5;\n    gridLayout.marginTop = -10;\n    gridLayout.marginBottom = -5;\n\n    pp.setLayout( gridLayout );\n    return pp;\n  }\n\n  private Composite createTwoColumnsContainer( Composite parentComposite ) {\n    Composite twoColumnsComposite = new Composite( parentComposite, SWT.NONE );\n    props.setLook( twoColumnsComposite );\n    GridLayout gridLayout = new GridLayout( TWO_COLUMNS, false );\n    gridLayout.marginWidth = 0;\n    twoColumnsComposite.setLayout( gridLayout );\n    return twoColumnsComposite;\n  }\n\n  private void createShimVendorGroup( Composite parentComposite, final NamedCluster cluster,\n                                      final NamedClusterService namedClusterService ) {\n    Composite container = new Composite( parentComposite, SWT.NONE );\n    props.setLook( container );\n    GridLayout gridLayout = new GridLayout( ONE_COLUMN, false );\n    gridLayout.marginWidth = 0;\n    
gridLayout.marginBottom = 5;\n    container.setLayout( gridLayout );\n\n    // Create a storage type Label\n    createLabel( container, \"Vendor shim\", labelGridData );\n\n    // Create a storage type Drop Down\n    final CCombo shimVendorCombo = new CCombo( container, SWT.BORDER );\n    List<ShimIdentifierInterface> shimIdentifers = PentahoSystem.getAll( ShimIdentifierInterface.class );\n    String[] vendorList = shimIdentifers.stream()\n      .map( ShimIdentifierInterface::getId )\n      .filter( shimId -> !shimId.equals( \"apache\" ) )\n      .toArray( String[]::new );\n\n    shimVendorCombo.setItems( vendorList );\n    shimVendorCombo.setEditable( false );\n    shimVendorCombo.select( Arrays.asList( vendorList ).indexOf( cluster.getShimIdentifier() ) );\n    props.setLook( shimVendorCombo );\n\n    shimVendorCombo.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        super.widgetSelected( e );\n        int index = shimVendorCombo.getSelectionIndex();\n        if ( index == -1 ) {\n          index = 0;\n        }\n        cluster.setShimIdentifier( vendorList[ index ] );\n      }\n    } );\n    shimVendorCombo.addFocusListener( new FocusListener() {\n\n      @Override\n      public void focusLost( FocusEvent e ) {\n        String uiInputText = shimVendorCombo.getText();\n        int selectedIndex;\n        if ( Arrays.asList( vendorList ).contains( uiInputText ) ) {\n          selectedIndex = Arrays.asList( vendorList ).indexOf( uiInputText );\n          cluster.setShimIdentifier( vendorList[ selectedIndex ] );\n          shimVendorCombo.select( selectedIndex );\n        }\n      }\n\n      @Override\n      public void focusGained( FocusEvent e ) {\n        // should not do any thing on enter focus\n      }\n    } );\n  }\n\n  private void createStorageGroup( Composite parentComposite, final NamedCluster cluster,\n                                   final NamedClusterService namedClusterService ) {\n    
Map<String, Object> properties = namedClusterService.getProperties();\n    for ( String key : properties.keySet() ) {\n      if ( key.startsWith( NAMED_CLUSTER_DFS_SCHEME ) ) {\n        // will add 1 because we should use the key without \".\"\n        schemeValues.add( key.substring( key.lastIndexOf( \".\" ) + 1 ) );\n        schemeNames.add( (String) properties.get( key ) );\n      }\n    }\n    // if we have undefined scheme ( set by variable for example) than we should add the new scheme\n    if ( !schemeValues.contains( cluster.getStorageScheme() ) ) {\n      schemeValues.add( cluster.getStorageScheme() );\n      schemeNames.add( cluster.getStorageScheme() );\n    }\n\n    Composite container = new Composite( parentComposite, SWT.NONE );\n    props.setLook( container );\n    GridLayout gridLayout = new GridLayout( ONE_COLUMN, false );\n    gridLayout.marginWidth = 0;\n    gridLayout.marginBottom = 5;\n    container.setLayout( gridLayout );\n\n    // Create a storage type Label\n    createLabel( container, BaseMessages.getString( PKG, \"NamedClusterDialog.Storage\" ), labelGridData );\n\n    // Create a storage type Drop Down\n    final CCombo storageCombo = new CCombo( container, SWT.BORDER );\n    storageCombo.setItems( schemeNames.toArray( new String[ schemeNames.size() ] ) );\n    storageCombo.select( schemeValues.indexOf( cluster.getStorageScheme() ) );\n    props.setLook( storageCombo );\n\n    storageCombo.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        super.widgetSelected( e );\n        int index = storageCombo.getSelectionIndex();\n        if ( index == -1 ) {\n          index = 0;\n        }\n        cluster.setStorageScheme( schemeValues.get( index ) );\n        setHdfsAndJobTrackerState( cluster );\n      }\n    } );\n    storageCombo.addFocusListener( new FocusListener() {\n\n      @Override\n      public void focusLost( FocusEvent e ) {\n        String uiInputText = 
storageCombo.getText();\n        int selectedIndex;\n        if ( schemeNames.contains( uiInputText ) ) {\n          selectedIndex = schemeNames.indexOf( uiInputText );\n          cluster.setStorageScheme( schemeValues.get( selectedIndex ) );\n          storageCombo.select( selectedIndex );\n        } else if ( schemeValues.contains( uiInputText ) ) {\n          selectedIndex = schemeValues.indexOf( uiInputText );\n          cluster.setStorageScheme( schemeValues.get( selectedIndex ) );\n          storageCombo.select( selectedIndex );\n        } else {\n          schemeNames.add( uiInputText );\n          schemeValues.add( uiInputText );\n          storageCombo.setItems( schemeNames.toArray( new String[ schemeNames.size() ] ) );\n          cluster.setStorageScheme( uiInputText );\n        }\n        setHdfsAndJobTrackerState( cluster );\n      }\n\n      @Override\n      public void focusGained( FocusEvent e ) {\n        // nothing to do when focus is gained\n      }\n    } );\n  }\n\n  private void createHdfsGroup( Composite parentComposite, final NamedCluster c ) {\n    Composite pp = createGroup( parentComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.HDFS\" ) );\n    hdfsGroup = (Group) pp.getParent();\n    Composite hdfsRowComposite = createTwoColumnsContainer( pp );\n    Composite hostUIComposite = new Composite( hdfsRowComposite, SWT.NONE );\n    props.setLook( hostUIComposite );\n    hostUIComposite.setLayout( new GridLayout( ONE_COLUMN, false ) );\n\n    Composite portUIComposite = new Composite( hdfsRowComposite, SWT.NONE );\n    props.setLook( portUIComposite );\n    portUIComposite.setLayout( new GridLayout( ONE_COLUMN, false ) );\n\n    hdfsHostLabel =\n      createLabel( hostUIComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.Hostname\" ), labelGridData );\n    // hdfs host input\n    Callback hdfsHostCB = new Callback() {\n      public void invoke( NamedCluster nc, TextVar textVar, String value ) {\n        nc.setHdfsHost( 
value );\n      }\n    };\n    hdfsHostText = createTextVar( c, hostUIComposite, c.getHdfsHost(), gridData, TEXT_FLAGS, hdfsHostCB );\n\n    hdfsPortLabel =\n      createLabel( portUIComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.Port\" ), portLabelGridData );\n    // hdfs port input\n    Callback hdfsPortCB = new Callback() {\n      public void invoke( NamedCluster nc, TextVar textVar, String value ) {\n        nc.setHdfsPort( value );\n      }\n    };\n    hdfsPortText = createTextVar( c, portUIComposite, c.getHdfsPort(), numberGridData, TEXT_FLAGS, hdfsPortCB );\n\n    Composite hdfsCredentialsRowComposite = createTwoColumnsContainer( pp );\n\n    Composite usernameUIComposite = new Composite( hdfsCredentialsRowComposite, SWT.NONE );\n    props.setLook( usernameUIComposite );\n    usernameUIComposite.setLayout( new GridLayout( ONE_COLUMN, false ) );\n\n    Composite passwordUIComposite = new Composite( hdfsCredentialsRowComposite, SWT.NONE );\n    props.setLook( passwordUIComposite );\n    passwordUIComposite.setLayout( new GridLayout( ONE_COLUMN, false ) );\n\n    hdfsUsernameLabel = createLabel( usernameUIComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.Username\" ),\n      userNameLabelGridData );\n    // hdfs user input\n    Callback hdfsUsernameCB = new Callback() {\n      public void invoke( NamedCluster nc, TextVar textVar, String value ) {\n        nc.setHdfsUsername( value );\n      }\n    };\n    hdfsUsernameText =\n      createTextVar( c, usernameUIComposite, c.getHdfsUsername(), userNameGridData, TEXT_FLAGS, hdfsUsernameCB );\n\n    hdfsPasswordLabel = createLabel( passwordUIComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.Password\" ),\n      passwordLabelGridData );\n    // hdfs password input\n    Callback hdfsPasswordCB = new Callback() {\n      public void invoke( NamedCluster nc, TextVar textVar, String value ) {\n        nc.setHdfsPassword( nc.encodePassword( value ) );\n      }\n    };\n    
hdfsPasswordText =\n      createTextVar( c, passwordUIComposite, c.decodePassword( c.getHdfsPassword() ), passwordGridData, PASSWORD_FLAGS, hdfsPasswordCB );\n  }\n\n  private void createJobTrackerGroup( Composite parentComposite, final NamedCluster c ) {\n    Composite pp = createGroup( parentComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.JobTracker\" ) );\n    Composite jobTrackerRowComposite = createTwoColumnsContainer( pp );\n    Composite hostUIComposite = new Composite( jobTrackerRowComposite, SWT.NONE );\n    props.setLook( hostUIComposite );\n    hostUIComposite.setLayout( new GridLayout( ONE_COLUMN, false ) );\n\n    Composite portUIComposite = new Composite( jobTrackerRowComposite, SWT.NONE );\n    props.setLook( portUIComposite );\n    portUIComposite.setLayout( new GridLayout( ONE_COLUMN, false ) );\n\n    jtHostLabel =\n      createLabel( hostUIComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.Hostname\" ), labelGridData );\n    // job tracker host input\n    Callback hostCB = new Callback() {\n      public void invoke( NamedCluster nc, TextVar textVar, String value ) {\n        nc.setJobTrackerHost( value );\n      }\n    };\n    jtHostNameText = createTextVar( c, hostUIComposite, c.getJobTrackerHost(), gridData, TEXT_FLAGS, hostCB );\n\n    jtPortLabel =\n      createLabel( portUIComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.Port\" ), portLabelGridData );\n    // job tracker port input\n    Callback portCB = new Callback() {\n      public void invoke( NamedCluster nc, TextVar textVar, String value ) {\n        nc.setJobTrackerPort( value );\n      }\n    };\n    jtPortText = createTextVar( c, portUIComposite, c.getJobTrackerPort(), numberGridData, TEXT_FLAGS, portCB );\n  }\n\n  private void createZooKeeperGroup( Composite parentComposite, final NamedCluster c ) {\n    Composite pp = createGroup( parentComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.ZooKeeper\" ) );\n\n    Composite zooKeeperRowComposite = 
createTwoColumnsContainer( pp );\n\n    Composite hostUIComposite = new Composite( zooKeeperRowComposite, SWT.NONE );\n    props.setLook( hostUIComposite );\n    hostUIComposite.setLayout( new GridLayout( ONE_COLUMN, false ) );\n\n    Composite portUIComposite = new Composite( zooKeeperRowComposite, SWT.NONE );\n    props.setLook( portUIComposite );\n    portUIComposite.setLayout( new GridLayout( ONE_COLUMN, false ) );\n\n    // zookeeper host label\n    createLabel( hostUIComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.Hostname\" ), labelGridData );\n    // zookeeper host input\n    Callback hostCB = new Callback() {\n      public void invoke( NamedCluster nc, TextVar textVar, String value ) {\n        nc.setZooKeeperHost( value );\n      }\n    };\n    createTextVar( c, hostUIComposite, c.getZooKeeperHost(), gridData, TEXT_FLAGS, hostCB );\n\n    // zookeeper port label\n    createLabel( portUIComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.Port\" ), portLabelGridData );\n    // zookeeper port input\n    Callback portCB = new Callback() {\n      public void invoke( NamedCluster nc, TextVar textVar, String value ) {\n        nc.setZooKeeperPort( value );\n      }\n    };\n    createTextVar( c, portUIComposite, c.getZooKeeperPort(), numberGridData, TEXT_FLAGS, portCB );\n  }\n\n  private void createOozieGroup( Composite parentComposite, final NamedCluster namedCluster ) {\n    Composite pp = createGroup( parentComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.Oozie\" ) );\n    Composite container = new Composite( pp, SWT.NONE );\n    props.setLook( container );\n    GridLayout gridLayout = new GridLayout( ONE_COLUMN, false );\n    gridLayout.marginBottom = 5;\n    gridLayout.marginTop = 5;\n    container.setLayout( gridLayout );\n\n    // oozie label\n    createLabel( container, BaseMessages.getString( PKG, \"NamedClusterDialog.URL\" ), labelGridData );\n    // oozie url\n    Callback hostCB = new Callback() {\n      public void invoke( 
NamedCluster nc, TextVar textVar, String value ) {\n        nc.setOozieUrl( value );\n      }\n    };\n    createTextVar( namedCluster, container, namedCluster.getOozieUrl(), gridData, TEXT_FLAGS, hostCB );\n  }\n\n  private void createGatewayGroup( Composite parentComposite, final NamedCluster c ) {\n    Composite pp = createGroup( parentComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.Gateway\" ) );\n    GridLayout gridLayout = new GridLayout( ONE_COLUMN, false );\n    gridLayout.marginBottom = 5;\n    gridLayout.marginTop = 5;\n    Composite gatewayUrlUIComposite = new Composite( pp, SWT.NONE );\n    props.setLook( gatewayUrlUIComposite );\n    gatewayUrlUIComposite.setLayout( gridLayout );\n\n    createLabel( gatewayUrlUIComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.GatewayUrl\" ), labelGridData );\n    // gateway url input\n    Callback gatewayUrlCB = new Callback() {\n      public void invoke( NamedCluster nc, TextVar textVar, String value ) {\n        nc.setGatewayUrl( value );\n        stateChanged();\n      }\n    };\n\n    GridData gd = new GridData();\n    gd.widthHint = 365;\n    createTextVar( c, gatewayUrlUIComposite, c.getGatewayUrl(), gd, TEXT_FLAGS, gatewayUrlCB );\n\n    Composite gatewayCredentialsRowComposite = createTwoColumnsContainer( pp );\n\n    Composite usernameUIComposite = new Composite( gatewayCredentialsRowComposite, SWT.NONE );\n    props.setLook( usernameUIComposite );\n    GridLayout userNamelayout = new GridLayout( ONE_COLUMN, false );\n    usernameUIComposite.setLayout( userNamelayout );\n\n    Composite passwordUIComposite = new Composite( gatewayCredentialsRowComposite, SWT.NONE );\n    props.setLook( passwordUIComposite );\n    GridLayout passwordLayout = new GridLayout( ONE_COLUMN, false );\n    passwordUIComposite.setLayout( passwordLayout );\n\n    createLabel( usernameUIComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.Username\" ),\n      userNameLabelGridData );\n    // gateway user 
input\n    Callback gatewayUsernameCB = new Callback() {\n      public void invoke( NamedCluster nc, TextVar textVar, String value ) {\n        nc.setGatewayUsername( value );\n        stateChanged();\n      }\n    };\n    createTextVar( c, usernameUIComposite, c.getGatewayUsername(), userNameGridData, TEXT_FLAGS, gatewayUsernameCB );\n\n    createLabel( passwordUIComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.Password\" ),\n      passwordLabelGridData );\n    // gateway password input\n    Callback gatewayPasswordCB = new Callback() {\n      public void invoke( NamedCluster nc, TextVar textVar, String value ) {\n        nc.setGatewayPassword( nc.encodePassword( value ) );\n        stateChanged();\n      }\n    };\n    createTextVar( c, passwordUIComposite, c.decodePassword( c.getGatewayPassword() ), passwordGridData, PASSWORD_FLAGS,\n      gatewayPasswordCB );\n  }\n\n  private void createKafkaGroup( Composite parentComposite, final NamedCluster namedCluster ) {\n    Composite pp = createGroup( parentComposite, BaseMessages.getString( PKG, \"NamedClusterDialog.Kafka.GroupTitle\" ) );\n    Composite container = new Composite( pp, SWT.NONE );\n    props.setLook( container );\n    GridLayout gridLayout = new GridLayout( ONE_COLUMN, false );\n    gridLayout.marginBottom = 5;\n    gridLayout.marginTop = 5;\n    container.setLayout( gridLayout );\n\n    // kafka label\n    createLabel( container, BaseMessages.getString( PKG, \"NamedClusterDialog.Kafka.BootstrapServers.Label\" ),\n      labelGridData );\n    // kafka bootstrap servers\n    Callback bootstrapServersCB = new Callback() {\n      public void invoke( NamedCluster nc, TextVar textVar, String value ) {\n        nc.setKafkaBootstrapServers( value );\n      }\n    };\n    createTextVar( namedCluster, container, namedCluster.getKafkaBootstrapServers(), gridData, TEXT_FLAGS,\n      bootstrapServersCB );\n  }\n\n  private void setHdfsAndJobTrackerState( NamedCluster cluster ) {\n    boolean state = 
!cluster.isMapr();\n    jtHostLabel.setEnabled( state );\n    jtHostNameText.setEnabled( state );\n    jtPortLabel.setEnabled( state );\n    jtPortText.setEnabled( state );\n    hdfsHostLabel.setEnabled( state );\n    hdfsHostText.setEnabled( state );\n    hdfsPortLabel.setEnabled( state );\n    hdfsPortText.setEnabled( state );\n    hdfsUsernameLabel.setEnabled( state );\n    hdfsUsernameText.setEnabled( state );\n    hdfsPasswordLabel.setEnabled( state );\n    hdfsPasswordText.setEnabled( state );\n    String storageName = cluster.getStorageScheme();\n    //get the human readable name\n    if ( !Utils.isEmpty( schemeNames ) && !Utils.isEmpty( schemeValues ) ) {\n      storageName = schemeNames.get( schemeValues.indexOf( storageName ) );\n    }\n    hdfsGroup.setText( storageName );\n  }\n\n  private boolean shouldRenderGatewayCheckbox( final NamedCluster namedCluster ) {\n    return Boolean.valueOf( namedCluster.getVariable( KETTLE_HADOOP_CLUSTER_GATEWAY_CONNECTION ) );\n  }\n\n  private void stateChanged() {\n    if ( stateChangeListener != null ) {\n      stateChangeListener.stateModified();\n    }\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/java/org/pentaho/big/data/plugins/common/ui/NamedClusterDialogImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.plugins.common.ui;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.eclipse.jface.dialogs.MessageDialog;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.ScrolledComposite;\nimport org.eclipse.swt.events.ShellAdapter;\nimport org.eclipse.swt.events.ShellEvent;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Dialog;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Event;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.plugins.LifecyclePluginType;\nimport org.pentaho.di.core.plugins.PluginInterface;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.core.gui.WindowProperty;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.trans.step.BaseStepDialog;\nimport org.pentaho.di.ui.util.HelpUtils;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport 
org.pentaho.runtime.test.RuntimeTestStatus;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\n/**\n * Dialog that allows you to edit the settings of a named cluster.\n *\n * @see <code>NamedCluster</code>\n */\npublic class NamedClusterDialogImpl extends Dialog {\n  private static final int RESULT_NO = 1;\n  private static final int DIALOG_WIDTH = 459;\n  private static Class<?> PKG = NamedClusterDialogImpl.class; // for i18n purposes, needed by Translator2!!\n  private final NamedClusterService namedClusterService;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n  private Shell shell;\n  private PropsUI props;\n  private NamedCluster originalNamedCluster;\n  private NamedCluster namedCluster;\n  private boolean newClusterCheck = false;\n  private String result;\n\n  public NamedClusterDialogImpl( Shell parent, NamedClusterService namedClusterService,\n                                 RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester ) {\n    this( parent, namedClusterService, runtimeTestActionService, runtimeTester, null );\n  }\n\n  public NamedClusterDialogImpl( Shell parent, NamedClusterService namedClusterService,\n                                 RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester,\n                                 NamedCluster namedCluster ) {\n    super( parent );\n    this.namedClusterService = namedClusterService;\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.runtimeTester = runtimeTester;\n    props = PropsUI.getInstance();\n\n    this.namedCluster = namedCluster;\n    this.originalNamedCluster = namedCluster == null ? 
null : namedCluster.clone();\n  }\n\n  public NamedCluster getNamedCluster() {\n    return namedCluster;\n  }\n\n  public void setNamedCluster( NamedCluster namedCluster ) {\n    this.namedCluster = namedCluster;\n    this.originalNamedCluster = namedCluster.clone();\n  }\n\n  public boolean isNewClusterCheck() {\n    return newClusterCheck;\n  }\n\n  public void setNewClusterCheck( boolean newClusterCheck ) {\n    this.newClusterCheck = newClusterCheck;\n  }\n\n  public void dispose() {\n    props.setScreen( new WindowProperty( shell ) );\n    shell.dispose();\n  }\n\n  public String open() {\n    Shell parent = getParent();\n    Display display = parent.getDisplay();\n    shell = new Shell( parent, SWT.DIALOG_TRIM | SWT.CLOSE | SWT.ICON | SWT.RESIZE );\n    props.setLook( shell );\n    shell.setImage( GUIResource.getInstance().getImageSpoon() );\n    shell.setMinimumSize( DIALOG_WIDTH, 458 );\n    shell.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.Shell.Title\" ) );\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = 15;\n    formLayout.marginHeight = 15;\n    shell.setLayout( formLayout );\n\n    BaseStepDialog.setSize( shell );\n\n    // Create help button\n    String docUrl = Const.getDocUrl( BaseMessages.getString( PKG, \"NamedClusterDialog.Shell.Doc\" ) );\n    PluginInterface plugin =\n        PluginRegistry.getInstance().findPluginWithId( LifecyclePluginType.class, /* TODO */ \"HadoopSpoonPlugin\" );\n    HelpUtils.createHelpButton( shell, HelpUtils.getHelpDialogTitle( plugin ),\n        docUrl,\n        BaseMessages.getString( PKG, \"NamedClusterDialog.Shell.Title\" ) );\n\n    // Buttons\n    Button wCancel = new Button( shell, SWT.PUSH );\n    wCancel.setText( BaseMessages.getString( PKG, \"System.Button.Cancel\" ) );\n    FormData fd = new FormData();\n\n    Button wOK = new Button( shell, SWT.PUSH );\n    wOK.setText( BaseMessages.getString( PKG, \"System.Button.OK\" ) );\n\n    Button wTest = new Button( shell, 
SWT.PUSH );\n    wTest.setText( BaseMessages.getString( PKG, \"System.Button.Test\" ) );\n\n    Button[] buttons = new Button[] { wTest, wOK, wCancel };\n\n    BaseStepDialog.positionBottomRightButtons( shell, buttons, Const.FORM_MARGIN, null );\n\n    // Create a horizontal separator\n    Label bottomSeparator = new Label( shell, SWT.HORIZONTAL | SWT.SEPARATOR );\n\n    fd = new FormData();\n    fd.bottom = new FormAttachment( wCancel, -15 );\n    fd.left = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    bottomSeparator.setLayoutData( fd );\n\n    ScrolledComposite scrolledComposite = new ScrolledComposite( shell, SWT.V_SCROLL );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    fd.bottom = new FormAttachment( bottomSeparator, -15 );\n    scrolledComposite.setLayoutData( fd );\n    props.setLook( scrolledComposite );\n\n    NamedClusterComposite namedClusterComposite = new NamedClusterComposite( scrolledComposite, namedCluster, props, namedClusterService );\n    scrolledComposite.setContent( namedClusterComposite );\n    namedClusterComposite.pack();\n\n    // Add listeners\n    wTest.addListener( SWT.Selection, new Listener() {\n      @Override\n      public void handleEvent( Event event ) {\n        try {\n          RuntimeTestStatus testStatus = ClusterTestDialog.create( shell, getNamedCluster(), runtimeTester ).open();\n          if ( testStatus != null ) {\n            // We have good results, show the dialog\n            try {\n              new ClusterTestResultsDialog( shell, runtimeTestActionService, testStatus ).open();\n            } catch ( KettleException ke ) {\n              new ErrorDialog( shell, BaseMessages.getString( PKG, \"ClusterTestResultsDialog.FailedToOpen\" ),\n                ke.getMessage(), ke );\n            }\n          }\n        } catch ( KettleException e ) {\n          // The exception 
already has the message localized\n          new ErrorDialog( shell, BaseMessages.getString( PKG, \"NamedClusterDialog.DialogError\" ),\n            e.getMessage(), e );\n        }\n      }\n    } );\n    wOK.addListener( SWT.Selection, e -> ok() );\n    wCancel.addListener( SWT.Selection, e -> cancel() );\n\n    // Detect X or ALT-F4 or something that kills this window...\n    shell.addShellListener( new ShellAdapter() {\n      public void shellClosed( ShellEvent e ) {\n        cancel();\n      }\n    } );\n\n\n    namedClusterComposite.setStateChangeListener( () -> {\n      boolean enabled = !namedCluster.isUseGateway()\n          || ( StringUtils.isNotBlank( namedCluster.getName() )\n          && StringUtils.isNotBlank( namedCluster.getGatewayUrl() )\n          && StringUtils.isNotBlank( namedCluster.getGatewayUsername() )\n          && StringUtils.isNotBlank( namedCluster.decodePassword( namedCluster.getGatewayPassword() ) ) );\n\n      if ( wOK.isEnabled() != enabled ) {\n        wOK.setEnabled( enabled );\n      }\n    } );\n\n    shell.open();\n    while ( !shell.isDisposed() ) {\n      if ( !display.readAndDispatch() ) {\n        display.sleep();\n      }\n    }\n    return result;\n  }\n\n  private void cancel() {\n    result = null;\n    dispose();\n  }\n\n  public void ok() {\n    result = namedCluster.getName();\n    if ( StringUtils.isBlank( result ) ) {\n      MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n      mb.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.Error\" ) );\n      mb.setMessage( BaseMessages.getString( PKG, \"NamedClusterDialog.ClusterNameMissing\" ) );\n      mb.open();\n      return;\n    } else if ( StringUtils.isBlank( namedCluster.getShimIdentifier() ) ) {\n      MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n      mb.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.Error\" ) );\n      mb.setMessage( BaseMessages.getString( PKG, 
\"NamedClusterDialog.ShimIdentifierMissing\" ) );\n      mb.open();\n      return;\n    } else if ( newClusterCheck || !originalNamedCluster.getName().equals( result ) ) {\n      // check that the getName does not already exist\n      try {\n        NamedCluster fetched = namedClusterService.read( result, Spoon.getInstance().getMetaStore() );\n        if ( fetched != null ) {\n\n          String title = BaseMessages.getString( PKG, \"NamedClusterDialog.ClusterNameExists.Title\" );\n          String message = BaseMessages.getString( PKG, \"NamedClusterDialog.ClusterNameExists\", result );\n          String replaceButton = BaseMessages.getString( PKG, \"NamedClusterDialog.ClusterNameExists.Replace\" );\n          String doNotReplaceButton =\n            BaseMessages.getString( PKG, \"NamedClusterDialog.ClusterNameExists.DoNotReplace\" );\n          MessageDialog dialog =\n            new MessageDialog( shell, title, null, message, MessageDialog.WARNING, new String[]{ replaceButton,\n              doNotReplaceButton }, 0 );\n\n          // there already exists a cluster with the new getName, ask the user\n          if ( RESULT_NO == dialog.open() ) {\n            // do not exist dialog\n            return;\n          }\n        }\n      } catch ( MetaStoreException ignored ) {\n        // the lookup failed, the cluster does not exist, move on to dispose\n      }\n    }\n    dispose();\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/java/org/pentaho/big/data/plugins/common/ui/NamedClusterWidgetImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.plugins.common.ui;\n\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.events.SelectionListener;\nimport org.eclipse.swt.layout.RowData;\nimport org.eclipse.swt.layout.RowLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Combo;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Event;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.di.base.AbstractMeta;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\nimport java.util.List;\n\npublic class NamedClusterWidgetImpl extends Composite {\n  private static Class<?> PKG = NamedClusterWidgetImpl.class;\n  private NamedClusterService namedClusterService;\n  private Combo nameClusterCombo;\n  private HadoopClusterDelegateImpl ncDelegate;\n\n  public NamedClusterWidgetImpl( Composite parent, boolean showLabel, NamedClusterService namedClusterService,\n                                 RuntimeTestActionService runtimeTestActionService, RuntimeTester clusterTester, boolean enableNewEditNameClusterButtons ) {\n    super( parent, SWT.NONE );\n    this.namedClusterService = namedClusterService;\n    ncDelegate = 
new HadoopClusterDelegateImpl( Spoon.getInstance(), this.namedClusterService,\n      runtimeTestActionService, clusterTester );\n\n    PropsUI props = PropsUI.getInstance();\n    props.setLook( this );\n\n    RowLayout layout = new RowLayout( SWT.HORIZONTAL );\n    //layout.center = true; //TODO EC:FIX THIS\n    setLayout( layout );\n\n    if ( showLabel ) {\n      Label nameLabel = new Label( this, SWT.NONE );\n      nameLabel.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.Shell.Title\" ) + \":\" );\n      props.setLook( nameLabel );\n    }\n\n    setNameClusterCombo( new Combo( this, SWT.DROP_DOWN | SWT.READ_ONLY ) );\n    getNameClusterCombo().setLayoutData( new RowData( 150, SWT.DEFAULT ) );\n\n    if ( enableNewEditNameClusterButtons ) {\n      Button editButton = new Button( this, SWT.NONE );\n      editButton.setText( BaseMessages.getString( PKG, \"NamedClusterWidget.NamedCluster.Edit\" ) );\n      editButton.addListener( SWT.Selection, new Listener() {\n        public void handleEvent( Event e ) {\n          editNamedCluster();\n        }\n      } );\n      props.setLook( editButton );\n    }\n\n    if ( enableNewEditNameClusterButtons ) {\n      Button newButton = new Button( this, SWT.NONE );\n      newButton.setText( BaseMessages.getString( PKG, \"NamedClusterWidget.NamedCluster.New\" ) );\n      newButton.addListener( SWT.Selection, new Listener() {\n        public void handleEvent( Event e ) {\n          newNamedCluster();\n        }\n      } );\n      props.setLook( newButton );\n\n      initiate();\n    }\n  }\n\n  private void newNamedCluster() {\n    Spoon spoon = Spoon.getInstance();\n    AbstractMeta meta = (AbstractMeta) spoon.getActiveMeta();\n    ncDelegate.newNamedCluster( meta, spoon.getMetaStore(), getShell() );\n    initiate();\n  }\n\n  private void editNamedCluster() {\n    Spoon spoon = Spoon.getInstance();\n    AbstractMeta meta = (AbstractMeta) spoon.getActiveMeta();\n    if ( meta != null ) {\n      List<NamedCluster> 
namedClusters = null;\n      try {\n        namedClusters = namedClusterService.list( spoon.getMetaStore() );\n      } catch ( MetaStoreException e ) {\n        //Ignore\n      }\n\n      int index = getNameClusterCombo().getSelectionIndex();\n      if ( index > -1 && namedClusters != null && namedClusters.size() > 0 ) {\n        ncDelegate.editNamedCluster( spoon.getMetaStore(), namedClusters\n          .get( index ), getShell() );\n        initiate();\n      }\n    }\n  }\n\n  protected String[] getNamedClusterNames() {\n    try {\n      return namedClusterService.listNames( Spoon.getInstance().getMetaStore() )\n        .toArray( new String[ 0 ] );\n    } catch ( MetaStoreException e ) {\n      return new String[ 0 ];\n    }\n  }\n\n  public void initiate() {\n    int selectedIndex = getNameClusterCombo().getSelectionIndex();\n    getNameClusterCombo().removeAll();\n    getNameClusterCombo().setItems( getNamedClusterNames() );\n    getNameClusterCombo().select( selectedIndex );\n  }\n\n  public NamedCluster getSelectedNamedCluster() {\n    Spoon spoon = Spoon.getInstance();\n    int index = getNameClusterCombo().getSelectionIndex();\n    if ( index > -1 ) {\n      String name = getNameClusterCombo().getItem( index );\n      try {\n        return namedClusterService.read( name, spoon.getMetaStore() );\n      } catch ( MetaStoreException e ) {\n        return null;\n      }\n    }\n    return null;\n  }\n\n  public void setSelectedNamedCluster( String name ) {\n    getNameClusterCombo().deselectAll();\n    for ( int i = 0; i < getNameClusterCombo().getItemCount(); i++ ) {\n      if ( getNameClusterCombo().getItem( i ).equals( name ) ) {\n        getNameClusterCombo().select( i );\n        return;\n      }\n    }\n  }\n\n  public void addSelectionListener( SelectionListener selectionListener ) {\n    getNameClusterCombo().addSelectionListener( selectionListener );\n  }\n\n  public Combo getNameClusterCombo() {\n    return nameClusterCombo;\n  }\n\n  protected void 
setNameClusterCombo( Combo nameClusterCombo ) {\n    this.nameClusterCombo = nameClusterCombo;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/java/org/pentaho/big/data/plugins/common/ui/StateChangeListener.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.plugins.common.ui;\n\ninterface StateChangeListener {\n  void stateModified();\n}\n"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/java/org/pentaho/big/data/plugins/common/ui/TestResultComposite.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\npackage org.pentaho.big.data.plugins.common.ui;\n\nimport org.eclipse.swt.widgets.Composite;\n\n/**\n * Created by mburgess on 8/27/15.\n */\npublic class TestResultComposite extends Composite {\n\n  public TestResultComposite( Composite parent, int style ) {\n    super( parent, style );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/java/org/pentaho/big/data/plugins/common/ui/VfsFileChooserHelper.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.plugins.common.ui;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.vfs.ui.CustomVfsUiPanel;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport java.lang.reflect.InvocationTargetException;\nimport java.lang.reflect.Method;\n\n/**\n * User: RFellows Date: 6/8/12\n */\npublic class VfsFileChooserHelper {\n  private static final Logger logger = LogManager.getLogger( VfsFileChooserHelper.class );\n  private VfsFileChooserDialog fileChooserDialog = null;\n  private Shell shell = null;\n  private VariableSpace variableSpace = null;\n  private FileSystemOptions fileSystemOptions = null;\n  private String defaultScheme = \"file\";\n  private String[] schemeRestrictions = null;\n  private boolean showFileScheme = true;\n\n  public VfsFileChooserHelper( Shell shell, VfsFileChooserDialog fileChooserDialog, VariableSpace variableSpace ) {\n    this( shell, fileChooserDialog, variableSpace, new FileSystemOptions() );\n  }\n\n  public VfsFileChooserHelper( Shell shell, VfsFileChooserDialog 
fileChooserDialog, VariableSpace variableSpace,\n                               FileSystemOptions fileSystemOptions ) {\n    this.fileChooserDialog = fileChooserDialog;\n    this.shell = shell;\n    this.variableSpace = variableSpace;\n    this.fileSystemOptions = fileSystemOptions;\n    this.schemeRestrictions = new String[0];\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri ) throws KettleException,\n    FileSystemException {\n    return browse( fileFilters, fileFilterNames, fileUri, VfsFileChooserDialog.VFS_DIALOG_OPEN_DIRECTORY );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, int fileDialogMode )\n    throws KettleException, FileSystemException {\n    return browse( fileFilters, fileFilterNames, fileUri, fileSystemOptions, fileDialogMode );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, int fileDialogMode,\n      boolean showLocation ) throws KettleException, FileSystemException {\n    return browse( fileFilters, fileFilterNames, fileUri, fileSystemOptions, fileDialogMode, showLocation, true );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, int fileDialogMode,\n      boolean showLocation, boolean showCustomUI ) throws KettleException, FileSystemException {\n    return browse( fileFilters, fileFilterNames, fileUri, fileSystemOptions, fileDialogMode, showLocation, showCustomUI );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, FileSystemOptions opts )\n    throws KettleException, FileSystemException {\n    return browse( fileFilters, fileFilterNames, fileUri, opts, VfsFileChooserDialog.VFS_DIALOG_OPEN_DIRECTORY );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, FileSystemOptions opts,\n      int fileDialogMode ) throws KettleException, FileSystemException 
{\n    return browse( fileFilters, fileFilterNames, fileUri, opts, fileDialogMode, true, true );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, FileSystemOptions opts,\n      int fileDialogMode, boolean showLocation, boolean showCustomUI ) throws KettleException, FileSystemException {\n    // Get current file\n    FileObject rootFile = null;\n    FileObject initialFile = null;\n    Spoon spoon = Spoon.getInstance();\n\n    if ( fileUri != null ) {\n      initialFile = KettleVFS.getInstance( spoon.getExecutionBowl() ).getFileObject( fileUri, variableSpace, opts );\n    } else {\n      initialFile = KettleVFS.getInstance( spoon.getExecutionBowl() )\n        .getFileObject( Spoon.getInstance().getLastFileOpened() );\n    }\n    rootFile = initialFile.getFileSystem().getRoot();\n    fileChooserDialog.setRootFile( rootFile );\n    fileChooserDialog.setInitialFile( initialFile );\n    fileChooserDialog.defaultInitialFile = rootFile;\n\n    FileObject selectedFile = null;\n    selectedFile = fileChooserDialog.open(\n        shell, this.schemeRestrictions, getDefaultScheme(), showFileScheme(), initialFile.getName().getPath(),\n        fileFilters, fileFilterNames, returnsUserAuthenticatedFileObjects(), fileDialogMode, showLocation, showCustomUI );\n\n    return selectedFile;\n  }\n\n  public VariableSpace getVariableSpace() {\n    return variableSpace;\n  }\n\n  public void setVariableSpace( VariableSpace variableSpace ) {\n    this.variableSpace = variableSpace;\n  }\n\n  public FileSystemOptions getFileSystemOptions() {\n    return fileSystemOptions;\n  }\n\n  public void setFileSystemOptions( FileSystemOptions fileSystemOptions ) {\n    this.fileSystemOptions = fileSystemOptions;\n  }\n\n  public String getDefaultScheme() {\n    return defaultScheme;\n  }\n\n  public void setDefaultScheme( String defaultScheme ) {\n    this.defaultScheme = defaultScheme;\n  }\n\n  public String getSchemeRestriction() {\n    String 
schemaRestriction = null;\n    if ( this.schemeRestrictions != null && this.schemeRestrictions.length > 0 ) {\n      schemaRestriction = this.schemeRestrictions[0];\n    }\n    return schemaRestriction;\n  }\n\n  public void setSchemeRestriction( String schemeRestriction ) {\n    this.schemeRestrictions = new String[1];\n    this.schemeRestrictions[0] = schemeRestriction;\n  }\n\n  public void setSchemeRestrictions( String[] schemeRestrictions ) {\n    this.schemeRestrictions = schemeRestrictions;\n  }\n\n  public boolean showFileScheme() {\n    return this.showFileScheme;\n  }\n\n  public void setShowFileScheme( boolean showFileScheme ) {\n    this.showFileScheme = showFileScheme;\n  }\n\n  protected boolean returnsUserAuthenticatedFileObjects() {\n    return false;\n  }\n\n  public void setNamedCluster( NamedCluster namedCluster ) {\n    VfsFileChooserDialog dialog = Spoon.getInstance().getVfsFileChooserDialog( null, null );\n    for ( CustomVfsUiPanel currentPanel : dialog.getCustomVfsUiPanels() ) {\n      if ( currentPanel != null ) {\n        try {\n          Method setNamedCluster = currentPanel.getClass().getMethod( \"setNamedCluster\", new Class[] { String.class } );\n          setNamedCluster.invoke( currentPanel, namedCluster.getName() );\n        } catch ( NoSuchMethodException e ) {\n          if ( logger.isDebugEnabled() ) {\n            logger.debug( \"Couldn't set named cluster \" + namedCluster.getName() + \" on \" + currentPanel + \" because it doesn't have setNamedCluster method.\", e );\n          }\n        } catch ( InvocationTargetException e ) {\n          if ( logger.isDebugEnabled() ) {\n            logger.debug( \"Couldn't set named cluster \" + namedCluster.getName() + \" on \" + currentPanel + \" because of exception.\", e.getCause() );\n          }\n        } catch ( IllegalAccessException e ) {\n          if ( logger.isDebugEnabled() ) {\n            logger.debug( \"Couldn't set named cluster \" + namedCluster.getName() + \" on \" + 
currentPanel + \" because setNamedCluster method isn't accessible.\", e );\n          }\n        }\n      }\n    }\n  }\n\n  @VisibleForTesting\n    VfsFileChooserDialog getFileChooserDialog() {\n    return fileChooserDialog;\n  }\n\n  @VisibleForTesting\n    Shell getShell() {\n    return shell;\n  }\n\n  @VisibleForTesting\n    String[] getSchemeRestrictions() {\n    return schemeRestrictions;\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/resources/apachesampleconfig.properties",
    "content": "#\n#  HITACHI VANTARA PROPRIETARY AND CONFIDENTIAL\n#\n#  Copyright 2007 - 2022 Hitachi Vantara. All rights reserved.\n#\n#  NOTICE: All information including source code contained herein is, and\n#  remains the sole property of Hitachi Vantara and its licensors. The intellectual\n#  and technical concepts contained herein are proprietary and confidential\n#  to, and are trade secrets of Hitachi Vantara and may be covered by U.S. and foreign\n#  patents, or patents in process, and are protected by trade secret and\n#  copyright laws. The receipt or possession of this source code and/or related\n#  information does not convey or imply any rights to reproduce, disclose or\n#  distribute its contents, or to manufacture, use, or sell anything that it\n#  may describe, in whole or in part. Any reproduction, modification, distribution,\n#  or public display of this information without the express written authorization\n#  from Hitachi Vantara is strictly prohibited and in violation of applicable laws and\n#  international treaties. 
Access to the source code contained herein is strictly\n#  prohibited to anyone except those individuals and entities who have executed\n#  confidentiality and non-disclosure agreements or other agreements with Hitachi Vantara,\n#  explicitly covering such access.\n#\n# ADDITIONAL RESOURCES\n# For additional questions please visit help.pentaho.com\n# Search for impersonation or secure impersonation\n#\n\n#\n#\n# THE NAME OF YOUR CONFIGURATION\nname=Apache Generic\n#\n\n# \n#\n# GENERAL CONFIGURATIONS\n# These  are comma-separated lists of the following:\n#\n# Directories and/or file lists available for this configuration\nclasspath=\n#\n# Native libraries\nlibrary.path=\n#\n# Classes or packages to ignore from the Hadoop configuration directory\nignore.classes=org.apache.derby.iapi.services\nmr1.java.system.hadoop.cluster.path.separator=:\n\n#\n#\n# SECURITY CONFIGURATIONS\n#\n# Kerberos Authentication\npentaho.authentication.default.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following: \npentaho.authentication.default.kerberos.keytabLocation=\npentaho.authentication.default.kerberos.password=\n#\n# Secure Impersonation\n# Please choose one of the following:\n#\n# disabled - when using an unsecured cluster\n# simple - when using a 1 to 1 mapping from the server to your cluster\npentaho.authentication.default.mapping.impersonation.type=disabled\npentaho.authentication.default.mapping.server.credentials.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following: \npentaho.authentication.default.mapping.server.credentials.kerberos.keytabLocation=\npentaho.authentication.default.mapping.server.credentials.kerberos.password=\n#\n\n#\n#\n# OOZIE\npentaho.oozie.proxy.user=oozie\n#"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/resources/apachevanillasampleconfig.properties",
    "content": "#\n#  HITACHI VANTARA PROPRIETARY AND CONFIDENTIAL\n#\n#  Copyright 2024 Hitachi Vantara. All rights reserved.\n#\n#  NOTICE: All information including source code contained herein is, and\n#  remains the sole property of Hitachi Vantara and its licensors. The intellectual\n#  and technical concepts contained herein are proprietary and confidential\n#  to, and are trade secrets of Hitachi Vantara and may be covered by U.S. and foreign\n#  patents, or patents in process, and are protected by trade secret and\n#  copyright laws. The receipt or possession of this source code and/or related\n#  information does not convey or imply any rights to reproduce, disclose or\n#  distribute its contents, or to manufacture, use, or sell anything that it\n#  may describe, in whole or in part. Any reproduction, modification, distribution,\n#  or public display of this information without the express written authorization\n#  from Hitachi Vantara is strictly prohibited and in violation of applicable laws and\n#  international treaties. 
Access to the source code contained herein is strictly\n#  prohibited to anyone except those individuals and entities who have executed\n#  confidentiality and non-disclosure agreements or other agreements with Hitachi Vantara,\n#  explicitly covering such access.\n#\n# ADDITIONAL RESOURCES\n# For additional questions please visit help.pentaho.com\n# Search for impersonation or secure impersonation\n#\n\n#\n#\n# THE NAME OF YOUR CONFIGURATION\nname= Apache Vanilla 3.3.0\n#\n\n#\n#\n# GENERAL CONFIGURATIONS\n# These  are comma-separated lists of the following:\n#\n# Directories and/or file lists available for this configuration\nclasspath=\n#\n# Native libraries\nlibrary.path=\n#\n# Classes or packages to ignore from the Hadoop configuration directory\nignore.classes=org.apache.derby.iapi.services\nmr1.java.system.hadoop.cluster.path.separator=:\n\n#\n#\n# SECURITY CONFIGURATIONS\n#\n# Kerberos Authentication\npentaho.authentication.default.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following:\npentaho.authentication.default.kerberos.keytabLocation=\npentaho.authentication.default.kerberos.password=\n#\n# Secure Impersonation\n# Please choose one of the following:\n#\n# disabled - when using an unsecured cluster\n# simple - when using a 1 to 1 mapping from the server to your cluster\npentaho.authentication.default.mapping.impersonation.type=disabled\npentaho.authentication.default.mapping.server.credentials.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following:\npentaho.authentication.default.mapping.server.credentials.kerberos.keytabLocation=\npentaho.authentication.default.mapping.server.credentials.kerberos.password=\n#\n\n#\n#\n# OOZIE\npentaho.oozie.proxy.user=oozie\n#"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/resources/cdpdc71sampleconfig.properties",
    "content": "#\n#  HITACHI VANTARA PROPRIETARY AND CONFIDENTIAL\n#\n#  Copyright 2007 - 2022 Hitachi Vantara. All rights reserved.\n#\n#  NOTICE: All information including source code contained herein is, and\n#  remains the sole property of Hitachi Vantara and its licensors. The intellectual\n#  and technical concepts contained herein are proprietary and confidential\n#  to, and are trade secrets of Hitachi Vantara and may be covered by U.S. and foreign\n#  patents, or patents in process, and are protected by trade secret and\n#  copyright laws. The receipt or possession of this source code and/or related\n#  information does not convey or imply any rights to reproduce, disclose or\n#  distribute its contents, or to manufacture, use, or sell anything that it\n#  may describe, in whole or in part. Any reproduction, modification, distribution,\n#  or public display of this information without the express written authorization\n#  from Hitachi Vantara is strictly prohibited and in violation of applicable laws and\n#  international treaties. 
Access to the source code contained herein is strictly\n#  prohibited to anyone except those individuals and entities who have executed\n#  confidentiality and non-disclosure agreements or other agreements with Hitachi Vantara,\n#  explicitly covering such access.\n#\n# ADDITIONAL RESOURCES\n# For additional questions please visit help.pentaho.com\n# Search for impersonation or secure impersonation\n#\n\n#\n#\n# THE NAME OF YOUR CONFIGURATION\nname= Cloudera Data Platform(CDP) 7.1\n#\n\n#\n#\n# GENERAL CONFIGURATIONS\n# These  are comma-separated lists of the following:\n#\n# Directories and/or file lists available for this configuration\nclasspath=\n#\n# Native libraries\nlibrary.path=\n#\n# Classes or packages to ignore from the Hadoop configuration directory\nignore.classes=org.apache.derby.iapi.services\nmr1.java.system.hadoop.cluster.path.separator=:\n\n#\n#\n# SECURITY CONFIGURATIONS\n#\n# Kerberos Authentication\npentaho.authentication.default.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following:\npentaho.authentication.default.kerberos.keytabLocation=\npentaho.authentication.default.kerberos.password=\n#\n# Secure Impersonation\n# Please choose one of the following:\n#\n# disabled - when using an unsecured cluster\n# simple - when using a 1 to 1 mapping from the server to your cluster\npentaho.authentication.default.mapping.impersonation.type=disabled\npentaho.authentication.default.mapping.server.credentials.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following:\npentaho.authentication.default.mapping.server.credentials.kerberos.keytabLocation=\npentaho.authentication.default.mapping.server.credentials.kerberos.password=\n#\n\n#\n#\n# OOZIE\npentaho.oozie.proxy.user=oozie\n#"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/resources/dataproc1421sampleconfig.properties",
    "content": "#\n#  HITACHI VANTARA PROPRIETARY AND CONFIDENTIAL\n#\n#  Copyright 2007 - 2022 Hitachi Vantara. All rights reserved.\n#\n#  NOTICE: All information including source code contained herein is, and\n#  remains the sole property of Hitachi Vantara and its licensors. The intellectual\n#  and technical concepts contained herein are proprietary and confidential\n#  to, and are trade secrets of Hitachi Vantara and may be covered by U.S. and foreign\n#  patents, or patents in process, and are protected by trade secret and\n#  copyright laws. The receipt or possession of this source code and/or related\n#  information does not convey or imply any rights to reproduce, disclose or\n#  distribute its contents, or to manufacture, use, or sell anything that it\n#  may describe, in whole or in part. Any reproduction, modification, distribution,\n#  or public display of this information without the express written authorization\n#  from Hitachi Vantara is strictly prohibited and in violation of applicable laws and\n#  international treaties. 
Access to the source code contained herein is strictly\n#  prohibited to anyone except those individuals and entities who have executed\n#  confidentiality and non-disclosure agreements or other agreements with Hitachi Vantara,\n#  explicitly covering such access.\n#\n# ADDITIONAL RESOURCES\n# For additional questions please visit help.pentaho.com\n# Search for impersonation or secure impersonation\n#\n\n#\n#\n# THE NAME OF YOUR CONFIGURATION\nname=Google Dataproc 1.4\n#\n\n#\n#\n# GENERAL CONFIGURATIONS\n# These  are comma-separated lists of the following:\n#\n# Directories and/or file lists available for this configuration\nclasspath=\n#\n# Native libraries\nlibrary.path=\n#\n# Classes or packages to ignore from the Hadoop configuration directory\nignore.classes=java.security.Permission,org.apache.derby\nmr1.java.system.hadoop.cluster.path.separator=:\n\n\n\n#\n#\n# SECURITY CONFIGURATIONS\n#\n# Kerberos Authentication\npentaho.authentication.default.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following:\npentaho.authentication.default.kerberos.keytabLocation=\npentaho.authentication.default.kerberos.password=\n#\n# Secure Impersonation\n# Please choose one of the following:\n#\n# disabled - when using an unsecured cluster\n# simple - when using a 1 to 1 mapping from the server to your cluster\npentaho.authentication.default.mapping.impersonation.type=disabled\npentaho.authentication.default.mapping.server.credentials.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following:\npentaho.authentication.default.mapping.server.credentials.kerberos.keytabLocation=\npentaho.authentication.default.mapping.server.credentials.kerberos.password=\n#\n\n#\n#\n# OOZIE\npentaho.oozie.proxy.user=oozie\n#"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/resources/dataproc23sampleconfig.properties",
    "content": "#\n#  HITACHI VANTARA PROPRIETARY AND CONFIDENTIAL\n#\n#  Copyright 2007 - 2022 Hitachi Vantara. All rights reserved.\n#\n#  NOTICE: All information including source code contained herein is, and\n#  remains the sole property of Hitachi Vantara and its licensors. The intellectual\n#  and technical concepts contained herein are proprietary and confidential\n#  to, and are trade secrets of Hitachi Vantara and may be covered by U.S. and foreign\n#  patents, or patents in process, and are protected by trade secret and\n#  copyright laws. The receipt or possession of this source code and/or related\n#  information does not convey or imply any rights to reproduce, disclose or\n#  distribute its contents, or to manufacture, use, or sell anything that it\n#  may describe, in whole or in part. Any reproduction, modification, distribution,\n#  or public display of this information without the express written authorization\n#  from Hitachi Vantara is strictly prohibited and in violation of applicable laws and\n#  international treaties. 
Access to the source code contained herein is strictly\n#  prohibited to anyone except those individuals and entities who have executed\n#  confidentiality and non-disclosure agreements or other agreements with Hitachi Vantara,\n#  explicitly covering such access.\n#\n# ADDITIONAL RESOURCES\n# For additional questions please visit help.pentaho.com\n# Search for impersonation or secure impersonation\n#\n\n#\n#\n# THE NAME OF YOUR CONFIGURATION\nname=Google Dataproc 2.3\n#\n\n#\n#\n# GENERAL CONFIGURATIONS\n# These  are comma-separated lists of the following:\n#\n# Directories and/or file lists available for this configuration\nclasspath=\n#\n# Native libraries\nlibrary.path=\n#\n# Classes or packages to ignore from the Hadoop configuration directory\nignore.classes=java.security.Permission,org.apache.derby\nmr1.java.system.hadoop.cluster.path.separator=:\n\n\n\n#\n#\n# SECURITY CONFIGURATIONS\n#\n# Kerberos Authentication\npentaho.authentication.default.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following:\npentaho.authentication.default.kerberos.keytabLocation=\npentaho.authentication.default.kerberos.password=\n#\n# Secure Impersonation\n# Please choose one of the following:\n#\n# disabled - when using an unsecured cluster\n# simple - when using a 1 to 1 mapping from the server to your cluster\npentaho.authentication.default.mapping.impersonation.type=disabled\npentaho.authentication.default.mapping.server.credentials.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following:\npentaho.authentication.default.mapping.server.credentials.kerberos.keytabLocation=\npentaho.authentication.default.mapping.server.credentials.kerberos.password=\n#\n\n#\n#\n# OOZIE\npentaho.oozie.proxy.user=oozie\n#"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/resources/emr521sampleconfig.properties",
    "content": "#\n#  HITACHI VANTARA PROPRIETARY AND CONFIDENTIAL\n#\n#  Copyright 2007 - 2022 Hitachi Vantara. All rights reserved.\n#\n#  NOTICE: All information including source code contained herein is, and\n#  remains the sole property of Hitachi Vantara and its licensors. The intellectual\n#  and technical concepts contained herein are proprietary and confidential\n#  to, and are trade secrets of Hitachi Vantara and may be covered by U.S. and foreign\n#  patents, or patents in process, and are protected by trade secret and\n#  copyright laws. The receipt or possession of this source code and/or related\n#  information does not convey or imply any rights to reproduce, disclose or\n#  distribute its contents, or to manufacture, use, or sell anything that it\n#  may describe, in whole or in part. Any reproduction, modification, distribution,\n#  or public display of this information without the express written authorization\n#  from Hitachi Vantara is strictly prohibited and in violation of applicable laws and\n#  international treaties. 
Access to the source code contained herein is strictly\n#  prohibited to anyone except those individuals and entities who have executed\n#  confidentiality and non-disclosure agreements or other agreements with Hitachi Vantara,\n#  explicitly covering such access.\n#\n# ADDITIONAL RESOURCES\n# For additional questions please visit help.pentaho.com\n# Search for impersonation or secure impersonation\n#\n\n#\n#\n# THE NAME OF YOUR CONFIGURATION\nname=Amazon EMR 5.21\n#\n\n#\n#\n# GENERAL CONFIGURATIONS\n# These  are comma-separated lists of the following:\n#\n# Directories and/or file lists available for this configuration\nclasspath=lib/avro-1.8.0.jar\n#\n# Native libraries\nlibrary.path=\n#\n# Comma-separated list of classes or package names to explicitly ignore when\n# loading classes from the resources within this Hadoop configuration directory\n# or the classpath property\n# e.g.: org.apache.commons.log,org.apache.log4j\n# Note, the two packages above are automatically included for all configurations\nignore.classes=com.ctc.wstx.stax\n#\n\n#\n#\n# SECURITY CONFIGURATIONS\n#\n# Kerberos Authentication\npentaho.authentication.default.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following:\npentaho.authentication.default.kerberos.keytabLocation=\npentaho.authentication.default.kerberos.password=\n#\n# Secure Impersonation\n# Please choose one of the following:\n#\n# disabled - when using an unsecured cluster\n# simple - when using a 1 to 1 mapping from the server to your cluster\npentaho.authentication.default.mapping.impersonation.type=disabled\npentaho.authentication.default.mapping.server.credentials.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following:\npentaho.authentication.default.mapping.server.credentials.kerberos.keytabLocation=\npentaho.authentication.default.mapping.server.credentials.kerberos.password=\n#\n\n#\n#\n# OOZIE\npentaho.oozie.proxy.user=oozie\n#"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/resources/emr770sampleconfig.properties",
    "content": "#\n#  HITACHI VANTARA PROPRIETARY AND CONFIDENTIAL\n#\n#  Copyright 2025 Hitachi Vantara. All rights reserved.\n#\n#  NOTICE: All information including source code contained herein is, and\n#  remains the sole property of Hitachi Vantara and its licensors. The intellectual\n#  and technical concepts contained herein are proprietary and confidential\n#  to, and are trade secrets of Hitachi Vantara and may be covered by U.S. and foreign\n#  patents, or patents in process, and are protected by trade secret and\n#  copyright laws. The receipt or possession of this source code and/or related\n#  information does not convey or imply any rights to reproduce, disclose or\n#  distribute its contents, or to manufacture, use, or sell anything that it\n#  may describe, in whole or in part. Any reproduction, modification, distribution,\n#  or public display of this information without the express written authorization\n#  from Hitachi Vantara is strictly prohibited and in violation of applicable laws and\n#  international treaties. 
Access to the source code contained herein is strictly\n#  prohibited to anyone except those individuals and entities who have executed\n#  confidentiality and non-disclosure agreements or other agreements with Hitachi Vantara,\n#  explicitly covering such access.\n#\n# ADDITIONAL RESOURCES\n# For additional questions please visit help.pentaho.com\n# Search for impersonation or secure impersonation\n#\n\n#\n#\n# THE NAME OF YOUR CONFIGURATION\nname=Amazon EMR 7.7\n#\n\n#\n#\n# GENERAL CONFIGURATIONS\n# These  are comma-separated lists of the following:\n#\n# Directories and/or file lists available for this configuration\nclasspath=lib/avro-1.8.0.jar\n#\n# Native libraries\nlibrary.path=\n#\n# Comma-separated list of classes or package names to explicitly ignore when\n# loading classes from the resources within this Hadoop configuration directory\n# or the classpath property\n# e.g.: org.apache.commons.log,org.apache.log4j\n# Note, the two packages above are automatically included for all configurations\nignore.classes=com.ctc.wstx.stax\n#\n\n#\n#\n# SECURITY CONFIGURATIONS\n#\n# Kerberos Authentication\npentaho.authentication.default.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following:\npentaho.authentication.default.kerberos.keytabLocation=\npentaho.authentication.default.kerberos.password=\n#\n# Secure Impersonation\n# Please choose one of the following:\n#\n# disabled - when using an unsecured cluster\n# simple - when using a 1 to 1 mapping from the server to your cluster\npentaho.authentication.default.mapping.impersonation.type=disabled\npentaho.authentication.default.mapping.server.credentials.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following:\npentaho.authentication.default.mapping.server.credentials.kerberos.keytabLocation=\npentaho.authentication.default.mapping.server.credentials.kerberos.password=\n#\n\n#\n#\n# OOZIE\npentaho.oozie.proxy.user=oozie\n#\n"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/resources/hdi40sampleconfig.properties",
    "content": "#\n#  HITACHI VANTARA PROPRIETARY AND CONFIDENTIAL\n#\n#  Copyright 2007 - 2022 Hitachi Vantara. All rights reserved.\n#\n#  NOTICE: All information including source code contained herein is, and\n#  remains the sole property of Hitachi Vantara and its licensors. The intellectual\n#  and technical concepts contained herein are proprietary and confidential\n#  to, and are trade secrets of Hitachi Vantara and may be covered by U.S. and foreign\n#  patents, or patents in process, and are protected by trade secret and\n#  copyright laws. The receipt or possession of this source code and/or related\n#  information does not convey or imply any rights to reproduce, disclose or\n#  distribute its contents, or to manufacture, use, or sell anything that it\n#  may describe, in whole or in part. Any reproduction, modification, distribution,\n#  or public display of this information without the express written authorization\n#  from Hitachi Vantara is strictly prohibited and in violation of applicable laws and\n#  international treaties. 
Access to the source code contained herein is strictly\n#  prohibited to anyone except those individuals and entities who have executed\n#  confidentiality and non-disclosure agreements or other agreements with Hitachi Vantara,\n#  explicitly covering such access.\n#\n# ADDITIONAL RESOURCES\n# For additional questions please visit help.pentaho.com\n# Search for impersonation or secure impersonation\n#\n\n#\n#\n# THE NAME OF YOUR CONFIGURATION\nname=Azure HDInsights 4.0\n#\n\n#\n#\n# GENERAL CONFIGURATIONS\n# These  are comma-separated lists of the following:\n#\n# Directories and/or file lists available for this configuration\nclasspath=\n#\n# Native libraries\nlibrary.path=\n#\n# Classes or packages to ignore from the Hadoop configuration directory\nignore.classes=java.security.Permission,org.apache.derby\nmr1.java.system.hadoop.cluster.path.separator=:\n\n\n\n#\n#\n# SECURITY CONFIGURATIONS\n#\n# Kerberos Authentication\npentaho.authentication.default.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following:\npentaho.authentication.default.kerberos.keytabLocation=\npentaho.authentication.default.kerberos.password=\n#\n# Secure Impersonation\n# Please choose one of the following:\n#\n# disabled - when using an unsecured cluster\n# simple - when using a 1 to 1 mapping from the server to your cluster\npentaho.authentication.default.mapping.impersonation.type=disabled\npentaho.authentication.default.mapping.server.credentials.kerberos.principal=exampleUser@EXAMPLE.COM\n#\n# Please define one of the following:\npentaho.authentication.default.mapping.server.credentials.kerberos.keytabLocation=\npentaho.authentication.default.mapping.server.credentials.kerberos.password=\n#\n\n#\n#\n# OOZIE\npentaho.oozie.proxy.user=oozie\n#"
  },
  {
    "path": "kettle-plugins/common/ui/src/main/resources/org/pentaho/big/data/plugins/common/ui/messages/messages_en_US.properties",
    "content": "NamedClusterDialog.Shell.Title=Hadoop Cluster\nNamedClusterDialog.NamedCluster.Configuration=Configuration\nNamedClusterDialog.NamedCluster.Type=Type\nNamedClusterDialog.NamedCluster.Name=Cluster name:\nNamedClusterDialog.NamedCluster.DisplayName=Display Name\n\nNamedClusterWidget.NamedCluster.New=New...\nNamedClusterWidget.NamedCluster.Edit=Edit...\n\nNamedClusterDialog.NamedCluster.GatewayCheckBoxTitle=Use a gateway to connect to the cluster\nNamedClusterDialog.HDFS=HDFS\nNamedClusterDialog.ZooKeeper=ZooKeeper\nNamedClusterDialog.JobTracker=JobTracker\nNamedClusterDialog.Oozie=Oozie\nNamedClusterDialog.Gateway=Gateway\nNamedClusterDialog.URL=URL:\nNamedClusterDialog.Port=Port:\nNamedClusterDialog.Hostname=Hostname:\nNamedClusterDialog.Username=Username:\nNamedClusterDialog.Password=Password:\nNamedClusterDialog.Storage=Storage:\nNamedClusterDialog.GatewayUrl=URL:\n\nNamedClusterDialog.Kafka.GroupTitle=Kafka\nNamedClusterDialog.Kafka.BootstrapServers.Label=Bootstrap servers:\n\nNamedClusterDialog.Error=Error\nNamedClusterDialog.Warning=Warning\nNamedClusterDialog.ClusterNameMissing=You must enter a Hadoop cluster name to continue.\nNamedClusterDialog.ShimIdentifierMissing=You must select a Vendor shim to continue.\nNamedClusterDialog.ClusterNameExists.Title=Hadoop Cluster Exists\nNamedClusterDialog.ClusterNameExists=Hadoop Cluster {0} already exists. 
Do you want to replace it with this one?\nNamedClusterDialog.ClusterNameExists.Replace=Yes, Replace\nNamedClusterDialog.ClusterNameExists.DoNotReplace=No\nNamedClusterDialog.HadoopClusters=Hadoop clusters\n\nNamedClusterDialog.NamedCluster.IsMapR=Use MapR client\nNamedClusterDialog.NamedCluster.IsMapR.Title=Select if this configuration is for a MapR cluster\n\nNamedClusterDialog.Shell.Doc=Data/Hadoop/Connect_to_Cluster\n\nNamedClusterDialog.DialogError=Error opening dialog\n\nClusterTestDialog.Title=Hadoop Cluster Test\nClusterTestDialog.ClusterTest.Label=Testing Hadoop Cluster\nClusterTestDialog.ModuleTest=Cluster Test: {0}\nClusterTestDialog.TestResult=\\t{0}: {1} {2}\nClusterTestDialog.TestsFinished=Tests Finished!\nClusterTestDialog.FailedToOpen=Failed to open the Cluster Test Dialog\n\nClusterTestResultsDialog.Title=Hadoop Cluster Test\nClusterTestResultsDialog.ClusterTestResults.Label=Results\nClusterTestResultsDialog.Shell.Doc.Title=Hadoop Cluster Test\nClusterTestResultsDialog.Shell.Doc.Header=Hadoop Cluster Test details\nClusterTestResultsDialog.FailedToOpen=Failed to open the Cluster Test Results Dialog\n\nSpoon.Dialog.ErrorAddingNewConfigurationForCluster.Title=Error\nSpoon.Dialog.ErrorAddingNewConfigurationForCluster.Message=Something went wrong trying to add the new configuration for the cluster: {0}\nSpoon.Dialog.ErrorRenamingPreviousClusterConfig.Title=Error\nSpoon.Dialog.ErrorRenamingPreviousClusterConfig.Message=Couldn't rename the previous shim configuration file\n"
  },
  {
    "path": "kettle-plugins/common/ui/src/test/java/org/pentaho/big/data/plugins/common/ui/HadoopClusterDelegateImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.plugins.common.ui;\n\nimport org.eclipse.swt.widgets.Shell;\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.metastore.stores.delegate.DelegatingMetaStore;\nimport org.pentaho.metastore.stores.xml.XmlMetaStore;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertNull;\nimport static org.mockito.Matchers.any;\nimport static org.mockito.Mockito.doThrow;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.never;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.verifyNoMoreInteractions;\nimport static org.mockito.Mockito.when;\nimport static org.pentaho.big.data.plugins.common.ui.HadoopClusterDelegateImpl.PKG;\nimport static 
org.pentaho.big.data.plugins.common.ui.HadoopClusterDelegateImpl\n  .SPOON_DIALOG_ERROR_SAVING_NAMED_CLUSTER_MESSAGE;\nimport static org.pentaho.big.data.plugins.common.ui.HadoopClusterDelegateImpl\n  .SPOON_DIALOG_ERROR_SAVING_NAMED_CLUSTER_TITLE;\n\n/**\n * Created by bryan on 10/19/15.\n */\npublic class HadoopClusterDelegateImplTest {\n  private Spoon spoon;\n  private NamedClusterService namedClusterService;\n  private RuntimeTestActionService runtimeTestActionService;\n  private RuntimeTester runtimeTester;\n  private HadoopClusterDelegateImpl hadoopClusterDelegate;\n  private IMetaStore metaStore;\n  private NamedCluster namedCluster;\n  private String namedClusterName;\n  private CommonDialogFactory commonDialogFactory;\n  private Shell shell;\n  private VariableSpace variables;\n  private Path tempDirectoryName;\n\n  @Before\n  public void setup() throws IOException {\n    spoon = mock( Spoon.class );\n    shell = mock( Shell.class );\n    when( spoon.getShell() ).thenReturn( shell );\n    namedClusterService = mock( NamedClusterService.class );\n    runtimeTestActionService = mock( RuntimeTestActionService.class );\n    runtimeTester = mock( RuntimeTester.class );\n    metaStore = mock( IMetaStore.class );\n    namedCluster = mock( NamedCluster.class );\n    variables = new Variables();\n    namedClusterName = \"namedClusterName\";\n    when( namedCluster.getName() ).thenReturn( namedClusterName );\n    commonDialogFactory = mock( CommonDialogFactory.class );\n    hadoopClusterDelegate =\n      new HadoopClusterDelegateImpl( spoon, namedClusterService, runtimeTestActionService, runtimeTester,\n        commonDialogFactory );\n    // avoid putting test data in the local user's metastore\n    tempDirectoryName = Files.createTempDirectory( this.getClass().getName() );\n    System.setProperty( \"user.home\", tempDirectoryName.toString() );\n    String configurationDirectory =\n      System.getProperty( \"user.home\" ) + File.separator + \".pentaho\" + 
File.separator + \"metastore\" + File.separator\n        + \"pentaho\" + File.separator + \"NamedCluster\" + File.separator + \"Configs\";\n    Files.createDirectories( Paths.get( configurationDirectory + \"/\" + namedClusterName ) );\n  }\n\n  @After\n  public void tearDown() {\n    deleteDirectory( tempDirectoryName.toFile() );\n  }\n\n  private boolean deleteDirectory( File directoryToBeDeleted) {\n    File[] allContents = directoryToBeDeleted.listFiles();\n    if (allContents != null) {\n      for (File file : allContents) {\n        deleteDirectory(file);\n      }\n    }\n    return directoryToBeDeleted.delete();\n  }\n\n  @Test\n  public void testSimpleConstructor() {\n    assertNotNull(\n      new HadoopClusterDelegateImpl( spoon, namedClusterService, runtimeTestActionService, runtimeTester ) );\n  }\n\n  @Test\n  public void testDupeNamedClusterNullNc() {\n    hadoopClusterDelegate.dupeNamedCluster( metaStore, null, shell );\n    verifyNoMoreInteractions( metaStore, shell );\n  }\n\n  @Test\n  public void testDupeNamedClusterNullNewName() {\n    NamedClusterDialogImpl namedClusterDialog = mock( NamedClusterDialogImpl.class );\n    NamedCluster clonedNamedCluster = mock( NamedCluster.class );\n    when( namedCluster.clone() ).thenReturn( clonedNamedCluster );\n    when( commonDialogFactory\n      .createNamedClusterDialog( shell, namedClusterService, runtimeTestActionService, runtimeTester,\n        clonedNamedCluster ) ).thenReturn( namedClusterDialog );\n    when( namedClusterDialog.open() ).thenReturn( null );\n\n    hadoopClusterDelegate.dupeNamedCluster( metaStore, namedCluster, shell );\n\n    verify( namedClusterDialog ).setNewClusterCheck( true );\n    verify( clonedNamedCluster ).setName(\n      BaseMessages.getString( Spoon.class, HadoopClusterDelegateImpl.SPOON_VARIOUS_DUPE_NAME ) + namedClusterName );\n    verifyNoMoreInteractions( metaStore );\n  }\n\n  @Test\n  public void testDupeNamedClusterNullMetastore() throws MetaStoreException, 
IOException {\n    String newName = \"newName\";\n    NamedClusterDialogImpl namedClusterDialog = mock( NamedClusterDialogImpl.class );\n    NamedCluster clonedNamedCluster = mock( NamedCluster.class );\n    DelegatingMetaStore spoonMetastore = mock( DelegatingMetaStore.class );\n    XmlMetaStore xmlMetaStore = mock( XmlMetaStore.class );\n    when( spoon.getMetaStore() ).thenReturn( spoonMetastore );\n    when( spoonMetastore.getActiveMetaStore() ).thenReturn( xmlMetaStore );\n    when( namedCluster.clone() ).thenReturn( clonedNamedCluster );\n    when( namedCluster.getShimIdentifier() ).thenReturn( \"oldShimId\" );\n    when( clonedNamedCluster.getShimIdentifier() ).thenReturn( \"shimId\" );\n    when( commonDialogFactory\n      .createNamedClusterDialog( shell, namedClusterService, runtimeTestActionService, runtimeTester,\n        clonedNamedCluster ) ).thenReturn( namedClusterDialog );\n    when( namedClusterDialog.open() ).thenReturn( newName );\n\n    hadoopClusterDelegate.dupeNamedCluster( null, namedCluster, shell );\n\n    verify( namedClusterDialog ).setNewClusterCheck( true );\n    verify( clonedNamedCluster ).setName(\n      BaseMessages.getString( Spoon.class, HadoopClusterDelegateImpl.SPOON_VARIOUS_DUPE_NAME ) + namedClusterName );\n    verify( namedClusterService ).create( clonedNamedCluster, spoonMetastore );\n    verify( spoon ).refreshTree( HadoopClusterDelegateImpl.STRING_NAMED_CLUSTERS );\n  }\n\n  @Test\n  public void testDelNamedCluster() throws MetaStoreException {\n    when( namedClusterService.read( namedClusterName, metaStore ) ).thenReturn( namedCluster );\n    hadoopClusterDelegate.delNamedCluster( metaStore, namedCluster );\n    verify( namedClusterService ).delete( namedClusterName, metaStore );\n    verify( spoon ).refreshTree( HadoopClusterDelegateImpl.STRING_NAMED_CLUSTERS );\n    verify( spoon ).setShellText();\n  }\n\n  @Test\n  public void testDelNamedClusterNull() throws MetaStoreException {\n    when( namedClusterService.read( 
namedClusterName, metaStore ) ).thenReturn( null );\n    hadoopClusterDelegate.delNamedCluster( metaStore, namedCluster );\n    verify( namedClusterService, never() ).delete( namedClusterName, metaStore );\n    verify( spoon ).refreshTree( HadoopClusterDelegateImpl.STRING_NAMED_CLUSTERS );\n    verify( spoon ).setShellText();\n  }\n\n  @Test\n  public void testDelNamedClusterNullMetastore() throws MetaStoreException {\n    DelegatingMetaStore metaStore2 = mock( DelegatingMetaStore.class );\n    when( spoon.getMetaStore() ).thenReturn( metaStore2 );\n    when( namedClusterService.read( namedClusterName, metaStore2 ) ).thenReturn( namedCluster );\n    hadoopClusterDelegate.delNamedCluster( null, namedCluster );\n    verify( namedClusterService ).delete( namedClusterName, metaStore2 );\n    verify( spoon ).refreshTree( HadoopClusterDelegateImpl.STRING_NAMED_CLUSTERS );\n    verify( spoon ).setShellText();\n  }\n\n  @Test\n  public void testDelNamedClusterException() throws MetaStoreException {\n    when( namedClusterService.read( namedClusterName, metaStore ) ).thenReturn( namedCluster );\n    MetaStoreException metaStoreException = new MetaStoreException();\n    doThrow( metaStoreException ).when( namedClusterService ).delete( namedClusterName, metaStore );\n    hadoopClusterDelegate.delNamedCluster( metaStore, namedCluster );\n    verify( commonDialogFactory ).createErrorDialog( shell, BaseMessages.getString( PKG,\n      HadoopClusterDelegateImpl.SPOON_DIALOG_ERROR_DELETING_NAMED_CLUSTER_TITLE ), BaseMessages\n        .getString( PKG,\n          HadoopClusterDelegateImpl.SPOON_DIALOG_ERROR_DELETING_NAMED_CLUSTER_MESSAGE, namedClusterName ),\n      metaStoreException );\n    verify( spoon ).refreshTree( HadoopClusterDelegateImpl.STRING_NAMED_CLUSTERS );\n    verify( spoon ).setShellText();\n  }\n\n  @Test\n  public void testEditNamedClusterNullMetastore() throws MetaStoreException {\n    DelegatingMetaStore spoonMetastore = mock( DelegatingMetaStore.class );\n    
when( spoon.getMetaStore() ).thenReturn( spoonMetastore );\n\n    NamedClusterDialogImpl namedClusterDialog = mock( NamedClusterDialogImpl.class );\n    NamedCluster clonedNamedCluster = mock( NamedCluster.class );\n    when( namedClusterService.read( namedClusterName, spoonMetastore ) ).thenReturn( namedCluster );\n    when( namedCluster.clone() ).thenReturn( clonedNamedCluster );\n    String shimId = \"shimId\";\n    when( namedCluster.getShimIdentifier() ).thenReturn( shimId );\n    when( commonDialogFactory\n      .createNamedClusterDialog( shell, namedClusterService, runtimeTestActionService, runtimeTester,\n        clonedNamedCluster ) ).thenReturn( namedClusterDialog );\n    String clonedName = \"clonedName\";\n    when( clonedNamedCluster.getName() ).thenReturn( clonedName );\n    when( clonedNamedCluster.getShimIdentifier() ).thenReturn( shimId );\n    when( namedClusterDialog.open() ).thenReturn( clonedName );\n    when( namedClusterDialog.getNamedCluster() ).thenReturn( clonedNamedCluster );\n\n    assertEquals( clonedName, hadoopClusterDelegate.editNamedCluster( null, namedCluster, shell ) );\n\n    verify( namedClusterDialog ).setNewClusterCheck( false );\n    verify( spoon ).refreshTree( HadoopClusterDelegateImpl.STRING_NAMED_CLUSTERS );\n    verify( namedClusterService ).create( clonedNamedCluster, spoonMetastore );\n  }\n\n  @Test\n  public void testEditNamedClusterNull() throws MetaStoreException {\n    NamedClusterDialogImpl namedClusterDialog = mock( NamedClusterDialogImpl.class );\n    NamedCluster clonedNamedCluster = mock( NamedCluster.class );\n    when( namedClusterService.read( namedClusterName, metaStore ) ).thenReturn( namedCluster );\n    when( namedCluster.clone() ).thenReturn( clonedNamedCluster );\n    when( commonDialogFactory\n      .createNamedClusterDialog( shell, namedClusterService, runtimeTestActionService, runtimeTester,\n        clonedNamedCluster ) ).thenReturn( namedClusterDialog );\n    when( namedClusterDialog.open() 
).thenReturn( null );\n\n    hadoopClusterDelegate.editNamedCluster( metaStore, namedCluster, shell );\n\n    verify( namedClusterDialog ).setNewClusterCheck( false );\n    verifyNoMoreInteractions( namedClusterService );\n  }\n\n  @Test\n  public void testNewNamedClusterNullMetastore() throws MetaStoreException {\n    DelegatingMetaStore spoonMetastore = mock( DelegatingMetaStore.class );\n    when( spoon.getMetaStore() ).thenReturn( spoonMetastore );\n\n    when( namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n    NamedClusterDialogImpl namedClusterDialog = mock( NamedClusterDialogImpl.class );\n    when( commonDialogFactory\n      .createNamedClusterDialog( shell, namedClusterService, runtimeTestActionService, runtimeTester,\n        namedCluster ) ).thenReturn( namedClusterDialog );\n    when( namedClusterDialog.open() ).thenReturn( namedClusterName );\n\n    assertEquals( namedClusterName, hadoopClusterDelegate.newNamedCluster( variables, null, shell ) );\n\n    verify( namedClusterDialog ).setNewClusterCheck( true );\n    verify( namedCluster ).shareVariablesWith( variables );\n    verify( namedClusterService ).create( namedCluster, spoonMetastore );\n    verify( spoon ).refreshTree( HadoopClusterDelegateImpl.STRING_NAMED_CLUSTERS );\n  }\n\n  @Test\n  public void testNewNamedClusterNullVariables() throws MetaStoreException {\n    when( namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n    NamedClusterDialogImpl namedClusterDialog = mock( NamedClusterDialogImpl.class );\n    when( commonDialogFactory\n      .createNamedClusterDialog( shell, namedClusterService, runtimeTestActionService, runtimeTester,\n        namedCluster ) ).thenReturn( namedClusterDialog );\n    when( namedClusterDialog.open() ).thenReturn( namedClusterName );\n\n    assertEquals( namedClusterName, hadoopClusterDelegate.newNamedCluster( null, metaStore, shell ) );\n\n    verify( namedClusterDialog ).setNewClusterCheck( true );\n    verify( 
namedCluster ).initializeVariablesFrom( null );\n    verify( namedClusterService ).create( namedCluster, metaStore );\n    verify( spoon ).refreshTree( HadoopClusterDelegateImpl.STRING_NAMED_CLUSTERS );\n  }\n\n  @Test\n  public void testNewNamedClusterNullResult() throws MetaStoreException {\n    when( namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n    NamedClusterDialogImpl namedClusterDialog = mock( NamedClusterDialogImpl.class );\n    when( commonDialogFactory\n      .createNamedClusterDialog( shell, namedClusterService, runtimeTestActionService, runtimeTester,\n        namedCluster ) ).thenReturn( namedClusterDialog );\n    when( namedClusterDialog.open() ).thenReturn( null );\n\n    assertNull( hadoopClusterDelegate.newNamedCluster( null, metaStore, shell ) );\n\n    verify( namedClusterDialog ).setNewClusterCheck( true );\n    verify( namedClusterService, times( 0 ) ).create( any( NamedCluster.class ), any( IMetaStore.class ) );\n    verify( spoon, times( 0 ) ).refreshTree( HadoopClusterDelegateImpl.STRING_NAMED_CLUSTERS );\n  }\n\n  @Test\n  public void testNewNamedClusterErrorSaving() throws MetaStoreException {\n    when( namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n    NamedClusterDialogImpl namedClusterDialog = mock( NamedClusterDialogImpl.class );\n    when( commonDialogFactory\n      .createNamedClusterDialog( shell, namedClusterService, runtimeTestActionService, runtimeTester,\n        namedCluster ) ).thenReturn( namedClusterDialog );\n    when( namedClusterDialog.open() ).thenReturn( namedClusterName );\n    MetaStoreException metaStoreException = new MetaStoreException();\n    doThrow( metaStoreException ).when( namedClusterService ).create( namedCluster, metaStore );\n\n    hadoopClusterDelegate.newNamedCluster( variables, metaStore, shell );\n\n    verify( commonDialogFactory ).createErrorDialog( shell,\n      BaseMessages.getString( PKG, SPOON_DIALOG_ERROR_SAVING_NAMED_CLUSTER_TITLE ),\n      
BaseMessages.getString( PKG, SPOON_DIALOG_ERROR_SAVING_NAMED_CLUSTER_MESSAGE, namedCluster.getName() ),\n      metaStoreException );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/common/ui/src/test/java/org/pentaho/big/data/plugins/common/ui/TestClusterTestDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.plugins.common.ui;\n\nimport org.apache.commons.lang.exception.ExceptionUtils;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.ProgressBar;\nimport org.eclipse.swt.widgets.Shell;\nimport org.junit.AfterClass;\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Test;\nimport org.mockito.Mockito;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.logging.LogChannelInterfaceFactory;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.runtime.test.RuntimeTest;\nimport org.pentaho.runtime.test.RuntimeTestStatus;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.module.RuntimeTestModuleResults;\nimport org.pentaho.runtime.test.result.RuntimeTestEntrySeverity;\nimport org.pentaho.runtime.test.result.RuntimeTestResult;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\n\nimport java.util.ArrayList;\n\npublic class TestClusterTestDialog {\n\n  private static LogChannelInterfaceFactory oldLogChannelInterfaceFactory;\n  private static LogChannelInterface logChannelInterface;\n\n  private ClusterTestDialog testDialog;\n  private Shell parent;\n  private NamedCluster namedCluster;\n  private RuntimeTester runtimeTester;\n  private PropsUI props;\n\n  @BeforeClass\n  public static void beforeClass() {\n    KettleLogStore.init();\n    
oldLogChannelInterfaceFactory = KettleLogStore.getLogChannelInterfaceFactory();\n    setKettleLogFactoryWithMock();\n  }\n\n  public static void setKettleLogFactoryWithMock() {\n    LogChannelInterfaceFactory logChannelInterfaceFactory = Mockito.mock( LogChannelInterfaceFactory.class );\n    logChannelInterface = Mockito.mock( LogChannelInterface.class );\n    Mockito.when( logChannelInterfaceFactory.create( Mockito.any() ) ).thenReturn( logChannelInterface );\n    KettleLogStore.setLogChannelInterfaceFactory( logChannelInterfaceFactory );\n  }\n\n  @Before\n  public void setup() throws KettleException {\n    parent = Mockito.mock( Shell.class );\n    namedCluster = Mockito.mock( NamedCluster.class );\n    runtimeTester = Mockito.mock( RuntimeTester.class );\n    props = Mockito.mock( PropsUI.class );\n\n    testDialog = new ClusterTestDialog( parent, namedCluster, runtimeTester ) {\n\n      @Override\n      protected PropsUI getPropsUIInstance() {\n        return props;\n      }\n\n      @Override\n      public void dispose() {\n      }\n    };\n  }\n\n  @Test\n  public void testExceptionIsPrintedToLog() throws KettleException {\n    ProgressBar progressBar = Mockito.mock( ProgressBar.class );\n    RuntimeTestStatus clusterTestStatus = Mockito.mock( RuntimeTestStatus.class );\n    Label testLabel = Mockito.mock( Label.class );\n    RuntimeTestModuleResults runtimeTestModuleResults = Mockito.mock( RuntimeTestModuleResults.class );\n    RuntimeTestResult result = Mockito.mock( RuntimeTestResult.class );\n    RuntimeTestResultEntry entry = Mockito.mock( RuntimeTestResultEntry.class );\n    Exception exception = new Exception();\n\n    ArrayList<RuntimeTestModuleResults> results = new ArrayList<>();\n    results.add( runtimeTestModuleResults );\n    ArrayList<RuntimeTestResult> runtimeTestResults = new ArrayList<>();\n    runtimeTestResults.add( result );\n\n    Mockito.when( clusterTestStatus.getModuleResults() ).thenReturn( results );\n    Mockito.when( 
clusterTestStatus.isDone() ).thenReturn( true );\n    Mockito.when( runtimeTestModuleResults.getRuntimeTestResults() ).thenReturn( runtimeTestResults );\n    Mockito.when( result.getRuntimeTest() ).thenReturn( Mockito.mock( RuntimeTest.class ) );\n    Mockito.when( result.getRuntimeTestResultEntries() ).thenReturn( new ArrayList<>() );\n    Mockito.when( result.getOverallStatusEntry() ).thenReturn( entry );\n    Mockito.when( entry.getSeverity() ).thenReturn( RuntimeTestEntrySeverity.FATAL );\n    Mockito.when( entry.getException() ).thenReturn( exception );\n\n    testDialog.getRunnable( progressBar, clusterTestStatus, testLabel ).run();\n\n    Mockito.verify( logChannelInterface,  Mockito.times( 1 ) ).logBasic( ExceptionUtils.getStackTrace( exception ) );\n  }\n\n  @Test\n  public void testExceptionsArePrintedToLog() throws KettleException {\n    ProgressBar progressBar = Mockito.mock( ProgressBar.class );\n    RuntimeTestStatus clusterTestStatus = Mockito.mock( RuntimeTestStatus.class );\n    Label testLabel = Mockito.mock( Label.class );\n    RuntimeTestModuleResults runtimeTestModuleResults = Mockito.mock( RuntimeTestModuleResults.class );\n    RuntimeTestResult result = Mockito.mock( RuntimeTestResult.class );\n    RuntimeTestResultEntry entry = Mockito.mock( RuntimeTestResultEntry.class );\n    Exception exception = new Exception();\n\n    ArrayList<RuntimeTestModuleResults> results = new ArrayList<>();\n    results.add( runtimeTestModuleResults );\n    ArrayList<RuntimeTestResult> runtimeTestResults = new ArrayList<>();\n    runtimeTestResults.add( result );\n    ArrayList<RuntimeTestResultEntry> entries = new ArrayList<>();\n    entries.add( entry );\n    entries.add( entry );\n\n    Mockito.when( clusterTestStatus.getModuleResults() ).thenReturn( results );\n    Mockito.when( clusterTestStatus.isDone() ).thenReturn( true );\n    Mockito.when( runtimeTestModuleResults.getRuntimeTestResults() ).thenReturn( runtimeTestResults );\n    Mockito.when( 
result.getRuntimeTest() ).thenReturn( Mockito.mock( RuntimeTest.class ) );\n    Mockito.when( result.getRuntimeTestResultEntries() ).thenReturn( entries );\n    Mockito.when( entry.getSeverity() ).thenReturn( RuntimeTestEntrySeverity.FATAL );\n    Mockito.when( entry.getException() ).thenReturn( exception );\n\n    testDialog.getRunnable( progressBar, clusterTestStatus, testLabel ).run();\n\n    Mockito.verify( logChannelInterface,  Mockito.times( 2 ) ).logBasic( ExceptionUtils.getStackTrace( exception ) );\n  }\n\n  @AfterClass\n  public static void tearDown() {\n    KettleLogStore.setLogChannelInterfaceFactory( oldLogChannelInterfaceFactory );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/assemblies/plugin/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>formats-assemblies</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pdi-formats-plugin</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Formats Plugin Distribution</name>\n\n  <properties>\n    <resources.directory>${project.basedir}/src/main/resources</resources.directory>\n    <assembly.dir>${project.build.directory}/assembly</assembly.dir>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-formats-core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/formats/assemblies/plugin/src/assembly/assembly.xml",
    "content": "<assembly xmlns=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3\"\n          xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n          xsi:schemaLocation=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd\">\n  <id>zip</id>\n  <formats>\n    <format>zip</format>\n  </formats>\n\n  <baseDirectory></baseDirectory>\n\n  <fileSets>\n    <fileSet>\n      <directory>${resources.directory}</directory>\n      <outputDirectory>.</outputDirectory>\n      <filtered>true</filtered>\n    </fileSet>\n\n    <!-- the staging dir -->\n    <fileSet>\n      <directory>${assembly.dir}</directory>\n      <outputDirectory>.</outputDirectory>\n    </fileSet>\n  </fileSets>\n\n  <dependencySets>\n    <dependencySet>\n      <outputDirectory>.</outputDirectory>\n      <includes>\n        <include>pentaho:pdi-formats-core:jar</include>\n      </includes>\n      <useProjectArtifact>false</useProjectArtifact>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <outputDirectory>.</outputDirectory>\n      <useTransitiveDependencies>false</useTransitiveDependencies>\n      <useProjectArtifact>false</useProjectArtifact>\n      <includes>\n        <include>pentaho:pdi-formats-core:jar</include>\n      </includes>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <useProjectArtifact>false</useProjectArtifact>\n      <outputDirectory>lib</outputDirectory>\n      <excludes>\n        <exclude>pentaho:pdi-formats-core:*</exclude>\n      </excludes>\n      <includes>\n        <include>pentaho:pentaho-big-data-kettle-plugins-formats-meta</include>\n      </includes>\n    </dependencySet>\n  </dependencySets>\n</assembly>"
  },
  {
    "path": "kettle-plugins/formats/assemblies/plugin/src/main/resources/version.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<version branch='TRUNK'>${project.version}</version>"
  },
  {
    "path": "kettle-plugins/formats/assemblies/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-formats</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>formats-assemblies</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Formats Plugin Assemblies</name>\n\n  <modules>\n    <module>plugin</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/formats/core/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-formats</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pdi-formats-core</artifactId>\n  <name>PDI Formats Core</name>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n    <easymock.versin>3.0</easymock.versin>\n    <jface.version>3.3.0-I20070606-0010</jface.version>\n    <mockito.version>4.0.0</mockito.version>\n    <org.apache.orc.version>1.9.6</org.apache.orc.version>\n  </properties>\n\n  <!-- VERIFY THESE IMPORTS THAT WERE IN THE BUILD SECTION WHEN THE PLUGIN WAS OSGI. ARE THEY NEEDED?\n            <Export-Package>org.pentaho.big.data.kettle.plugins.formats.impl.*;version=${project.version}</Export-Package>\n            <Import-Package>org.eclipse.swt*;resolution:=optional,org.pentaho.di.ui.xul*;resolution:=optional,org.pentaho.ui.xul*;resolution:=optional,org.pentaho.di.osgi,org.pentaho.di.core.plugins,org.pentaho.hadoop.shim.api.cluster,*</Import-Package>\n  -->\n  <build>\n    <resources>\n      <resource>\n        <directory>src/main/resources</directory>\n        <filtering>false</filtering>\n      </resource>\n      <resource>\n        <directory>src/main/resources-filtered</directory>\n        <filtering>true</filtering>\n      </resource>\n    </resources>\n  </build>\n\n  <!--\n    formats depends on \"formats-meta\". 
Should \"formats-meta\" be merged with \"formats\"?\n  -->\n\n\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-formats-meta</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.di.plugins</groupId>\n      <artifactId>pentaho-metastore-locator-api</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-ui-swt</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse</groupId>\n      <artifactId>jface</artifactId>\n      <version>${jface.version}</version>\n      <scope>provided</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.orc</groupId>\n      <artifactId>orc-core</artifactId>\n      <version>${org.apache.orc.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>org.apache.hadoop</groupId>\n          <artifactId>hadoop-client-api</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    
<dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-inline</artifactId>\n      <version>${mockito.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>pentaho-hadoop-shims-common-services-api</artifactId>\n      <version>${pdi.version}</version>\n      <scope>compile</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-cluster</artifactId>\n      <version>${pdi.version}</version>\n      <scope>compile</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy-core</artifactId>\n      <version>${project.version}</version>\n      <scope>compile</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.parquet</groupId>\n      <artifactId>parquet-hadoop</artifactId>\n      <version>${parquet.version}</version>\n      <scope>provided</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/NamedClusterResolver.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\npackage org.pentaho.big.data.kettle.plugins.formats.impl;\n\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.di.core.logging.LogChannel;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\n\nimport java.net.URI;\nimport java.net.URISyntaxException;\nimport java.util.Collection;\nimport java.util.Optional;\n\npublic class NamedClusterResolver {\n\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n  private final NamedClusterService namedClusterService;\n  private MetastoreLocator metaStoreService;\n  private static NamedClusterResolver namedClusterResolver = null;\n\n  private NamedClusterResolver() {\n    this( BigDataServicesHelper.getNamedClusterServiceLocator(),\n      NamedClusterManager.getInstance() );\n  }\n\n  private NamedClusterResolver( NamedClusterServiceLocator namedClusterServiceLocator,\n                                NamedClusterService namedClusterService ) {\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n    this.namedClusterService = namedClusterService;\n  }\n\n  public static synchronized NamedClusterResolver getInstance() {\n    if ( namedClusterResolver == null ) {\n      
namedClusterResolver = new NamedClusterResolver();\n    }\n    return namedClusterResolver;\n  }\n\n  protected synchronized MetastoreLocator getMetastoreLocator() {\n    if ( this.metaStoreService == null ) {\n      try {\n        Collection<MetastoreLocator> metastoreLocators = PluginServiceLoader.loadServices( MetastoreLocator.class );\n        this.metaStoreService = metastoreLocators.stream().findFirst().orElse( null );\n      } catch ( Exception e ) {\n        LOG.logError( \"Error getting MetastoreLocator\", e );\n      }\n    }\n    return this.metaStoreService;\n  }\n\n  private static final LogChannelInterface LOG = LogChannel.GENERAL;\n\n  public NamedCluster resolveNamedCluster( String fileName ) {\n    return resolveNamedCluster( fileName, null );\n  }\n\n  public NamedCluster resolveNamedCluster( String fileName, String embeddedMetastoreKey ) {\n    NamedCluster namedCluster = null;\n    Optional<URI> uri = fileUri( fileName );\n\n    if ( uri.isPresent() ) {\n      String scheme = uri.get().getScheme();\n      String hostName = uri.get().getHost();\n      MetastoreLocator metastoreLocator = getMetastoreLocator();\n\n      if ( metastoreLocator != null ) {\n        if ( scheme != null && scheme.equals( \"hc\" ) ) {\n          namedCluster = namedClusterService.getNamedClusterByName( hostName, metastoreLocator.getMetastore() );\n          if ( namedCluster == null && embeddedMetastoreKey != null ) {\n            namedCluster = namedClusterService\n              .getNamedClusterByName( hostName, metastoreLocator.getExplicitMetastore( embeddedMetastoreKey ) );\n          }\n        } else {\n          namedCluster\n            = namedClusterService.getNamedClusterByHost( hostName, metastoreLocator.getMetastore( embeddedMetastoreKey ) );\n          if ( namedCluster == null && embeddedMetastoreKey != null ) {\n            namedCluster = namedClusterService\n              .getNamedClusterByHost( hostName, metastoreLocator.getExplicitMetastore( 
embeddedMetastoreKey ) );\n          }\n        }\n      }\n    }\n    return namedCluster;\n  }\n\n  private Optional<URI> fileUri( String fileName ) {\n    try {\n      return Optional.of( new URI( fileName ) );\n    } catch ( URISyntaxException e ) {\n      LOG.logDebug( String.format( \"Couldn't parse %s as a URI.\", fileName ) );\n      return Optional.empty();\n    }\n  }\n\n  public NamedClusterServiceLocator getNamedClusterServiceLocator() {\n    return namedClusterServiceLocator;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/NullableValuesEnum.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl;\n\n\n/**\n * Enum with the valid list of Nullable values - used for the Nullable combo box\n * <p>\n * Also contains convenience methods to get the default value and to return the list of values as strings for populating the combo box\n */\npublic enum NullableValuesEnum {\n  YES( \"Yes\" ),\n  NO( \"No\" );\n\n  private String value;\n\n  NullableValuesEnum( String value ) {\n    this.value = value;\n  }\n\n  public String getValue() {\n    return value;\n  }\n\n  public static NullableValuesEnum getDefaultValue() {\n    return NullableValuesEnum.YES;\n  }\n\n  public static String[] getValuesArr() {\n    String[] valueArr = new String[ NullableValuesEnum.values().length ];\n\n    int i = 0;\n\n    for ( NullableValuesEnum nullValueEnum : NullableValuesEnum.values() ) {\n      valueArr[ i++ ] = nullValueEnum.getValue();\n    }\n\n    return valueArr;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/orc/BaseOrcStepDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.orc;\n\nimport org.eclipse.jface.window.DefaultToolTip;\nimport org.eclipse.jface.window.ToolTip;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.events.ModifyListener;\nimport org.eclipse.swt.events.MouseEvent;\nimport org.eclipse.swt.events.MouseTrackAdapter;\nimport org.eclipse.swt.events.ShellAdapter;\nimport org.eclipse.swt.events.ShellEvent;\nimport org.eclipse.swt.graphics.GC;\nimport org.eclipse.swt.graphics.Point;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Control;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.Table;\nimport org.eclipse.swt.widgets.TableItem;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDialogInterface;\nimport org.pentaho.di.trans.step.StepMetaInterface;\nimport org.pentaho.di.ui.core.ConstUI;\nimport org.pentaho.di.ui.core.events.dialog.SelectionAdapterFileDialogTextVar;\nimport org.pentaho.di.ui.core.events.dialog.SelectionAdapterOptions;\nimport 
org.pentaho.di.ui.core.events.dialog.SelectionOperation;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.trans.step.BaseStepDialog;\n\npublic abstract class BaseOrcStepDialog<T extends BaseStepMeta & StepMetaInterface> extends BaseStepDialog\n  implements StepDialogInterface {\n  protected final Class<?> PKG = getClass();\n  protected static final Class<?> BPKG = BaseOrcStepDialog.class;\n\n  protected T meta;\n  protected ModifyListener lsMod;\n\n  public static final int MARGIN = 15;\n  public static final int FIELDS_SEP = 10;\n  public static final int FIELD_LABEL_SEP = 5;\n\n  public static final int FIELD_SMALL = 150;\n  public static final int FIELD_MEDIUM = 250;\n  public static final int FIELD_LARGE = 350;\n\n  private static final String ELLIPSIS = \"...\";\n  private static final int TABLE_ITEM_MARGIN = 2;\n  private static final int TOOLTIP_SHOW_DELAY = 350;\n  private static final int TOOLTIP_HIDE_DELAY = 2000;\n  // width of the icon in a varfield\n  protected static final int VAR_EXTRA_WIDTH = GUIResource.getInstance().getImageVariable().getBounds().width;\n\n  protected TextVar wPath;\n  protected Button wbBrowse;\n\n  public BaseOrcStepDialog( Shell parent, T in, TransMeta transMeta, String sname ) {\n    super( parent, (BaseStepMeta) in, transMeta, sname );\n    meta = in;\n  }\n\n  @Override\n  public String open() {\n    Shell parent = getParent();\n    Display display = parent.getDisplay();\n\n    shell = new Shell( parent, SWT.DIALOG_TRIM | SWT.RESIZE );\n    props.setLook( shell );\n    setShellImage( shell, meta );\n\n    lsMod = e -> meta.setChanged();\n    changed = meta.hasChanged();\n\n    createUI();\n\n    // Detect X or ALT-F4 or something that kills this window...\n    shell.addShellListener( new ShellAdapter() {\n      @Override\n      public void shellClosed( ShellEvent e ) {\n        cancel();\n      }\n    } );\n\n    int height = Math.max( getMinHeight( 
shell, getWidth() ), getHeight() );\n    shell.setMinimumSize( getWidth(), height );\n    shell.setSize( getWidth(), height );\n    getData( meta );\n    shell.open();\n    wStepname.setFocus();\n    while ( !shell.isDisposed() ) {\n      if ( !display.readAndDispatch() ) {\n        display.sleep();\n      }\n    }\n    return stepname;\n  }\n\n  protected abstract void createUI();\n\n  protected Control createFooter( Composite shell ) {\n\n    wCancel = new Button( shell, SWT.PUSH );\n    wCancel.setText( getMsg( \"System.Button.Cancel\" ) );\n    wCancel.addListener( SWT.Selection, lsCancel );\n    new FD( wCancel ).right( 100, 0 ).bottom( 100, 0 ).apply();\n\n    // Some buttons\n    wOK = new Button( shell, SWT.PUSH );\n    wOK.setText( getMsg( \"System.Button.OK\" ) );\n    wOK.addListener( SWT.Selection, lsOK );\n    new FD( wOK ).right( wCancel, -FIELD_LABEL_SEP ).bottom( 100, 0 ).apply();\n    lsPreview = getPreview();\n    if ( lsPreview != null ) {\n      wPreview = new Button( shell, SWT.PUSH );\n      wPreview.setText( getBaseMsg( \"BaseStepDialog.Preview\" ) );\n      wPreview.pack();\n      wPreview.addListener( SWT.Selection, lsPreview );\n      int offset = wPreview.getBounds().width / 2;\n      new FD( wPreview ).left( 50, -offset ).bottom( 100, 0 ).apply();\n    }\n    return wCancel;\n  }\n\n  protected void cancel() {\n    stepname = null;\n    meta.setChanged( changed );\n    dispose();\n  }\n\n  protected void ok() {\n    if ( Utils.isEmpty( wStepname.getText() ) ) {\n      return;\n    }\n    stepname = wStepname.getText();\n\n    getInfo( meta, false );\n    dispose();\n  }\n\n  protected abstract String getStepTitle();\n\n\n  /**\n   * Read the data from the meta object and show it in this dialog.\n   *\n   * @param meta The meta object to obtain the data from.\n   */\n  protected abstract void getData( T meta );\n\n  /**\n   * Fill meta object from UI options.\n   *\n   * @param meta    meta object\n   * @param preview flag for preview or 
real options. Currently, only the EOL-chars option differs for preview:\n   *                it is treated as \"mixed\" so that any file can be previewed.\n   */\n  protected abstract void getInfo( T meta, boolean preview );\n\n  protected abstract int getWidth();\n\n  protected abstract int getHeight();\n\n  protected abstract Listener getPreview();\n\n  protected Label createHeader() {\n    // main form\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = 15;\n    formLayout.marginHeight = 15;\n    shell.setLayout( formLayout );\n    // title\n    shell.setText( getStepTitle() );\n    // buttons\n    lsOK = e -> ok();\n    lsCancel = e -> cancel();\n\n    // Stepname label\n    wlStepname = new Label( shell, SWT.RIGHT );\n    wlStepname.setText( getBaseMsg( \"BaseStepDialog.StepName\" ) );\n    props.setLook( wlStepname );\n    new FD( wlStepname ).left( 0, 0 ).top( 0, 0 ).apply();\n    // Stepname field\n    wStepname = new Text( shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wStepname.setText( stepname );\n    props.setLook( wStepname );\n    wStepname.addModifyListener( lsMod );\n    new FD( wStepname ).left( 0, 0 ).top( wlStepname, FIELD_LABEL_SEP ).width( FIELD_MEDIUM ).rright().apply();\n\n    // separator\n    Label separator = new Label( shell, SWT.HORIZONTAL | SWT.SEPARATOR );\n    FormData fdSpacer = new FormData();\n    fdSpacer.height = 2;\n    fdSpacer.left = new FormAttachment( 0, 0 );\n    fdSpacer.top = new FormAttachment( wStepname, 15 );\n    fdSpacer.right = new FormAttachment( 100, 0 );\n    separator.setLayoutData( fdSpacer );\n\n    addIcon();\n    return separator;\n  }\n\n  protected void addIcon() {\n    Label wicon = new Label( shell, SWT.RIGHT );\n    String stepId = meta.getParentStepMeta().getStepID();\n    wicon.setImage( GUIResource.getInstance().getImagesSteps().get( stepId ).getAsBitmapForSize( shell.getDisplay(),\n        ConstUI.LARGE_ICON_SIZE, ConstUI.LARGE_ICON_SIZE ) );\n    FormData 
fdlicon = new FormData();\n    fdlicon.top = new FormAttachment( 0, 0 );\n    fdlicon.right = new FormAttachment( 100, 0 );\n    wicon.setLayoutData( fdlicon );\n    props.setLook( wicon );\n  }\n\n  protected Control addFileWidgets( Control prev ) {\n    Label wlPath = new Label( shell, SWT.RIGHT );\n    wlPath.setText( getBaseMsg( \"OrcDialog.Filename.Label\" ) );\n    props.setLook( wlPath );\n    new FD( wlPath ).left( 0, 0 ).top( prev, MARGIN ).apply();\n    wPath = new TextVar( transMeta, shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wPath.addModifyListener( event -> {\n      if ( wPreview != null ) {\n        wPreview.setEnabled( !Utils.isEmpty( wPath.getText() ) );\n      }\n    } );\n    props.setLook( wPath );\n    wPath.addModifyListener( lsMod );\n    new FD( wPath ).left( 0, 0 ).top( wlPath, FIELD_LABEL_SEP ).width( FIELD_LARGE + VAR_EXTRA_WIDTH ).rright().apply();\n\n\n    wbBrowse = new Button( shell, SWT.PUSH );\n    props.setLook( wbBrowse );\n    wbBrowse.setText( getMsg( \"System.Button.Browse\" ) );\n    wbBrowse.addSelectionListener( new SelectionAdapterFileDialogTextVar(\n      log, wPath, transMeta, new SelectionAdapterOptions( transMeta.getBowl(), selectionOperation() ) ) );\n    int bOffset = ( wbBrowse.computeSize( SWT.DEFAULT, SWT.DEFAULT, false ).y\n      - wPath.computeSize( SWT.DEFAULT, SWT.DEFAULT, false ).y ) / 2;\n    new FD( wbBrowse ).left( wPath, FIELD_LABEL_SEP ).top( wlPath, FIELD_LABEL_SEP - bOffset ).apply();\n    return wPath;\n  }\n\n  protected abstract SelectionOperation selectionOperation();\n\n  protected String getBaseMsg( String key ) {\n    return BaseMessages.getString( BPKG, key );\n  }\n\n  protected String getMsg( String key ) {\n    return BaseMessages.getString( PKG, key );\n  }\n\n  /**\n   * Class for applying layout settings to SWT controls.\n   */\n  protected class FD {\n    private final Control control;\n    private final FormData fd;\n\n    public FD( Control control ) {\n      this.control = 
control;\n      props.setLook( control );\n      fd = new FormData();\n    }\n\n    public FD width( int width ) {\n      fd.width = width;\n      return this;\n    }\n\n    public FD height( int height ) {\n      fd.height = height;\n      return this;\n    }\n\n    public FD top( int numerator, int offset ) {\n      fd.top = new FormAttachment( numerator, offset );\n      return this;\n    }\n\n    public FD top( Control control, int offset ) {\n      fd.top = new FormAttachment( control, offset );\n      return this;\n    }\n\n    public FD bottom( int numerator, int offset ) {\n      fd.bottom = new FormAttachment( numerator, offset );\n      return this;\n    }\n\n    public FD bottom( Control control, int offset ) {\n      fd.bottom = new FormAttachment( control, offset );\n      return this;\n    }\n\n    public FD left( int numerator, int offset ) {\n      fd.left = new FormAttachment( numerator, offset );\n      return this;\n    }\n\n    public FD left( int numerator ) {\n      return left( numerator, 0 );\n    }\n\n    public FD left( Control control, int offset ) {\n      fd.left = new FormAttachment( control, offset );\n      return this;\n    }\n\n    public FD right( int numerator, int offset ) {\n      fd.right = new FormAttachment( numerator, offset );\n      return this;\n    }\n\n    public FD rright() {\n      fd.right = new FormAttachment( 100, -getControlOffset( control, fd.width ) );\n      return this;\n    }\n\n    public FD right( Control control, int offset ) {\n      fd.right = new FormAttachment( control, offset );\n      return this;\n    }\n\n    public void apply() {\n      control.setLayoutData( fd );\n    }\n  }\n\n  protected int getMinHeight( Composite comp, int minWidth ) {\n    comp.pack();\n    return comp.computeSize( minWidth, SWT.DEFAULT ).y;\n  }\n\n  protected void setTruncatedColumn( Table table, int targetColumn ) {\n    table.addListener( SWT.EraseItem, event -> {\n      if ( event.index == targetColumn ) {\n        
event.detail &= ~SWT.FOREGROUND;\n      }\n    } );\n    table.addListener( SWT.PaintItem, event -> {\n      TableItem item = (TableItem) event.item;\n      int colIdx = event.index;\n      if ( colIdx == targetColumn ) {\n        String contents = item.getText( colIdx );\n        if ( Utils.isEmpty( contents ) ) {\n          return;\n        }\n        Point size = event.gc.textExtent( contents );\n        int targetWidth = item.getBounds( colIdx ).width;\n        int yOffset = Math.max( 0, ( event.height - size.y ) / 2 );\n        if ( size.x > targetWidth ) {\n          contents = shortenText( event.gc, contents, targetWidth );\n        }\n        event.gc.drawText( contents, event.x + TABLE_ITEM_MARGIN, event.y + yOffset, true );\n      }\n    } );\n  }\n\n\n  protected void addColumnTooltip( Table table, int columnIndex ) {\n    final DefaultToolTip toolTip = new DefaultToolTip( table, ToolTip.RECREATE, true );\n    toolTip.setRespectMonitorBounds( true );\n    toolTip.setRespectDisplayBounds( true );\n    toolTip.setPopupDelay( TOOLTIP_SHOW_DELAY );\n    toolTip.setHideDelay( TOOLTIP_HIDE_DELAY );\n    toolTip.setShift( new Point( ConstUI.TOOLTIP_OFFSET, ConstUI.TOOLTIP_OFFSET ) );\n    table.addMouseTrackListener( new MouseTrackAdapter() {\n      @Override\n\n      public void mouseHover( MouseEvent e ) {\n        Point coord = new Point( e.x, e.y );\n        TableItem item = table.getItem( coord );\n        if ( item != null && item.getBounds( columnIndex ).contains( coord ) ) {\n          String contents = item.getText( columnIndex );\n          if ( !Utils.isEmpty( contents ) ) {\n            toolTip.setText( contents );\n            toolTip.show( coord );\n            return;\n          }\n        }\n        toolTip.hide();\n      }\n\n      @Override\n      public void mouseExit( MouseEvent e ) {\n        toolTip.hide();\n      }\n    } );\n  }\n\n  protected String shortenText( GC gc, String text, final int targetWidth ) {\n    if ( Utils.isEmpty( text 
) ) {\n      return \"\";\n    }\n    int textWidth = gc.textExtent( text ).x;\n    int extra = gc.textExtent( ELLIPSIS ).x + 2 * TABLE_ITEM_MARGIN;\n    if ( targetWidth <= extra || textWidth <= targetWidth ) {\n      return text;\n    }\n    int len = text.length();\n    for ( int chomp = 1; chomp < len && textWidth + extra >= targetWidth; chomp++ ) {\n      text = text.substring( 0, text.length() - 1 );\n      textWidth = gc.textExtent( text ).x;\n    }\n    return text + ELLIPSIS;\n  }\n\n  private int getControlOffset( Control control, int controlWidth ) {\n    // remaining space for min size match\n    return getWidth() - getMarginWidths( control ) - controlWidth;\n  }\n\n  private int getMarginWidths( Control control ) {\n    // get the width added by container margins and (wm-specific) decorations\n    int extraWidth = 0;\n    for ( Composite parent = control.getParent(); !parent.equals( getParent() ); parent = parent.getParent() ) {\n      extraWidth += parent.computeTrim( 0, 0, 0, 0 ).width;\n      if ( parent.getLayout() instanceof FormLayout ) {\n        extraWidth += 2 * ( (FormLayout) parent.getLayout() ).marginWidth;\n      }\n    }\n    return extraWidth;\n  }\n\n  protected void setIntegerOnly( TextVar textVar ) {\n    textVar.getTextWidget().addVerifyListener( e -> {\n      if ( !StringUtil.isEmpty( e.text ) && !StringUtil.isVariable( e.text ) && !StringUtil.IsInteger( e.text ) ) {\n        e.doit = false;\n      }\n    } );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/orc/input/OrcInput.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.orc.input;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.big.data.kettle.plugins.formats.orc.input.OrcInputMetaBase;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.RowMetaAndData;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.vfs.AliasedFileObject;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\nimport org.pentaho.di.trans.steps.file.BaseFileInputStep;\nimport org.pentaho.di.trans.steps.file.IBaseFileInputReader;\nimport org.pentaho.hadoop.shim.api.format.FormatService;\nimport org.pentaho.hadoop.shim.api.format.IOrcInputField;\nimport org.pentaho.hadoop.shim.api.format.IPentahoOrcInputFormat;\n\nimport java.util.Arrays;\nimport java.util.List;\n\npublic class OrcInput extends BaseFileInputStep<OrcInputMeta, OrcInputData> {\n  public static final long SPLIT_SIZE = 128L * 1024L * 1024L;\n\n  public OrcInput( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta,\n                   Trans trans ) {\n    super( 
stepMeta, stepDataInterface, copyNr, transMeta, trans );\n  }\n\n  @Override\n  public boolean processRow( StepMetaInterface smi, StepDataInterface sdi ) throws KettleException {\n    meta = (OrcInputMeta) smi;\n    data = (OrcInputData) sdi;\n    try {\n      if ( data.input == null || data.reader == null || data.rowIterator == null ) {\n        FormatService formatService = getFormatService();\n        if ( meta.inputFiles == null || meta.getFilename() == null || meta.getFilename().length() == 0 ) {\n          throw new KettleException( \"No input files defined\" );\n        }\n        data.input = formatService.createInputFormat( IPentahoOrcInputFormat.class, getNamedCluster() );\n\n        String inputFileName = getKettleVFSFileName( getTransMeta().getBowl(),\n          meta.getParentStepMeta().getParentTransMeta().environmentSubstitute( meta.getFilename() ) );\n\n        data.input.setInputFile( inputFileName );\n        data.input.setSchema( createSchemaFromMeta( meta ) );\n        data.reader = data.input.createRecordReader( null );\n        data.rowIterator = data.reader.iterator();\n      }\n      if ( data.rowIterator.hasNext() ) {\n        RowMetaAndData row = data.rowIterator.next();\n        putRow( row.getRowMeta(), row.getData() );\n        return true;\n      } else {\n        data.reader.close();\n        data.reader = null;\n        data.input = null;\n        setOutputDone();\n        return false;\n      }\n    } catch ( KettleException ex ) {\n      throw ex;\n    } catch ( Exception ex ) {\n      throw new KettleException( ex );\n    }\n  }\n\n  private NamedCluster getNamedCluster() {\n    return meta.getNamedClusterResolver().resolveNamedCluster( environmentSubstitute( meta.getFilename() ) );\n  }\n\n  private FormatService getFormatService() throws KettleException {\n    FormatService formatService;\n    try {\n      formatService = meta.getNamedClusterResolver().getNamedClusterServiceLocator()\n        .getService( getNamedCluster(), 
FormatService.class );\n    } catch ( ClusterInitializationException e ) {\n      throw new KettleException( \"Unable to get FormatService from the shim\", e );\n    }\n    return formatService;\n  }\n\n\n  @Override\n  protected boolean init() {\n    return true;\n  }\n\n  @Override\n  protected IBaseFileInputReader createReader( OrcInputMeta meta, OrcInputData data, FileObject file )\n    throws Exception {\n    return null;\n  }\n\n  public static List<IOrcInputField> retrieveSchema( Bowl bowl, NamedClusterServiceLocator namedClusterServiceLocator,\n                                                     NamedCluster namedCluster, String dataPath ) throws Exception {\n    FormatService formatService = namedClusterServiceLocator.getService( namedCluster, FormatService.class );\n    IPentahoOrcInputFormat in = formatService.createInputFormat( IPentahoOrcInputFormat.class, namedCluster );\n\n    in.setInputFile( getKettleVFSFileName( bowl, dataPath ) );\n    return in.readSchema();\n  }\n\n  public static List<IOrcInputField> createSchemaFromMeta( OrcInputMetaBase meta ) {\n    return Arrays.asList( meta.getInputFields() );\n  }\n\n  public static String getKettleVFSFileName( Bowl bowl, String path ) throws KettleFileException {\n    String inputFileName = path;\n    FileObject inputFileObject = KettleVFS.getInstance( bowl ).getFileObject( path );\n    if ( AliasedFileObject.isAliasedFile( inputFileObject ) ) {\n      inputFileName = ( (AliasedFileObject) inputFileObject ).getOriginalURIString();\n    }\n\n    return inputFileName;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/orc/input/OrcInputData.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.orc.input;\n\nimport java.util.Iterator;\n\nimport org.pentaho.di.core.RowMetaAndData;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.trans.steps.file.BaseFileInputStepData;\nimport org.pentaho.hadoop.shim.api.format.IPentahoInputFormat.IPentahoRecordReader;\nimport org.pentaho.hadoop.shim.api.format.IPentahoOrcInputFormat;\n\npublic class OrcInputData extends BaseFileInputStepData {\n  IPentahoOrcInputFormat input;\n  IPentahoRecordReader reader;\n  Iterator<RowMetaAndData> rowIterator;\n  RowMetaInterface outputRowMeta;\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/orc/input/OrcInputDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.orc.input;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Control;\nimport org.eclipse.swt.widgets.Group;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.TableItem;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.orc.BaseOrcStepDialog;\nimport org.pentaho.big.data.kettle.plugins.formats.orc.OrcInputField;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.row.value.ValueMetaFactory;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.TransPreviewFactory;\nimport org.pentaho.di.ui.core.dialog.EnterNumberDialog;\nimport org.pentaho.di.ui.core.dialog.EnterTextDialog;\nimport org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.di.ui.core.dialog.PreviewRowsDialog;\nimport org.pentaho.di.ui.core.events.dialog.SelectionOperation;\nimport org.pentaho.di.ui.core.widget.ColumnInfo;\nimport org.pentaho.di.ui.core.widget.ColumnsResizer;\nimport org.pentaho.di.ui.core.widget.TableView;\nimport org.pentaho.di.ui.trans.dialog.TransPreviewProgressDialog;\nimport 
org.pentaho.hadoop.shim.api.format.IOrcInputField;\nimport org.pentaho.hadoop.shim.api.format.OrcSpec;\n\nimport java.util.List;\n\n@PluginDialog( id = \"OrcInput\", image = \"OI.svg\", pluginType = PluginDialog.PluginType.STEP,\n        documentationUrl = \"pdi-transformation-steps-reference-overview/orc-input\" )\npublic class OrcInputDialog extends BaseOrcStepDialog<OrcInputMeta> {\n\n  private static final int SHELL_WIDTH = 526;\n  private static final int SHELL_HEIGHT = 506;\n\n  private static final int ORC_PATH_COLUMN_INDEX = 1;\n\n  private static final int FIELD_NAME_COLUMN_INDEX = 2;\n\n  private static final int FIELD_TYPE_COLUMN_INDEX = 3;\n\n  private static final int FORMAT_COLUMN_INDEX = 4;\n\n  private static final int FIELD_SOURCE_TYPE_COLUMN_INDEX = 5;\n  private static final String UNABLE_TO_LOAD_SCHEMA_FROM_CONTAINER_FILE =\n    \"OrcInput.Error.UnableToLoadSchemaFromContainerFile\";\n\n  private TableView wInputFields;\n\n  private Button wPassThruFields;\n\n  public OrcInputDialog( Shell parent, Object in, TransMeta transMeta, String sname ) {\n    super( parent, (OrcInputMeta) in, transMeta, sname );\n  }\n\n  @Override\n  protected void createUI( ) {\n    Control prev = createHeader();\n\n    //main fields\n    prev = addFileWidgets( prev );\n\n    createFooter( shell );\n\n    Label separator = new Label( shell, SWT.HORIZONTAL | SWT.SEPARATOR );\n    FormData fdSpacer = new FormData();\n    fdSpacer.height = 2;\n    fdSpacer.left = new FormAttachment( 0, 0 );\n    fdSpacer.bottom = new FormAttachment( wCancel, -MARGIN );\n    fdSpacer.right = new FormAttachment( 100, 0 );\n    separator.setLayoutData( fdSpacer );\n\n    Group fieldsContainer = new Group( shell, SWT.SHADOW_IN );\n    fieldsContainer.setLayout( new FormLayout() );\n    fieldsContainer.setText( BaseMessages.getString( PKG, \"OrcInputDialog.Fields.Label\" ) );\n    new FD( fieldsContainer ).left( 0, 0 ).top( prev, MARGIN ).right( 100, 0 ).bottom( separator, -MARGIN 
).apply();\n\n    // Accept fields from previous steps?\n    //\n    wPassThruFields = new Button( fieldsContainer, SWT.CHECK );\n    wPassThruFields.setText( BaseMessages.getString( PKG, \"OrcInputDialog.PassThruFields.Label\" ) );\n    wPassThruFields.setToolTipText( BaseMessages.getString( PKG, \"OrcInputDialog.PassThruFields.Tooltip\" ) );\n    wPassThruFields.setOrientation( SWT.LEFT_TO_RIGHT );\n    props.setLook( wPassThruFields );\n    new FD( wPassThruFields ).left( 0, MARGIN ).top( 0, MARGIN ).apply();\n\n    // get fields button\n    lsGet = e -> populateFieldsTable();\n    Button wGetFields = new Button( fieldsContainer, SWT.PUSH );\n    wGetFields.setText( BaseMessages.getString( PKG, \"OrcInputDialog.Fields.Get\" ) );\n    props.setLook( wGetFields );\n    new FD( wGetFields ).bottom( 100, -FIELDS_SEP ).right( 100, -MARGIN ).apply();\n    wGetFields.addListener( SWT.Selection, lsGet );\n\n    // fields table\n    ColumnInfo orcPathColumnInfo = new ColumnInfo( BaseMessages.getString( PKG, \"OrcInputDialog.Fields.column.Path\" ), ColumnInfo.COLUMN_TYPE_TEXT, false, true );\n    ColumnInfo nameColumnInfo = new ColumnInfo( BaseMessages.getString( PKG, \"OrcInputDialog.Fields.column.Name\" ), ColumnInfo.COLUMN_TYPE_TEXT, false, false );\n    ColumnInfo typeColumnInfo = new ColumnInfo( BaseMessages.getString( PKG, \"OrcInputDialog.Fields.column.Type\" ), ColumnInfo.COLUMN_TYPE_CCOMBO, ValueMetaFactory.getValueMetaNames() );\n    ColumnInfo formatColumnInfo = new ColumnInfo( BaseMessages.getString( PKG, \"OrcInputDialog.Fields.column.Format\" ), ColumnInfo.COLUMN_TYPE_CCOMBO, Const.getDateFormats() );\n    ColumnInfo sourceTypeColumnInfo = new ColumnInfo( BaseMessages.getString( PKG, \"OrcInputDialog.Fields.column.SourceType\" ), ColumnInfo.COLUMN_TYPE_TEXT, ValueMetaFactory.getValueMetaNames(), true );\n\n    ColumnInfo[] parameterColumns = new ColumnInfo[] { orcPathColumnInfo, nameColumnInfo, typeColumnInfo, formatColumnInfo, sourceTypeColumnInfo };\n    
parameterColumns[0].setAutoResize( false );\n    parameterColumns[1].setUsingVariables( true );\n    parameterColumns[3].setAutoResize( false );\n\n    wInputFields = new TableView( transMeta, fieldsContainer, SWT.FULL_SELECTION | SWT.SINGLE | SWT.BORDER | SWT.NO_SCROLL | SWT.V_SCROLL, parameterColumns, 7, null, props );\n    ColumnsResizer resizer = new ColumnsResizer( 0, 40, 20, 20, 20, 0 );\n    wInputFields.getTable().addListener( SWT.Resize, resizer );\n\n    props.setLook( wInputFields );\n    new FD( wInputFields ).left( 0, MARGIN ).right( 100, -MARGIN ).top( wPassThruFields, FIELDS_SEP )\n            .bottom( wGetFields, -FIELDS_SEP ).apply();\n\n    wInputFields.setRowNums();\n    wInputFields.optWidth( true );\n\n    for ( ColumnInfo col : parameterColumns ) {\n      col.setAutoResize( false );\n    }\n    resizer.addColumnResizeListeners( wInputFields.getTable() );\n    setTruncatedColumn( wInputFields.getTable(), 1 );\n    if ( !Const.isWindows() ) {\n      addColumnTooltip( wInputFields.getTable(), 1 );\n    }\n  }\n\n  protected void populateFieldsTable() {\n    try {\n      List<? 
extends IOrcInputField> inputFields = getInputFieldsFromOrcFile( false );\n      wInputFields.clearAll();\n      for ( IOrcInputField field : inputFields ) {\n        TableItem item = new TableItem( wInputFields.table, SWT.NONE );\n        if ( field != null ) {\n          setField( item, concatenateOrcNameAndType( field ), ORC_PATH_COLUMN_INDEX );\n          setField( item, field.getPentahoFieldName(), FIELD_NAME_COLUMN_INDEX );\n          setField( item, ValueMetaFactory.getValueMetaName( field.getPentahoType() ), FIELD_TYPE_COLUMN_INDEX );\n          setField( item, OrcSpec.DataType.getDataType( field.getFormatType() ).getName(), FIELD_SOURCE_TYPE_COLUMN_INDEX );\n        }\n      }\n\n      wInputFields.removeEmptyRows();\n      wInputFields.setRowNums();\n      wInputFields.optWidth( true );\n    } catch ( Exception ex ) {\n      logError( BaseMessages.getString( PKG, UNABLE_TO_LOAD_SCHEMA_FROM_CONTAINER_FILE ), ex );\n      new ErrorDialog( shell, stepname, BaseMessages.getString( PKG,\n        UNABLE_TO_LOAD_SCHEMA_FROM_CONTAINER_FILE, getProcessedFileName() ), ex );\n    }\n  }\n\n  private String getProcessedFileName() {\n    return transMeta.environmentSubstitute( wPath.getText() );\n  }\n\n  private List<? extends IOrcInputField> getInputFieldsFromOrcFile( boolean failQuietly ) {\n    String orcFileName = getProcessedFileName();\n    List<? 
extends IOrcInputField> inputFields = null;\n    try {\n      inputFields = OrcInput.retrieveSchema( transMeta.getBowl(),\n        meta.getNamedClusterResolver().getNamedClusterServiceLocator(),\n        meta.getNamedClusterResolver().resolveNamedCluster( orcFileName ), orcFileName );\n    } catch ( Exception ex ) {\n      if ( !failQuietly ) {\n        logError( BaseMessages.getString( PKG, UNABLE_TO_LOAD_SCHEMA_FROM_CONTAINER_FILE ), ex );\n        new ErrorDialog( shell, stepname, BaseMessages.getString( PKG,\n          UNABLE_TO_LOAD_SCHEMA_FROM_CONTAINER_FILE, orcFileName ), ex );\n      }\n    }\n    return inputFields;\n  }\n\n  private void setField( TableItem item, String fieldValue, int fieldIndex ) {\n    if ( !Utils.isEmpty( fieldValue ) ) {\n      item.setText( fieldIndex, fieldValue );\n    }\n  }\n\n  /**\n   * Read the data from the meta object and show it in this dialog.\n   */\n  @Override\n  protected void getData( OrcInputMeta meta ) {\n    if ( meta.getFilename() != null && meta.getFilename().length() > 0 ) {\n      wPath.setText( meta.getFilename() );\n    }\n    wPassThruFields.setSelection( meta.inputFiles.passingThruFields );\n    int itemIndex = 0;\n    for ( IOrcInputField inputField : meta.getInputFields() ) {\n      TableItem item = null;\n      if ( itemIndex < wInputFields.table.getItemCount() ) {\n        item = wInputFields.table.getItem( itemIndex );\n      } else {\n        item = new TableItem( wInputFields.table, SWT.NONE );\n      }\n\n      if ( inputField.getFormatFieldName() != null ) {\n        item.setText( ORC_PATH_COLUMN_INDEX,\n          concatenateOrcNameAndType( inputField ) );\n      }\n      if ( inputField.getPentahoFieldName() != null ) {\n        item.setText( FIELD_NAME_COLUMN_INDEX, inputField.getPentahoFieldName() );\n      }\n      if ( getTypeDesc( inputField.getPentahoType() ) != null ) {\n        item.setText( FIELD_TYPE_COLUMN_INDEX, getTypeDesc( inputField.getPentahoType() ) );\n      }\n      if ( 
getSourceTypeDesc( inputField.getFormatType() ) != null ) {\n        item.setText( FIELD_SOURCE_TYPE_COLUMN_INDEX, getSourceTypeDesc( inputField.getFormatType() ) );\n      }\n      if ( inputField.getStringFormat() != null ) {\n        item.setText( FORMAT_COLUMN_INDEX, inputField.getStringFormat() );\n      } else {\n        item.setText( FORMAT_COLUMN_INDEX, \"\" );\n      }\n      itemIndex++;\n    }\n  }\n\n  public String getTypeDesc( int type ) {\n    return ValueMetaFactory.getValueMetaName( type );\n  }\n\n  public String getSourceTypeDesc( int type ) {\n    return OrcSpec.DataType.getDataType( type ).getName();\n  }\n\n  /**\n   * Fill meta object from UI options.\n   */\n  @Override\n  protected void getInfo( OrcInputMeta meta, boolean preview ) {\n    String filePath = wPath.getText();\n    if ( filePath != null && !filePath.isEmpty() ) {\n      meta.allocateFiles( 1 );\n      meta.setFilename( wPath.getText().trim() );\n    }\n\n    meta.inputFiles.passingThruFields = wPassThruFields.getSelection();\n\n    List<? 
extends IOrcInputField> actualOrcFileInputFields = getInputFieldsFromOrcFile( true );\n\n    int nrFields = wInputFields.nrNonEmpty();\n    meta.setInputFields( new OrcInputField[nrFields] );\n    for ( int i = 0; i < nrFields; i++ ) {\n      TableItem item = wInputFields.getNonEmpty( i );\n      OrcInputField field = new OrcInputField();\n      field.setFormatFieldName( extractFieldName( item.getText( ORC_PATH_COLUMN_INDEX ) ) );\n      if ( actualOrcFileInputFields != null ) {\n        IOrcInputField actualOrcField = actualOrcFileInputFields.stream()\n          .filter( x -> field.getFormatFieldName().equals( x.getFormatFieldName() ) )\n          .findFirst().orElse( null );\n        if ( actualOrcField != null ) {\n          field.setFormatType( actualOrcField.getFormatType() );\n        } else {\n          field.setFormatType( extractOrcType( item.getText( FIELD_SOURCE_TYPE_COLUMN_INDEX ) ).getId() );\n          item.setText( concatenateOrcNameAndType( field ) );\n        }\n      }\n      field.setPentahoFieldName( item.getText( FIELD_NAME_COLUMN_INDEX ) );\n      field.setPentahoType( ValueMetaFactory.getIdForValueMeta( item.getText( FIELD_TYPE_COLUMN_INDEX ) ) );\n      field.setStringFormat( item.getText( FORMAT_COLUMN_INDEX ) );\n      meta.getInputFields()[ i ] = field;\n    }\n  }\n\n  /**\n   * When all else fails, extract the ORC type from the field description.\n   *\n   * @see #concatenateOrcNameAndType(IOrcInputField)\n   */\n  private OrcSpec.DataType extractOrcType( String orcNameTypeFromUI ) {\n    if ( orcNameTypeFromUI != null ) {\n      String uiType = StringUtils.substringBetween( orcNameTypeFromUI, \"(\", \")\" );\n      if ( uiType != null ) {\n        String uiTypeTrimmed = uiType.trim();\n        for ( OrcSpec.DataType temp : OrcSpec.DataType.values() ) {\n          if ( temp.getName().equalsIgnoreCase( uiTypeTrimmed ) ) {\n            return temp;\n          }\n        }\n      }\n    }\n    return null;\n  }\n\n  /**\n   * Get the field 
name from the UI path column\n   *\n   * @see #concatenateOrcNameAndType(IOrcInputField)\n   */\n  private String extractFieldName( String orcNameTypeFromUI ) {\n    if ( orcNameTypeFromUI != null ) {\n      return StringUtils.substringBefore( orcNameTypeFromUI, \"(\" ).trim();\n    }\n    return orcNameTypeFromUI;\n  }\n\n  /**\n   * This method must be changed only together with {@link #extractOrcType(String)}, since it converts the\n   * field for display to the user and the extract methods must convert it back to the internal format.\n   */\n  private String concatenateOrcNameAndType( IOrcInputField field ) {\n    String typeName;\n    OrcSpec.DataType orcDataType = OrcSpec.DataType.getDataType( field.getFormatType() );\n    if ( orcDataType == null ) {\n      typeName = \"unknown\";\n    } else {\n      typeName = orcDataType.getName();\n    }\n    return field.getFormatFieldName() + \" (\" + typeName + \")\";\n  }\n\n  private void doPreview() {\n    getInfo( meta, true );\n    TransMeta previewMeta =\n      TransPreviewFactory.generatePreviewTransformation( transMeta, meta, wStepname.getText() );\n    transMeta.getVariable( \"Internal.Transformation.Filename.Directory\" );\n    previewMeta.getVariable( \"Internal.Transformation.Filename.Directory\" );\n\n    EnterNumberDialog numberDialog =\n      new EnterNumberDialog( shell, props.getDefaultPreviewSize(), BaseMessages.getString( PKG,\n        \"OrcInputDialog.PreviewSize.DialogTitle\" ), BaseMessages.getString( PKG,\n        \"OrcInputDialog.PreviewSize.DialogMessage\" ) );\n    int previewSize = numberDialog.open();\n\n    if ( previewSize > 0 ) {\n      TransPreviewProgressDialog progressDialog =\n        new TransPreviewProgressDialog( shell, previewMeta, new String[] { wStepname.getText() },\n          new int[] { previewSize } );\n      progressDialog.open();\n\n      Trans trans = progressDialog.getTrans();\n      String loggingText = progressDialog.getLoggingText();\n\n      
if ( !progressDialog.isCancelled() && trans.getResult() != null && trans.getResult().getNrErrors() > 0 ) {\n        EnterTextDialog etd =\n          new EnterTextDialog( shell, BaseMessages.getString( PKG, \"System.Dialog.PreviewError.Title\" ),\n            BaseMessages.getString( PKG, \"System.Dialog.PreviewError.Message\" ), loggingText, true );\n        etd.setReadOnly();\n        etd.open();\n      }\n\n      PreviewRowsDialog prd =\n        new PreviewRowsDialog( shell, transMeta, SWT.NONE, wStepname.getText(), progressDialog\n          .getPreviewRowsMeta( wStepname.getText() ),\n          progressDialog.getPreviewRows( wStepname.getText() ), loggingText );\n      prd.open();\n    }\n  }\n\n  @Override\n  protected int getWidth() {\n    return SHELL_WIDTH;\n  }\n\n  @Override\n  protected int getHeight() {\n    return SHELL_HEIGHT;\n  }\n\n  @Override\n  protected String getStepTitle() {\n    return BaseMessages.getString( PKG, \"OrcInputDialog.Shell.Title\" );\n  }\n\n  @Override\n  protected Listener getPreview() {\n    return e -> doPreview();\n  }\n\n  @Override protected SelectionOperation selectionOperation() {\n    return SelectionOperation.FILE_OR_FOLDER;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/orc/input/OrcInputMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.orc.input;\n\nimport org.pentaho.big.data.kettle.plugins.formats.impl.NamedClusterResolver;\nimport org.pentaho.big.data.kettle.plugins.formats.orc.input.OrcInputMetaBase;\nimport org.pentaho.di.core.annotations.Step;\nimport org.pentaho.di.core.injection.InjectionSupported;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\n\n//keep ID as new because we will have old step with ID OrcInput\n@Step( id = \"OrcInput\", image = \"OI.svg\", name = \"OrcInput.Name\", description = \"OrcInput.Description\",\n    categoryDescription = \"i18n:org.pentaho.di.trans.step:BaseStep.Category.BigData\",\n    i18nPackageName = \"org.pentaho.di.trans.steps.orc\" )\n@InjectionSupported( localizationPrefix = \"OrcInput.Injection.\", groups = { \"FILENAME_LINES\", \"FIELDS\" }, hide = {\n  \"FILEMASK\", \"EXCLUDE_FILEMASK\", \"FILE_REQUIRED\", \"INCLUDE_SUBFOLDERS\", \"FIELD_POSITION\", \"FIELD_LENGTH\",\n  \"FIELD_IGNORE\", \"FIELD_FORMAT\", \"FIELD_PRECISION\", \"FIELD_CURRENCY\",\n  \"FIELD_DECIMAL\", \"FIELD_GROUP\", \"FIELD_REPEAT\", \"FIELD_TRIM_TYPE\", \"FIELD_NULL_STRING\", \"FIELD_IF_NULL\",\n  \"FIELD_NULLABLE\", \"ACCEPT_FILE_NAMES\", \"ACCEPT_FILE_STEP\", \"PASS_THROUGH_FIELDS\", \"ACCEPT_FILE_FIELD\",\n  \"ADD_FILES_TO_RESULT\", \"IGNORE_ERRORS\", \"FILE_ERROR_FIELD\", \"FILE_ERROR_MESSAGE_FIELD\", \"SKIP_BAD_FILES\",\n  
\"WARNING_FILES_TARGET_DIR\", \"WARNING_FILES_EXTENTION\",\n  \"ERROR_FILES_TARGET_DIR\", \"ERROR_FILES_EXTENTION\", \"LINE_NR_FILES_TARGET_DIR\", \"LINE_NR_FILES_EXTENTION\",\n  \"FILE_SHORT_FILE_FIELDNAME\",\n  \"FILE_EXTENSION_FIELDNAME\", \"FILE_PATH_FIELDNAME\", \"FILE_SIZE_FIELDNAME\", \"FILE_HIDDEN_FIELDNAME\",\n  \"FILE_LAST_MODIFICATION_FIELDNAME\",\n  \"FILE_URI_FIELDNAME\", \"FILE_ROOT_URI_FIELDNAME\", \"FIELD_SOURCE_TYPE\"\n} )\npublic class OrcInputMeta extends OrcInputMetaBase {\n\n  private final NamedClusterResolver namedClusterResolver;\n\n  public OrcInputMeta() {\n    this( NamedClusterResolver.getInstance() );\n  }\n\n  public OrcInputMeta( NamedClusterResolver namedClusterResolver ) {\n    this.namedClusterResolver = namedClusterResolver;\n  }\n\n  @Override\n  public StepInterface getStep( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta,\n      Trans trans ) {\n    return new OrcInput( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n  }\n\n  @Override\n  public StepDataInterface getStepData() {\n    return new OrcInputData();\n  }\n\n  public NamedClusterResolver getNamedClusterResolver() {\n    return namedClusterResolver;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/orc/output/OrcOutput.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.orc.output;\n\n\nimport org.apache.orc.CompressionKind;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.output.PvfsFileAliaser;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.di.core.RowMetaAndData;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMeta;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaFactory;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStep;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\nimport org.pentaho.hadoop.shim.api.format.FormatService;\nimport org.pentaho.hadoop.shim.api.format.IPentahoOrcOutputFormat;\n\nimport java.io.IOException;\n\npublic class OrcOutput extends BaseStep implements StepInterface {\n\n  private OrcOutputMeta meta;\n\n  private OrcOutputData data;\n\n  private PvfsFileAliaser pvfsFileAliaser;\n\n  public OrcOutput( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta,\n                    Trans trans ) {\n    super( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n  }\n\n  @Override\n  public synchronized boolean processRow( 
StepMetaInterface smi, StepDataInterface sdi )\n    throws KettleException {\n    try {\n      meta = (OrcOutputMeta) smi;\n      data = (OrcOutputData) sdi;\n\n      if ( data.output == null ) {\n        init();\n      }\n\n      Object[] currentRow = getRow();\n      if ( currentRow != null ) {\n        // create the new output row meta\n        RowMetaInterface outputRMI = new RowMeta();\n        // create data matching the output fields\n        Object[] outputData = new Object[ meta.getOutputFields().size() ];\n        for ( int i = 0; i < meta.getOutputFields().size(); i++ ) {\n          int inputRowIndex = getInputRowMeta().indexOfValue( meta.getOutputFields().get( i ).getPentahoFieldName() );\n          if ( inputRowIndex == -1 ) {\n            throw new KettleException( \"Field name [\" + meta.getOutputFields().get( i ).getPentahoFieldName()\n              + \"] couldn't be found in the input stream!\" );\n          } else {\n            ValueMetaInterface vmi = ValueMetaFactory.cloneValueMeta( getInputRowMeta().getValueMeta( inputRowIndex ) );\n            // add output value meta according to the output fields\n            outputRMI.addValueMeta( i, vmi );\n            // add output data according to the output fields\n            outputData[ i ] = currentRow[ inputRowIndex ];\n          }\n        }\n        RowMetaAndData row = new RowMetaAndData( outputRMI, outputData );\n        data.writer.write( row );\n        putRow( row.getRowMeta(), row.getData() );\n        return true;\n      } else {\n        // no more input to be expected...\n        closeWriter();\n        pvfsFileAliaser.copyFileToFinalDestination();\n        pvfsFileAliaser.deleteTempFileAndFolder();\n        setOutputDone();\n        return false;\n      }\n    } catch ( IllegalStateException e ) {\n      getLogChannel().logError( e.getMessage() );\n      setErrors( 1 );\n      pvfsFileAliaser.deleteTempFileAndFolder();\n      setOutputDone();\n      return false;\n    } catch ( KettleException ex ) {\n      
throw ex;\n    } catch ( Exception ex ) {\n      throw new KettleException( ex );\n    }\n  }\n\n  public void init() throws Exception {\n    FormatService formatService;\n    try {\n      formatService = meta.getNamedClusterResolver().getNamedClusterServiceLocator()\n        .getService( getNamedCluster(), FormatService.class );\n    } catch ( ClusterInitializationException e ) {\n      throw new KettleException( \"Can't get FormatService from shim\", e );\n    }\n\n    if ( meta.getFilename() == null ) {\n      throw new KettleException( \"No output files defined\" );\n    }\n\n    data.output = formatService.createOutputFormat( IPentahoOrcOutputFormat.class, getNamedCluster() );\n\n    String outputFileName = environmentSubstitute( meta.constructOutputFilename() );\n    pvfsFileAliaser = new PvfsFileAliaser( getTransMeta().getBowl(), outputFileName, getTransMeta(), data.output,\n      meta.isOverrideOutput(), getLogChannel() );\n\n    data.output.setOutputFile( pvfsFileAliaser.generateAlias(), meta.isOverrideOutput() );\n    data.output.setFields( meta.getOutputFields() );\n\n    CompressionKind compression;\n    try {\n      compression = CompressionKind.valueOf( meta.getCompressionType().toUpperCase() );\n    } catch ( Exception ex ) {\n      compression = CompressionKind.NONE;\n    }\n    data.output.setCompression( compression );\n    if ( compression != CompressionKind.NONE ) {\n      data.output.setCompressSize( meta.getCompressSize() );\n    }\n    data.output.setRowIndexStride( meta.getRowsBetweenEntries() );\n    data.output.setStripeSize( meta.getStripeSize() );\n    data.writer = data.output.createRecordWriter();\n  }\n\n  private NamedCluster getNamedCluster() {\n    return meta.getNamedClusterResolver().resolveNamedCluster( environmentSubstitute( meta.getFilename() ) );\n  }\n\n  public void closeWriter() throws KettleException {\n    try {\n      data.writer.close();\n    } catch ( IOException e ) {\n      throw new KettleException( e );\n    }\n    
data.output = null;\n  }\n\n  @Override\n  public boolean init( StepMetaInterface smi, StepDataInterface sdi ) {\n    meta = (OrcOutputMeta) smi;\n    data = (OrcOutputData) sdi;\n    return super.init( smi, sdi );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/orc/output/OrcOutputData.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.orc.output;\n\nimport org.pentaho.di.trans.step.BaseStepData;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.hadoop.shim.api.format.IPentahoOrcOutputFormat;\nimport org.pentaho.hadoop.shim.api.format.IPentahoOutputFormat.IPentahoRecordWriter;\n\npublic class OrcOutputData extends BaseStepData implements StepDataInterface {\n\n  public IPentahoOrcOutputFormat output;\n  public IPentahoRecordWriter writer;\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/orc/output/OrcOutputDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.orc.output;\n\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.CTabFolder;\nimport org.eclipse.swt.custom.CTabItem;\nimport org.eclipse.jface.dialogs.MessageDialog;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Control;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.TableItem;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.NullableValuesEnum;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.orc.BaseOrcStepDialog;\nimport org.pentaho.big.data.kettle.plugins.formats.orc.OrcTypeConverter;\nimport org.pentaho.big.data.kettle.plugins.formats.orc.output.OrcOutputField;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.Props;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepDialogInterface;\nimport 
org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.di.ui.core.events.dialog.SelectionOperation;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.core.widget.ColumnInfo;\nimport org.pentaho.di.ui.core.widget.ColumnsResizer;\nimport org.pentaho.di.ui.core.widget.ComboVar;\nimport org.pentaho.di.ui.core.widget.TableView;\nimport org.eclipse.swt.widgets.Table;\n\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.trans.step.TableItemInsertListener;\nimport org.pentaho.hadoop.shim.api.format.OrcSpec;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\nimport java.util.function.BiConsumer;\n\n@PluginDialog( id = \"OrcOutput\", image = \"OO.svg\", pluginType = PluginDialog.PluginType.STEP,\n        documentationUrl = \"pdi-transformation-steps-reference-overview/orc-output\" )\npublic class OrcOutputDialog extends BaseOrcStepDialog<OrcOutputMeta> implements StepDialogInterface {\n\n  private static final Class<?> PKG = OrcOutputMeta.class;\n\n  private static final int SHELL_WIDTH = 698;\n  private static final int SHELL_HEIGHT = 554;\n\n  private ComboVar wCompression;\n  private TextVar wStripeSize;\n  private TextVar wCompressSize;\n  private Button wInlineIndexes;\n  private TextVar wRowsBetweenEntries;\n  private Button wDateInFileName;\n  private Button wTimeInFileName;\n  private Button wOverwriteExistingFile;\n  private Button wSpecifyDateTimeFormat;\n  private ComboVar wDateTimeFormat;\n  private int startingRowsBetweenEntries = OrcOutputMeta.DEFAULT_ROWS_BETWEEN_ENTRIES;\n\n  private TableView wOutputFields;\n\n  public OrcOutputDialog( Shell parent, Object orcOutputMeta, TransMeta transMeta, String sname ) {\n    this( parent, (OrcOutputMeta) orcOutputMeta, transMeta, sname );\n  }\n\n  public OrcOutputDialog( Shell parent, OrcOutputMeta orcOutputMeta, TransMeta transMeta, String sname ) {\n    super( parent, orcOutputMeta, transMeta, sname );\n    this.meta = 
orcOutputMeta;\n  }\n\n  @Override\n  protected void createUI()  {\n    Control prev = createHeader();\n\n    //main fields\n    prev = addFileWidgets( prev );\n\n    createFooter( shell );\n\n    Label separator = new Label( shell, SWT.HORIZONTAL | SWT.SEPARATOR );\n    FormData fdSpacer = new FormData();\n    fdSpacer.height = 2;\n    fdSpacer.left = new FormAttachment( 0, 0 );\n    fdSpacer.bottom = new FormAttachment( wCancel, -MARGIN );\n    fdSpacer.right = new FormAttachment( 100, 0 );\n    separator.setLayoutData( fdSpacer );\n\n    Composite tabContainer;\n    tabContainer = new Composite( shell, SWT.NONE );\n    tabContainer.setLayout( new FormLayout() );\n    new FD( tabContainer ).left( 0, 0 ).top( prev, 0 ).right( 100, 0 ).bottom( separator, -MARGIN ).apply();\n\n    wOverwriteExistingFile = new Button( tabContainer, SWT.CHECK );\n    wOverwriteExistingFile.setText( BaseMessages.getString( PKG, \"OrcOutputDialog.OverwriteFile.Label\" ) );\n    props.setLook( wOverwriteExistingFile );\n    new FD( wOverwriteExistingFile ).left( 0, 0 ).top( tabContainer, FIELDS_SEP ).apply();\n    wOverwriteExistingFile.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        meta.setChanged();\n      }\n    } );\n\n    CTabFolder wTabFolder = new CTabFolder( tabContainer, SWT.BORDER );\n    props.setLook( wTabFolder, Props.WIDGET_STYLE_TAB );\n    wTabFolder.setSimple( false );\n\n    addFieldsTab( wTabFolder );\n    addOptionsTab( wTabFolder );\n\n    new FD( wTabFolder ).left( 0, 0 ).top( wOverwriteExistingFile, MARGIN ).right( 100, 0 ).bottom( 100, 0 ).apply();\n    wTabFolder.setSelection( 0 );\n  }\n\n  @Override\n  protected String getStepTitle() {\n    return BaseMessages.getString( PKG, \"OrcOutputDialog.Shell.Title\" );\n  }\n\n  private void addFieldsTab( CTabFolder wTabFolder ) {\n    CTabItem wTab = new CTabItem( wTabFolder, SWT.NONE );\n    wTab.setText( BaseMessages.getString( PKG, 
\"OrcOutputDialog.FieldsTab.TabTitle\" ) );\n\n    Composite wComp = new Composite( wTabFolder, SWT.NONE );\n    props.setLook( wComp );\n\n    FormLayout layout = new FormLayout();\n    layout.marginWidth = MARGIN;\n    layout.marginHeight = MARGIN;\n    wComp.setLayout( layout );\n\n    lsGet = e -> getFields();\n\n    Button wGetFields = new Button( wComp, SWT.PUSH );\n    wGetFields.setText( BaseMessages.getString( PKG, \"OrcOutputDialog.Fields.Get\" ) );\n    props.setLook( wGetFields );\n    new FD( wGetFields ).bottom( 100, 0 ).right( 100, 0 ).apply();\n\n    wGetFields.addListener( SWT.Selection, lsGet );\n\n    ColumnInfo[] parameterColumns = new ColumnInfo[]{\n      new ColumnInfo( BaseMessages.getString( PKG, \"OrcOutputDialog.Fields.column.Path\" ),\n        ColumnInfo.COLUMN_TYPE_TEXT, false, false ),\n      new ColumnInfo( BaseMessages.getString( PKG, \"OrcOutputDialog.Fields.column.Name\" ),\n        ColumnInfo.COLUMN_TYPE_TEXT, false, false ),\n      new ColumnInfo( BaseMessages.getString( PKG, \"OrcOutputDialog.Fields.column.Type\" ),\n        ColumnInfo.COLUMN_TYPE_CCOMBO, OrcSpec.DataType.getDisplayableTypeNames() ),\n      new ColumnInfo( BaseMessages.getString( PKG, \"OrcOutputDialog.Fields.column.Precision\" ),\n        ColumnInfo.COLUMN_TYPE_TEXT, false, false ),\n      new ColumnInfo( BaseMessages.getString( PKG, \"OrcOutputDialog.Fields.column.Scale\" ),\n        ColumnInfo.COLUMN_TYPE_TEXT, false, false ),\n      new ColumnInfo( BaseMessages.getString( PKG, \"OrcOutputDialog.Fields.column.Default\" ),\n        ColumnInfo.COLUMN_TYPE_TEXT, false, false ),\n      new ColumnInfo( BaseMessages.getString( PKG, \"OrcOutputDialog.Fields.column.Null\" ),\n        ColumnInfo.COLUMN_TYPE_CCOMBO, NullableValuesEnum.getValuesArr(), true )};\n    parameterColumns[0].setAutoResize( false );\n    parameterColumns[1].setUsingVariables( true );\n    wOutputFields =\n      new TableView( transMeta, wComp, SWT.FULL_SELECTION | SWT.SINGLE | SWT.BORDER | 
SWT.NO_SCROLL | SWT.V_SCROLL,\n        parameterColumns, 7, lsMod, props );\n    ColumnsResizer resizer = new ColumnsResizer( 0, 30, 20, 10, 10, 10, 15, 5 );\n    wOutputFields.getTable().addListener( SWT.Resize, resizer );\n\n    props.setLook( wOutputFields );\n    new FD( wOutputFields ).left( 0, 0 ).right( 100, 0 ).top( wComp, 0 ).bottom( wGetFields, -FIELDS_SEP ).apply();\n\n    wOutputFields.setRowNums();\n    wOutputFields.optWidth( true );\n\n    new FD( wComp ).left( 0, 0 ).top( 0, 0 ).right( 100, 0 ).bottom( 100, 0 ).apply();\n\n    wTab.setControl( wComp );\n    for ( ColumnInfo col : parameterColumns ) {\n      col.setAutoResize( false );\n    }\n    resizer.addColumnResizeListeners( wOutputFields.getTable() );\n    setTruncatedColumn( wOutputFields.getTable(), 1 );\n    if ( !Const.isWindows() ) {\n      addColumnTooltip( wOutputFields.getTable(), 1 );\n    }\n  }\n\n  private void addOptionsTab( CTabFolder wTabFolder ) {\n    CTabItem wTab = new CTabItem( wTabFolder, SWT.NONE );\n    wTab.setText( BaseMessages.getString( PKG, \"OrcOutputDialog.Options.TabTitle\" ) );\n    Composite wGrid = new Composite( wTabFolder, SWT.NONE );\n    wTab.setControl( wGrid );\n    props.setLook( wGrid );\n\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginHeight = MARGIN;\n    formLayout.marginWidth = MARGIN;\n\n    wGrid.setLayout( formLayout );\n\n    Label wLabel = createLabel( wGrid, \"OrcOutputDialog.Options.Compression\" );\n    FormData formData = new FormData();\n    formData.top = new FormAttachment( 0, 0 );\n    wLabel.setLayoutData( formData );\n\n    wCompression = createComboVar( wGrid, meta.getCompressionTypes() );\n    formData = new FormData();\n    formData.top = new FormAttachment( wLabel, 5 );\n    formData.width = FIELD_SMALL + VAR_EXTRA_WIDTH;\n    wCompression.setLayoutData( formData );\n    props.setLook( wCompression );\n\n    wLabel = createLabel( wGrid, \"OrcOutputDialog.Options.StripeSize\" );\n    formData = new 
FormData();\n    formData.top = new FormAttachment( wCompression, 10 );\n    wLabel.setLayoutData( formData );\n\n    wStripeSize = new TextVar( transMeta, wGrid, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wStripeSize );\n    formData = new FormData();\n    formData.top = new FormAttachment( wLabel, 5 );\n    formData.width = FIELD_SMALL + VAR_EXTRA_WIDTH;\n    wStripeSize.setLayoutData( formData );\n    setIntegerOnly( wStripeSize );\n    wStripeSize.addModifyListener( lsMod );\n\n    wLabel = createLabel( wGrid, \"OrcOutputDialog.Options.CompressSize\" );\n    formData = new FormData();\n    formData.top = new FormAttachment( wStripeSize, 10 );\n    wLabel.setLayoutData( formData );\n\n    wCompressSize = new TextVar( transMeta, wGrid, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wCompressSize );\n    formData = new FormData();\n    formData.top = new FormAttachment( wLabel, 5 );\n    formData.width = FIELD_SMALL + VAR_EXTRA_WIDTH;\n    wCompressSize.setLayoutData( formData );\n    setIntegerOnly( wCompressSize );\n    wCompressSize.addModifyListener( lsMod );\n\n    wInlineIndexes = new Button( wGrid, SWT.CHECK );\n    props.setLook( wInlineIndexes );\n    wInlineIndexes.setText( BaseMessages.getString( PKG, \"OrcOutputDialog.Options.InlineIndexes\" ) );\n    formData = new FormData();\n    formData.top = new FormAttachment( 0, 0 );\n    formData.left = new FormAttachment( wCompressSize, 50 );\n    wInlineIndexes.setLayoutData( formData );\n    wInlineIndexes.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        meta.setChanged();\n        boolean isSelected = wInlineIndexes.getSelection();\n        if ( isSelected ) {\n          wRowsBetweenEntries.setEnabled( true );\n          wRowsBetweenEntries.setText( Integer.toString( startingRowsBetweenEntries ) );\n        } else {\n          
wRowsBetweenEntries.setEnabled( false );\n          wRowsBetweenEntries.setText( \"\" );\n        }\n      }\n    } );\n\n    wLabel = createLabel( wGrid, \"OrcOutputDialog.Options.RowsBetweenEntries\" );\n    formData = new FormData();\n    formData.top = new FormAttachment( wInlineIndexes, 10 );\n    formData.left = new FormAttachment( wCompressSize, 70 );\n    wLabel.setLayoutData( formData );\n\n    wRowsBetweenEntries = new TextVar( transMeta, wGrid, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wRowsBetweenEntries );\n    formData = new FormData();\n    formData.top = new FormAttachment( wLabel, 5 );\n    formData.left = new FormAttachment( wCompressSize, 70 );\n    formData.width = FIELD_SMALL + VAR_EXTRA_WIDTH;\n    wRowsBetweenEntries.setLayoutData( formData );\n    setIntegerOnly( wRowsBetweenEntries );\n    wRowsBetweenEntries.addModifyListener( lsMod );\n\n    wDateInFileName = new Button( wGrid, SWT.CHECK );\n    props.setLook( wDateInFileName );\n    wDateInFileName.setText( BaseMessages.getString( PKG, \"OrcOutputDialog.Options.DateInFileName\" ) );\n    formData = new FormData();\n    formData.top = new FormAttachment( wRowsBetweenEntries, 10 );\n    formData.left = new FormAttachment( wCompressSize, 50 );\n    wDateInFileName.setLayoutData( formData );\n    wDateInFileName.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        meta.setChanged();\n        boolean isSelected = wDateInFileName.getSelection();\n        if ( isSelected ) {\n          wSpecifyDateTimeFormat.setSelection( false );\n          wDateTimeFormat.setText( \"\" );\n          wDateTimeFormat.setEnabled( false );\n        }\n      }\n    } );\n\n\n    wTimeInFileName = new Button( wGrid, SWT.CHECK );\n    props.setLook( wTimeInFileName );\n    wTimeInFileName.setText( BaseMessages.getString( PKG, \"OrcOutputDialog.Options.TimeInFileName\" ) );\n    formData = new FormData();\n    formData.top = 
new FormAttachment( wDateInFileName, 10 );\n    formData.left = new FormAttachment( wCompressSize, 50 );\n    wTimeInFileName.setLayoutData( formData );\n    wTimeInFileName.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        meta.setChanged();\n        boolean isSelected = wTimeInFileName.getSelection();\n        if ( isSelected ) {\n          wSpecifyDateTimeFormat.setSelection( false );\n          wDateTimeFormat.setText( \"\" );\n          wDateTimeFormat.setEnabled( false );\n        }\n      }\n    } );\n\n\n    wSpecifyDateTimeFormat = new Button( wGrid, SWT.CHECK );\n    wSpecifyDateTimeFormat.setText( BaseMessages.getString( PKG, \"OrcOutputDialog.Options.SpecifyDateTimeFormat\" ) );\n    props.setLook( wSpecifyDateTimeFormat );\n    formData = new FormData();\n    formData.top = new FormAttachment( wTimeInFileName, 10 );\n    formData.left = new FormAttachment( wCompressSize, 50 );\n    wSpecifyDateTimeFormat.setLayoutData( formData );\n    wSpecifyDateTimeFormat.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        meta.setChanged();\n        boolean isSelected = wSpecifyDateTimeFormat.getSelection();\n        wDateTimeFormat.setEnabled( isSelected );\n        if ( !isSelected ) {\n          wDateTimeFormat.setText( \"\" );\n          wTimeInFileName.setEnabled( true );\n          wDateInFileName.setEnabled( true );\n        } else {\n          wTimeInFileName.setSelection( false );\n          wDateInFileName.setSelection( false );\n          wTimeInFileName.setEnabled( false );\n          wDateInFileName.setEnabled( false );\n        }\n      }\n    } );\n\n    String[] dates = Const.getDateFormats();\n    dates =\n      Arrays.stream( dates ).filter( d -> d.indexOf( '/' ) < 0 && d.indexOf( '\\\\' ) < 0 && d.indexOf( ':' ) < 0 )\n        .toArray( String[]::new ); // remove formats with slashes and 
colons\n    wDateTimeFormat = createComboVar( wGrid, dates );\n    props.setLook( wDateTimeFormat );\n    formData = new FormData();\n    formData.top = new FormAttachment( wSpecifyDateTimeFormat, 5 );\n    formData.left = new FormAttachment( wCompressSize, 70 );\n    wDateTimeFormat.setLayoutData( formData );\n  }\n\n  protected ComboVar createComboVar( Composite container, String[] options ) {\n    ComboVar combo = new ComboVar( transMeta, container, SWT.LEFT | SWT.BORDER );\n    combo.setItems( options );\n    combo.addModifyListener( lsMod );\n    return combo;\n  }\n\n  protected String getComboVarValue( ComboVar combo ) {\n    String text = combo.getText();\n    String data = (String) combo.getData( text );\n    return data != null ? data : text;\n  }\n\n  private Label createLabel( Composite container, String labelRef ) {\n    Label label = new Label( container, SWT.NONE );\n    label.setText( BaseMessages.getString( PKG, labelRef ) );\n    props.setLook( label );\n    return label;\n  }\n\n  @Override\n  protected void ok() {\n    if ( Utils.isEmpty( wStepname.getText() ) ) {\n      return;\n    }\n    stepname = wStepname.getText();\n\n    List<String> validationErrorFields = validateOutputFields( wOutputFields );\n\n    if ( validationErrorFields != null && !validationErrorFields.isEmpty() ) {\n      MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n      mb.setText( BaseMessages.getString( PKG, \"OrcOutput.MissingDefaultFields.Title\" ) );\n      mb.setMessage( BaseMessages.getString( PKG, \"OrcOutput.MissingDefaultFields.Msg\" ) );\n      mb.open();\n      return;\n    }\n\n    getInfo( meta, false );\n    dispose();\n  }\n\n  /**\n   * Read the data from the meta object and show it in this dialog.\n   */\n  protected void getData( OrcOutputMeta meta ) {\n    if ( meta.getFilename() != null ) {\n      wPath.setText( meta.getFilename() );\n    }\n    wOverwriteExistingFile.setSelection( meta.isOverrideOutput() );\n    populateFieldsUI( 
meta, wOutputFields );\n    wCompression.setText( meta.getCompressionType() );\n    wCompressSize.setText( meta.getCompressSize() > 0 ? Integer.toString( meta.getCompressSize() ) : Integer.toString( OrcOutputMeta.DEFAULT_COMPRESS_SIZE ) );\n\n    int rowsBetweenEntries = meta.getRowsBetweenEntries();\n    if ( rowsBetweenEntries != 0 ) {\n      startingRowsBetweenEntries = rowsBetweenEntries;\n      wInlineIndexes.setSelection( true );\n      wRowsBetweenEntries.setText( Integer.toString( rowsBetweenEntries ) );\n      wRowsBetweenEntries.setEnabled( true );\n    } else {\n      startingRowsBetweenEntries = OrcOutputMeta.DEFAULT_ROWS_BETWEEN_ENTRIES;\n      wInlineIndexes.setSelection( false );\n      wRowsBetweenEntries.setText( \"\" );\n      wRowsBetweenEntries.setEnabled( false );\n    }\n\n    wStripeSize.setText( Integer.toString( meta.getStripeSize() ) );\n\n    String dateTimeFormat = coalesce( meta.getDateTimeFormat() );\n    if ( !dateTimeFormat.isEmpty() ) {\n      wTimeInFileName.setSelection( false );\n      wDateInFileName.setSelection( false );\n      wTimeInFileName.setEnabled( false );\n      wDateInFileName.setEnabled( false );\n      wSpecifyDateTimeFormat.setSelection( true );\n      wDateTimeFormat.setText( dateTimeFormat );\n      wDateTimeFormat.setEnabled( true );\n    } else {\n      wTimeInFileName.setEnabled( true );\n      wDateInFileName.setEnabled( true );\n      wTimeInFileName.setSelection( meta.isTimeInFileName() );\n      wDateInFileName.setSelection( meta.isDateInFileName() );\n      wSpecifyDateTimeFormat.setSelection( false );\n      wDateTimeFormat.setEnabled( false );\n      wDateTimeFormat.setText( \"\" );\n    }\n\n  }\n\n  // ui -> meta\n  @Override\n  protected void getInfo( OrcOutputMeta meta, boolean preview ) {\n    meta.setFilename( wPath.getText() );\n    meta.setOverrideOutput( wOverwriteExistingFile.getSelection() );\n    meta.setCompressionType( wCompression.getText() );\n    int compressSize = ( 
wCompressSize.getText().length() > 0 ) ? Integer.parseInt( wCompressSize.getText() ) : OrcOutputMeta.DEFAULT_COMPRESS_SIZE;\n    meta.setCompressSize( compressSize );\n    int stripeSize = ( wStripeSize.getText().length() > 0 ) ? Integer.parseInt( wStripeSize.getText() ) : OrcOutputMeta.DEFAULT_STRIPE_SIZE;\n    meta.setStripeSize( stripeSize );\n    int rowsBetweenEntries = ( wRowsBetweenEntries.getText().length() > 0 ) ? Integer.parseInt( wRowsBetweenEntries.getText() ) : 0;\n    meta.setRowsBetweenEntries( rowsBetweenEntries );\n    if ( wSpecifyDateTimeFormat.getSelection() ) {\n      meta.setTimeInFileName( false );\n      meta.setDateInFileName( false );\n      meta.setDateTimeFormat( wDateTimeFormat.getText().trim() );\n    } else {\n      meta.setTimeInFileName( wTimeInFileName.getSelection() );\n      meta.setDateInFileName( wDateInFileName.getSelection() );\n      meta.setDateTimeFormat( \"\" );\n    }\n    saveOutputFields( wOutputFields, meta );\n  }\n\n  private void saveOutputFields( TableView wFields, OrcOutputMeta meta ) {\n    int nrFields = wFields.nrNonEmpty();\n\n    List<OrcOutputField> outputFields = new ArrayList<>();\n    for ( int i = 0; i < nrFields; i++ ) {\n      TableItem item = wFields.getNonEmpty( i );\n\n      int j = 1;\n      OrcOutputField field = new OrcOutputField();\n      field.setFormatFieldName( item.getText( j++ ) );\n      field.setPentahoFieldName( item.getText( j++ ) );\n      field.setFormatType( item.getText( j++ ) );\n\n\n      if ( field.getOrcType().equals( OrcSpec.DataType.DECIMAL ) ) {\n        field.setPrecision( item.getText( j++ ) );\n        field.setScale( item.getText( j++ ) );\n      } else if ( field.getOrcType().equals( OrcSpec.DataType.FLOAT ) || field.getOrcType().equals( OrcSpec.DataType.DOUBLE ) ) {\n        j++;\n        field.setScale( item.getText( j++ ) );\n      } else {\n        j += 2;\n      }\n\n      field.setDefaultValue( item.getText( j++ ) );\n      field.setAllowNull( getNullableValue( 
item.getText( j++ ) ) );\n\n      outputFields.add( field );\n    }\n    meta.setOutputFields( outputFields );\n  }\n\n  private List<String> validateOutputFields( TableView wFields ) {\n    int nrFields = wFields.nrNonEmpty();\n    List<String> validationErrorFields = new ArrayList<>();\n\n    for ( int i = 0; i < nrFields; i++ ) {\n      TableItem item = wFields.getNonEmpty( i );\n\n      int j = 1;\n\n      String path = item.getText( j++ );\n      String name = item.getText( j++ );\n      String type = item.getText( j++ );\n      String precision = item.getText( j++ );\n      if ( precision == null || precision.trim().isEmpty() ) {\n        item.setText( 4, Integer.toString( OrcSpec.DEFAULT_DECIMAL_PRECISION ) );\n      }\n\n      String scale = item.getText( j++ );\n      if ( scale == null || scale.trim().isEmpty() ) {\n        item.setText( 5, Integer.toString( OrcSpec.DEFAULT_DECIMAL_SCALE ) );\n      }\n\n      String defaultValue = item.getText( j++ );\n      String nullString = getNullableValue( item.getText( j++ ) );\n\n      if ( nullString.equals( NullableValuesEnum.NO.getValue() ) && ( defaultValue == null || defaultValue.trim().isEmpty() ) ) {\n        validationErrorFields.add( name );\n      }\n    }\n    return validationErrorFields;\n  }\n\n  private String getNullableValue( String nullString ) {\n    return ( nullString != null && !nullString.isEmpty() ) ? 
nullString : NullableValuesEnum.getDefaultValue().getValue();\n  }\n\n  private void populateFieldsUI( OrcOutputMeta meta, TableView wOutputFields ) {\n    populateFieldsUI( meta.getOutputFields(), wOutputFields, ( field, item ) -> {\n      int i = 1;\n      item.setText( i++, coalesce( field.getFormatFieldName() ) );\n      item.setText( i++, coalesce( field.getPentahoFieldName() ) );\n      item.setText( i++, coalesce( field.getOrcType().getName() ) );\n      if ( field.getOrcType().equals( OrcSpec.DataType.DECIMAL ) ) {\n        item.setText( i++, coalesce( String.valueOf( field.getPrecision() ) ) );\n        item.setText( i++, coalesce( String.valueOf( field.getScale() ) ) );\n      } else if ( field.getOrcType().equals( OrcSpec.DataType.FLOAT ) || field.getOrcType().equals( OrcSpec.DataType.DOUBLE ) ) {\n        i++;\n        item.setText( i++, field.getScale() > 0 ? String.valueOf( field.getScale() ) : \"\" );\n      } else {\n        i += 2;\n      }\n      item.setText( i++, coalesce( field.getDefaultValue() ) );\n      item.setText( i++, field.getAllowNull() ? NullableValuesEnum.YES.getValue() : NullableValuesEnum.NO.getValue() );\n    } );\n  }\n\n  private String coalesce( String value ) {\n    return value == null ? 
\"\" : value;\n  }\n\n  private void populateFieldsUI( List<OrcOutputField> fields, TableView wFields,\n                                 BiConsumer<OrcOutputField, TableItem> converter ) {\n    int nrFields = fields.size();\n    for ( int i = 0; i < nrFields; i++ ) {\n      TableItem item = null;\n      if ( i < wFields.table.getItemCount() ) {\n        item = wFields.table.getItem( i );\n      } else {\n        item = new TableItem( wFields.table, SWT.NONE );\n      }\n      converter.accept( fields.get( i ), item );\n    }\n  }\n\n  protected void getFields() {\n    try {\n      RowMetaInterface r = transMeta.getPrevStepFields( stepname );\n      if ( r != null ) {\n        TableItemInsertListener listener = ( tableItem, v ) -> true;\n        getFieldsFromPreviousStep( r, wOutputFields, 1, new int[]{1, 2}, new int[]{3}, 4,\n          5, true, listener );\n      }\n    } catch ( KettleException ke ) {\n      new ErrorDialog( shell, BaseMessages.getString( PKG, \"System.Dialog.GetFieldsFailed.Title\" ), BaseMessages\n        .getString( PKG, \"System.Dialog.GetFieldsFailed.Message\" ), ke );\n    }\n  }\n\n  private MessageDialog getFieldsChoiceDialog( Shell shell, int existingFields, int newFields ) {\n    MessageDialog messageDialog =\n      new MessageDialog( shell,\n        BaseMessages.getString( PKG, \"OrcOutputDialog.GetFieldsChoice.Title\" ), // \"Warning!\"\n        null,\n        BaseMessages.getString( PKG, \"OrcOutputDialog.GetFieldsChoice.Message\", \"\" + existingFields, \"\" + newFields ),\n        MessageDialog.WARNING, new String[] {\n        BaseMessages.getString( PKG, \"OrcOutputDialog.AddNew\" ),\n        BaseMessages.getString( PKG, \"OrcOutputDialog.Add\" ),\n        BaseMessages.getString( PKG, \"OrcOutputDialog.ClearAndAdd\" ),\n        BaseMessages.getString( PKG, \"OrcOutputDialog.Cancel\" ), }, 0 );\n    MessageDialog.setDefaultImage( GUIResource.getInstance().getImageSpoon() );\n    return messageDialog;\n  }\n\n  private void 
getFieldsFromPreviousStep( RowMetaInterface row, TableView tableView, int keyColumn,\n                                          int[] nameColumn, int[] dataTypeColumn, int lengthColumn,\n                                          int precisionColumn, boolean optimizeWidth,\n                                          TableItemInsertListener listener ) {\n    if ( row == null || row.size() == 0 ) {\n      return; // nothing to do\n    }\n\n    Table table = tableView.table;\n\n    // get a list of all the non-empty keys (names)\n    //\n    List<String> keys = new ArrayList<>();\n    for ( int i = 0; i < table.getItemCount(); i++ ) {\n      TableItem tableItem = table.getItem( i );\n      String key = tableItem.getText( keyColumn );\n      if ( !Utils.isEmpty( key ) && keys.indexOf( key ) < 0 ) {\n        keys.add( key );\n      }\n    }\n\n    int choice = 0;\n\n    if ( !keys.isEmpty() ) {\n      // Ask what we should do with the existing data in the step.\n      //\n      MessageDialog getFieldsChoiceDialog = getFieldsChoiceDialog( tableView.getShell(), keys.size(), row.size() );\n\n      int idx = getFieldsChoiceDialog.open();\n      choice = idx & 0xFF;\n    }\n\n    if ( choice == 3 || choice == 255 ) {\n      return; // Cancel clicked\n    }\n\n    if ( choice == 2 ) {\n      tableView.clearAll( false );\n    }\n\n    for ( int i = 0; i < row.size(); i++ ) {\n      ValueMetaInterface v = row.getValueMeta( i );\n\n      boolean add = true;\n\n      // hang on, see if it's not yet in the table view\n      if ( choice == 0 && keys.indexOf( v.getName() ) >= 0 ) {\n        add = false;\n      }\n\n      if ( add ) {\n        TableItem tableItem = new TableItem( table, SWT.NONE );\n\n        for ( int c = 0; c < nameColumn.length; c++ ) {\n          tableItem.setText( nameColumn[ c ], Const.NVL( v.getName(), \"\" ) );\n        }\n\n        String orcTypeName = OrcTypeConverter.convertToOrcType( v.getType() );\n        if ( dataTypeColumn != null ) {\n          for ( 
int c = 0; c < dataTypeColumn.length; c++ ) {\n            tableItem.setText( dataTypeColumn[ c ], orcTypeName );\n          }\n        }\n\n        if ( orcTypeName.equals( OrcSpec.DataType.DECIMAL.getName() ) ) {\n          if ( lengthColumn > 0 && v.getLength() > 0 ) {\n            tableItem.setText( lengthColumn, Integer.toString( v.getLength() ) );\n          } else {\n            // Set the default precision\n            tableItem.setText( lengthColumn, Integer.toString( OrcSpec.DEFAULT_DECIMAL_PRECISION ) );\n          }\n\n          if ( precisionColumn > 0 && v.getPrecision() >= 0 ) {\n            tableItem.setText( precisionColumn, Integer.toString( v.getPrecision() ) );\n          } else {\n            // Set the default scale\n            tableItem.setText( precisionColumn, Integer.toString( OrcSpec.DEFAULT_DECIMAL_SCALE ) );\n          }\n        } else if ( orcTypeName.equals( OrcSpec.DataType.FLOAT.getName() ) || orcTypeName.equals( OrcSpec.DataType.DOUBLE.getName() ) ) {\n          if ( precisionColumn > 0 && v.getPrecision() > 0 ) {\n            tableItem.setText( precisionColumn, Integer.toString( v.getPrecision() ) );\n          }\n        }\n\n        if ( listener != null && !listener.tableItemInserted( tableItem, v ) ) {\n          tableItem.dispose(); // remove it again\n        }\n      }\n    }\n    tableView.removeEmptyRows();\n    tableView.setRowNums();\n    if ( optimizeWidth ) {\n      tableView.optWidth( true );\n    }\n  }\n\n  @Override\n  protected int getWidth() {\n    return SHELL_WIDTH;\n  }\n\n  @Override\n  protected int getHeight() {\n    return SHELL_HEIGHT;\n  }\n\n  @Override\n  protected Listener getPreview() {\n    return null;\n  }\n\n  @Override protected SelectionOperation selectionOperation() {\n    return SelectionOperation.SAVE_TO_FILE_FOLDER;\n  }\n}\n\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/orc/output/OrcOutputMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.orc.output;\n\nimport org.pentaho.big.data.kettle.plugins.formats.impl.NamedClusterResolver;\nimport org.pentaho.big.data.kettle.plugins.formats.orc.output.OrcOutputMetaBase;\nimport org.pentaho.di.core.annotations.Step;\nimport org.pentaho.di.core.injection.InjectionSupported;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\n\n@Step( id = \"OrcOutput\", image = \"OO.svg\", name = \"OrcOutput.Name\", description = \"OrcOutput.Description\",\n  categoryDescription = \"i18n:org.pentaho.di.trans.step:BaseStep.Category.BigData\",\n  i18nPackageName = \"org.pentaho.di.trans.steps.orc\" )\n@InjectionSupported( localizationPrefix = \"OrcOutput.Injection.\", groups = {\"FIELDS\"} )\npublic class OrcOutputMeta extends OrcOutputMetaBase {\n\n  private final NamedClusterResolver namedClusterResolver;\n\n  public OrcOutputMeta() {\n    this( NamedClusterResolver.getInstance() );\n  }\n  public OrcOutputMeta( NamedClusterResolver namedClusterResolver ) {\n    this.namedClusterResolver = namedClusterResolver;\n  }\n\n  @Override\n  public StepInterface getStep( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta,\n                                Trans trans ) {\n    return new OrcOutput( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n  }\n\n  @Override\n  public StepDataInterface getStepData() 
{\n    return new OrcOutputData();\n  }\n\n  public NamedClusterResolver getNamedClusterResolver() {\n    return namedClusterResolver;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/output/PvfsFileAliaser.java",
"content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.output;\n\nimport org.apache.commons.io.IOUtils;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.vfs.AliasedFileObject;\nimport org.pentaho.di.core.vfs.IKettleVFS;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.hadoop.shim.api.format.IPvfsAliasGenerator;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.nio.file.FileAlreadyExistsException;\n\n/**\n * Logic to use a temporary file for output and then copy that file to some VFS/PVFS scheme that wasn't originally\n * supported for the output content.\n */\npublic class PvfsFileAliaser {\n  private String finalFilePath;\n\n  private String temporaryFilePath;\n\n  private VariableSpace variableSpace;\n\n  private IPvfsAliasGenerator aliasGenerator;\n\n  private boolean isOverwriteOutput;\n\n  private LogChannelInterface log;\n\n  private IKettleVFS ikettleVFS;\n\n  public PvfsFileAliaser( Bowl bowl, String finalFilePath, VariableSpace variableSpace,\n                          IPvfsAliasGenerator aliasGenerator, boolean isOverwriteOutput, LogChannelInterface log ) {\n    this.ikettleVFS = KettleVFS.getInstance( bowl );\n    this.finalFilePath = finalFilePath;\n    this.variableSpace = variableSpace;\n    
this.aliasGenerator = aliasGenerator;\n    this.isOverwriteOutput = isOverwriteOutput;\n    this.log = log;\n  }\n\n  public String generateAlias() throws KettleFileException, FileSystemException, FileAlreadyExistsException {\n\n    FileObject pvfsFileObject = ikettleVFS.getFileObject( finalFilePath, variableSpace );\n    if ( AliasedFileObject.isAliasedFile( pvfsFileObject ) ) {\n      finalFilePath = ( (AliasedFileObject) pvfsFileObject ).getOriginalURIString();\n    }\n    //See if we need to use another URI because the HadoopFileSystem is not supported for this URL.\n    String aliasedFile = aliasGenerator.generateAlias( finalFilePath );\n    temporaryFilePath = finalFilePath;\n    if ( aliasedFile != null ) {\n      if ( pvfsFileObject.exists() ) {\n        if ( isOverwriteOutput ) {\n          pvfsFileObject.delete();\n        } else {\n          throw new FileAlreadyExistsException( temporaryFilePath );\n        }\n      }\n      temporaryFilePath = aliasedFile;  //set the outputFile to the temporary alias file\n    }\n    return temporaryFilePath;\n  }\n\n  public void copyFileToFinalDestination() throws KettleFileException, IOException {\n    if ( aliasingIsActive() ) {\n      FileObject srcFile = ikettleVFS.getFileObject( temporaryFilePath, variableSpace );\n      FileObject destFile = ikettleVFS.getFileObject( finalFilePath, variableSpace );\n      try ( InputStream in = KettleVFS.getInputStream( srcFile );\n            OutputStream out = ikettleVFS.getOutputStream( destFile, false ) ) {\n        IOUtils.copy( in, out );\n      }\n    }\n  }\n\n  public void deleteTempFileAndFolder() {\n    try {\n      if ( aliasingIsActive() ) {\n        FileObject srcFile = ikettleVFS.getFileObject( temporaryFilePath, variableSpace );\n        srcFile.getParent().deleteAll();\n      }\n    } catch ( FileSystemException | KettleFileException e ) {\n      log.logError( e.getMessage(), e );\n    }\n  }\n\n  private boolean aliasingIsActive() {\n    return 
temporaryFilePath != null && !temporaryFilePath.equals( finalFilePath ) && !s3nSwitchedTos3a();\n  }\n\n  private boolean s3nSwitchedTos3a() {\n    return finalFilePath != null && temporaryFilePath != null && finalFilePath.startsWith( \"s3n\" ) && temporaryFilePath\n      .startsWith( \"s3a\" );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/BaseParquetStepDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.parquet;\n\nimport org.eclipse.jface.window.DefaultToolTip;\nimport org.eclipse.jface.window.ToolTip;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.events.ModifyListener;\nimport org.eclipse.swt.events.MouseEvent;\nimport org.eclipse.swt.events.MouseTrackAdapter;\nimport org.eclipse.swt.events.ShellAdapter;\nimport org.eclipse.swt.events.ShellEvent;\nimport org.eclipse.swt.graphics.GC;\nimport org.eclipse.swt.graphics.Point;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Control;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.Table;\nimport org.eclipse.swt.widgets.TableItem;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDialogInterface;\nimport org.pentaho.di.trans.step.StepMetaInterface;\nimport org.pentaho.di.ui.core.ConstUI;\nimport org.pentaho.di.ui.core.events.dialog.SelectionAdapterFileDialogTextVar;\nimport org.pentaho.di.ui.core.events.dialog.SelectionAdapterOptions;\nimport 
org.pentaho.di.ui.core.events.dialog.SelectionOperation;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.trans.step.BaseStepDialog;\n\npublic abstract class BaseParquetStepDialog<T extends BaseStepMeta & StepMetaInterface> extends BaseStepDialog\n  implements StepDialogInterface {\n  public static final int MARGIN = 15;\n  public static final int FIELDS_SEP = 10;\n  public static final int FIELD_LABEL_SEP = 5;\n  public static final int FIELD_TINY = 100;\n  public static final int FIELD_SMALL = 150;\n  public static final int FIELD_MEDIUM = 250;\n  public static final int FIELD_LARGE = 350;\n  public static final int TABLE_ITEM_MARGIN = 2;\n  public static final int TOOLTIP_SHOW_DELAY = 350;\n  public static final int TOOLTIP_HIDE_DELAY = 2000;\n  // width of the icon in a varfield\n  public static final int VAR_EXTRA_WIDTH = GUIResource.getInstance().getImageVariable().getBounds().width;\n  protected static final Class<?> BPKG = BaseParquetStepDialog.class;\n  private static final String ELLIPSIS = \"...\";\n  protected final Class<?> parquetStepDialogClass = getClass();\n  protected T meta;\n  protected ModifyListener lsMod;\n  protected TextVar wPath;\n  protected Button wbBrowse;\n\n  public BaseParquetStepDialog( Shell parent, T in, TransMeta transMeta, String sname ) {\n    super( parent, (BaseStepMeta) in, transMeta, sname );\n    meta = in;\n  }\n\n  public static String shortenText( GC gc, String text, final int targetWidth ) {\n    if ( Utils.isEmpty( text ) ) {\n      return \"\";\n    }\n    int textWidth = gc.textExtent( text ).x;\n    int extra = gc.textExtent( ELLIPSIS ).x + 2 * TABLE_ITEM_MARGIN;\n    if ( targetWidth <= extra || textWidth <= targetWidth ) {\n      return text;\n    }\n    int len = text.length();\n    for ( int chomp = 1; chomp < len && textWidth + extra >= targetWidth; chomp++ ) {\n      text = text.substring( 0, text.length() - 1 );\n      textWidth = 
gc.textExtent( text ).x;\n    }\n    return text + ELLIPSIS;\n  }\n\n  public static void setIntegerOnly( TextVar textVar ) {\n    textVar.getTextWidget().addVerifyListener( e -> {\n      if ( !StringUtil.isEmpty( e.text ) && !StringUtil.isVariable( e.text ) && !StringUtil.IsInteger( e.text ) ) {\n        e.doit = false;\n      }\n    } );\n  }\n\n  @Override\n  public String open() {\n    Shell parent = getParent();\n    Display display = parent.getDisplay();\n\n    shell = new Shell( parent, SWT.DIALOG_TRIM | SWT.RESIZE );\n    props.setLook( shell );\n    setShellImage( shell, meta );\n\n    lsMod = e -> meta.setChanged();\n    changed = meta.hasChanged();\n\n    createUI();\n\n    // Detect X or ALT-F4 or something that kills this window...\n    shell.addShellListener( new ShellAdapter() {\n      @Override\n      public void shellClosed( ShellEvent e ) {\n        cancel();\n      }\n    } );\n\n    int height = Math.max( getMinHeight( shell, getWidth() ), getHeight() );\n    shell.setMinimumSize( getWidth(), height );\n    shell.setSize( getWidth(), height );\n    getData( meta );\n    shell.open();\n    wStepname.setFocus();\n    while ( !shell.isDisposed() ) {\n      if ( !display.readAndDispatch() ) {\n        display.sleep();\n      }\n    }\n    return stepname;\n  }\n\n  protected abstract void createUI();\n\n  protected Control createFooter( Composite shell ) {\n\n    wCancel = new Button( shell, SWT.PUSH );\n    wCancel.setText( getMsg( \"System.Button.Cancel\" ) );\n    wCancel.addListener( SWT.Selection, lsCancel );\n    new FD( wCancel ).right( 100, 0 ).bottom( 100, 0 ).apply();\n\n    // Some buttons\n    wOK = new Button( shell, SWT.PUSH );\n    wOK.setText( getMsg( \"System.Button.OK\" ) );\n    wOK.addListener( SWT.Selection, lsOK );\n    new FD( wOK ).right( wCancel, -FIELD_LABEL_SEP ).bottom( 100, 0 ).apply();\n    lsPreview = getPreview();\n    if ( lsPreview != null ) {\n      wPreview = new Button( shell, SWT.PUSH );\n      wPreview.setText( 
getBaseMsg( \"BaseStepDialog.Preview\" ) );\n      wPreview.pack();\n      wPreview.addListener( SWT.Selection, lsPreview );\n      int offset = wPreview.getBounds().width / 2;\n      new FD( wPreview ).left( 50, -offset ).bottom( 100, 0 ).apply();\n    }\n    return wCancel;\n  }\n\n  protected void cancel() {\n    stepname = null;\n    meta.setChanged( changed );\n    dispose();\n  }\n\n  protected void ok() {\n    if ( Utils.isEmpty( wStepname.getText() ) ) {\n      return;\n    }\n    stepname = wStepname.getText();\n\n    getInfo( meta, false );\n    dispose();\n  }\n\n  protected abstract String getStepTitle();\n\n  /**\n   * Read the data from the meta object and show it in this dialog.\n   *\n   * @param meta The meta object to obtain the data from.\n   */\n  protected abstract void getData( T meta );\n\n  /**\n   * Fill meta object from UI options.\n   *\n   * @param meta    meta object\n   * @param preview whether to use preview options or real options. Currently, only one option differs for preview:\n   *                
the EOL chars are treated as \"mixed\" so that any file can be previewed.\n   */\n  protected abstract void getInfo( T meta, boolean preview );\n\n  protected abstract int getWidth();\n\n  protected abstract int getHeight();\n\n  protected abstract Listener getPreview();\n\n  protected Label createHeader() {\n    // main form\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = 15;\n    formLayout.marginHeight = 15;\n    shell.setLayout( formLayout );\n    // title\n    shell.setText( getStepTitle() );\n    // buttons\n    lsOK = e -> ok();\n    lsCancel = e -> cancel();\n\n    // Stepname label\n    wlStepname = new Label( shell, SWT.RIGHT );\n    wlStepname.setText( getBaseMsg( \"BaseStepDialog.StepName\" ) );\n    props.setLook( wlStepname );\n    new FD( wlStepname ).left( 0, 0 ).top( 0, 0 ).apply();\n    // Stepname field\n    wStepname = new Text( shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wStepname.setText( stepname );\n    props.setLook( wStepname );\n    wStepname.addModifyListener( lsMod );\n    new FD( wStepname ).left( 0, 0 ).top( wlStepname, FIELD_LABEL_SEP ).width( FIELD_MEDIUM ).rright().apply();\n\n    // separator\n    Label separator = new Label( shell, SWT.HORIZONTAL | SWT.SEPARATOR );\n    FormData fdSpacer = new FormData();\n    fdSpacer.height = 2;\n    fdSpacer.left = new FormAttachment( 0, 0 );\n    fdSpacer.top = new FormAttachment( wStepname, 15 );\n    fdSpacer.right = new FormAttachment( 100, 0 );\n    separator.setLayoutData( fdSpacer );\n\n    addIcon();\n    return separator;\n  }\n\n  protected void addIcon() {\n    Label wicon = new Label( shell, SWT.RIGHT );\n    String stepId = meta.getParentStepMeta().getStepID();\n    wicon.setImage( GUIResource.getInstance().getImagesSteps().get( stepId ).getAsBitmapForSize( shell.getDisplay(),\n      ConstUI.LARGE_ICON_SIZE, ConstUI.LARGE_ICON_SIZE ) );\n    FormData fdlicon = new FormData();\n    fdlicon.top = new FormAttachment( 0, 0 );\n    fdlicon.right = new FormAttachment( 
100, 0 );\n    wicon.setLayoutData( fdlicon );\n    props.setLook( wicon );\n  }\n\n  protected Control addFileWidgets( Control prev ) {\n    Label wlPath = new Label( shell, SWT.RIGHT );\n    wlPath.setText( getBaseMsg( \"ParquetDialog.Filename.Label\" ) );\n    props.setLook( wlPath );\n    new FD( wlPath ).left( 0, 0 ).top( prev, MARGIN ).apply();\n    wPath = new TextVar( transMeta, shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wPath.addModifyListener( event -> {\n      if ( wPreview != null ) {\n        wPreview.setEnabled( !Utils.isEmpty( wPath.getText() ) );\n      }\n    } );\n\n    props.setLook( wPath );\n    wPath.addModifyListener( lsMod );\n    new FD( wPath ).left( 0, 0 ).top( wlPath, FIELD_LABEL_SEP ).width( FIELD_LARGE + VAR_EXTRA_WIDTH ).rright().apply();\n\n\n    wbBrowse = new Button( shell, SWT.PUSH );\n    props.setLook( wbBrowse );\n    wbBrowse.setText( getMsg( \"System.Button.Browse\" ) );\n    wbBrowse.addSelectionListener( new SelectionAdapterFileDialogTextVar(\n      log, wPath, transMeta, new SelectionAdapterOptions( transMeta.getBowl(), selectionOperation() ) ) );\n    int bOffset = ( wbBrowse.computeSize( SWT.DEFAULT, SWT.DEFAULT, false ).y\n      - wPath.computeSize( SWT.DEFAULT, SWT.DEFAULT, false ).y ) / 2;\n    new FD( wbBrowse ).left( wPath, FIELD_LABEL_SEP ).top( wlPath, FIELD_LABEL_SEP - bOffset ).apply();\n    return wPath;\n  }\n\n  protected abstract SelectionOperation selectionOperation();\n\n  protected String getBaseMsg( String key ) {\n    return BaseMessages.getString( BPKG, key );\n  }\n\n  protected String getMsg( String key ) {\n    return BaseMessages.getString( parquetStepDialogClass, key );\n  }\n\n  protected int getMinHeight( Composite comp, int minWidth ) {\n    comp.pack();\n    return comp.computeSize( minWidth, SWT.DEFAULT ).y;\n  }\n\n  protected void setTruncatedColumn( Table table, int targetColumn ) {\n    table.addListener( SWT.EraseItem, event -> {\n      if ( event.index == targetColumn ) {\n       
 event.detail &= ~SWT.FOREGROUND;\n      }\n    } );\n    table.addListener( SWT.PaintItem, event -> {\n      TableItem item = (TableItem) event.item;\n      int colIdx = event.index;\n      if ( colIdx == targetColumn ) {\n        String contents = item.getText( colIdx );\n        if ( Utils.isEmpty( contents ) ) {\n          return;\n        }\n        Point size = event.gc.textExtent( contents );\n        int targetWidth = item.getBounds( colIdx ).width;\n        int yOffset = Math.max( 0, ( event.height - size.y ) / 2 );\n        if ( size.x > targetWidth ) {\n          contents = shortenText( event.gc, contents, targetWidth );\n        }\n        event.gc.drawText( contents, event.x + TABLE_ITEM_MARGIN, event.y + yOffset, true );\n      }\n    } );\n  }\n\n  protected void addColumnTooltip( Table table, int columnIndex ) {\n    final DefaultToolTip toolTip = new DefaultToolTip( table, ToolTip.RECREATE, true );\n    toolTip.setRespectMonitorBounds( true );\n    toolTip.setRespectDisplayBounds( true );\n    toolTip.setPopupDelay( TOOLTIP_SHOW_DELAY );\n    toolTip.setHideDelay( TOOLTIP_HIDE_DELAY );\n    toolTip.setShift( new Point( ConstUI.TOOLTIP_OFFSET, ConstUI.TOOLTIP_OFFSET ) );\n    table.addMouseTrackListener( new MouseTrackAdapter() {\n      @Override\n      public void mouseHover( MouseEvent e ) {\n        Point coord = new Point( e.x, e.y );\n        TableItem item = table.getItem( coord );\n        if ( item != null && item.getBounds( columnIndex ).contains( coord ) ) {\n          String contents = item.getText( columnIndex );\n          if ( !Utils.isEmpty( contents ) ) {\n            toolTip.setText( contents );\n            toolTip.show( coord );\n            return;\n          }\n        }\n        toolTip.hide();\n      }\n\n      @Override\n      public void mouseExit( MouseEvent e ) {\n        toolTip.hide();\n      }\n    } );\n  }\n\n  /**\n   * Class for applying layout settings to SWT controls.\n   */\n  protected class FD {\n    private 
final Control control;\n    private final FormData formData;\n\n    public FD( Control control ) {\n      this.control = control;\n      props.setLook( control );\n      formData = new FormData();\n    }\n\n    private int getControlOffset( Control control, int controlWidth ) {\n      // remaining space for min size match\n      return getWidth() - getMarginWidths( control ) - controlWidth;\n    }\n\n    private int getMarginWidths( Control control ) {\n      // get the width added by container margins and (wm-specific) decorations\n      int extraWidth = 0;\n      for ( Composite parent = control.getParent(); !parent.equals( getParent() ); parent = parent.getParent() ) {\n        extraWidth += parent.computeTrim( 0, 0, 0, 0 ).width;\n        if ( parent.getLayout() instanceof FormLayout ) {\n          extraWidth += 2 * ( (FormLayout) parent.getLayout() ).marginWidth;\n        }\n      }\n      return extraWidth;\n    }\n\n    public FD width( int width ) {\n      formData.width = width;\n      return this;\n    }\n\n    public FD height( int height ) {\n      formData.height = height;\n      return this;\n    }\n\n    public FD top( int numerator, int offset ) {\n      formData.top = new FormAttachment( numerator, offset );\n      return this;\n    }\n\n    public FD top( Control control, int offset ) {\n      formData.top = new FormAttachment( control, offset );\n      return this;\n    }\n\n    public FD bottom( int numerator, int offset ) {\n      formData.bottom = new FormAttachment( numerator, offset );\n      return this;\n    }\n\n    public FD bottom( Control control, int offset ) {\n      formData.bottom = new FormAttachment( control, offset );\n      return this;\n    }\n\n    public FD left( int numerator, int offset ) {\n      formData.left = new FormAttachment( numerator, offset );\n      return this;\n    }\n\n    public FD left( int numerator ) {\n      return left( numerator, 0 );\n    }\n\n    public FD left( Control control, int offset ) {\n      
formData.left = new FormAttachment( control, offset );\n      return this;\n    }\n\n    public FD right( int numerator, int offset ) {\n      formData.right = new FormAttachment( numerator, offset );\n      return this;\n    }\n\n    public FD rright() {\n      formData.right = new FormAttachment( 100, -getControlOffset( control, formData.width ) );\n      return this;\n    }\n\n    public FD right( Control control, int offset ) {\n      formData.right = new FormAttachment( control, offset );\n      return this;\n    }\n\n    public void apply() {\n      control.setLayoutData( formData );\n    }\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/input/ParquetInput.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.parquet.input;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.pentaho.big.data.kettle.plugins.formats.parquet.input.ParquetInputField;\nimport org.pentaho.big.data.kettle.plugins.formats.parquet.input.ParquetInputMetaBase;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.RowMetaAndData;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.vfs.AliasedFileObject;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\nimport org.pentaho.di.trans.steps.file.BaseFileInputStep;\nimport org.pentaho.di.trans.steps.file.IBaseFileInputReader;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.format.FormatService;\nimport org.pentaho.hadoop.shim.api.format.IParquetInputField;\nimport org.pentaho.hadoop.shim.api.format.IPentahoInputFormat.IPentahoInputSplit;\nimport org.pentaho.hadoop.shim.api.format.IPentahoParquetInputFormat;\n\nimport java.nio.file.NoSuchFileException;\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.function.Function;\nimport java.util.stream.Collectors;\n\npublic class ParquetInput extends 
BaseFileInputStep<ParquetInputMeta, ParquetInputData> {\n  public static final long SPLIT_SIZE = 128 * 1024 * 1024L;\n\n  public ParquetInput( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta,\n                       Trans trans ) {\n    super( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n  }\n\n  public static List<? extends IParquetInputField> retrieveSchema(\n    Bowl bowl, NamedClusterServiceLocator namedClusterServiceLocator, NamedCluster namedCluster, String path )\n    throws Exception {\n    FormatService formatService = namedClusterServiceLocator.getService( namedCluster, FormatService.class );\n    IPentahoParquetInputFormat in = formatService.createInputFormat( IPentahoParquetInputFormat.class, namedCluster );\n    FileObject inputFileObject = KettleVFS.getInstance( bowl ).getFileObject( path );\n    if ( AliasedFileObject.isAliasedFile( inputFileObject ) ) {\n      path = ( (AliasedFileObject) inputFileObject ).getOriginalURIString();\n    }\n    return in.readSchema( path );\n  }\n\n  public static List<IParquetInputField> createSchemaFromMeta( ParquetInputMetaBase meta ) {\n    List<IParquetInputField> fields = new ArrayList<>();\n    for ( ParquetInputField f : meta.getInputFields() ) {\n      fields.add( f );\n    }\n    return fields;\n  }\n\n  @Override public boolean processRow( StepMetaInterface smi, StepDataInterface sdi ) throws KettleException {\n    meta = (ParquetInputMeta) smi;\n    data = (ParquetInputData) sdi;\n\n    try {\n      if ( data.splits == null ) {\n        initSplits();\n      }\n\n      if ( data.currentSplit >= data.splits.size() ) {\n        setOutputDone();\n        return false;\n      }\n\n      if ( data.reader == null ) {\n        openReader( data );\n      }\n\n      if ( data.rowIterator.hasNext() ) {\n        RowMetaAndData row = data.rowIterator.next();\n        putRow( row.getRowMeta(), row.getData() );\n        return true;\n      } else {\n        
data.reader.close();\n        data.reader = null;\n        logDebug( \"Close split {0}\", data.currentSplit );\n        data.currentSplit++;\n        return true;\n      }\n    } catch ( NoSuchFileException ex ) {\n      throw new KettleException( \"No input file\" );\n    } catch ( KettleException ex ) {\n      throw ex;\n    } catch ( Exception ex ) {\n      throw new KettleException( ex );\n    }\n  }\n\n  void initSplits() throws Exception {\n    FormatService\n      formatService =\n      meta.getNamedClusterResolver().getNamedClusterServiceLocator()\n        .getService( getNamedCluster(), FormatService.class );\n    if ( meta.inputFiles == null || meta.inputFiles.fileName == null || meta.inputFiles.fileName.length == 0 ) {\n      throw new KettleException( \"No input files defined\" );\n    }\n    String[] resolvedInputFileNames = new String[ meta.inputFiles.fileName.length ];\n    int i = 0;\n    for ( String file : meta.inputFiles.fileName ) {\n      resolvedInputFileNames[ i ] = StringUtil.toUri( environmentSubstitute( file ) ).toString();\n      FileObject inputFileObject = KettleVFS.getInstance( getTransMeta().getBowl() )\n        .getFileObject( resolvedInputFileNames[ i ], getTransMeta() );\n      if ( AliasedFileObject.isAliasedFile( inputFileObject ) ) {\n        resolvedInputFileNames[ i ] = ( (AliasedFileObject) inputFileObject ).getOriginalURIString();\n      }\n      i++;\n    }\n    data.input = formatService.createInputFormat( IPentahoParquetInputFormat.class, getNamedCluster() );\n\n    // Pentaho 8.0 transformations will have the formatType set to 0. Get the fields from the schema and set the\n    // formatType to the formatType retrieved from the schema.\n    List<? 
extends IParquetInputField>\n      actualFileFields =\n      ParquetInput.retrieveSchema( getTransMeta().getBowl(),\n        meta.getNamedClusterResolver().getNamedClusterServiceLocator(), getNamedCluster(), resolvedInputFileNames[ 0 ] );\n\n    if ( meta.isIgnoreEmptyFolder() && ( actualFileFields.isEmpty() ) ) {\n      data.splits = new ArrayList<>();\n      logBasic( \"No Parquet input files found.\" );\n    } else {\n      Map<String, IParquetInputField>\n        fieldNamesToTypes =\n        actualFileFields.stream()\n          .collect( Collectors.toMap( IParquetInputField::getFormatFieldName, Function.identity() ) );\n      for ( ParquetInputField f : meta.getInputFields() ) {\n        if ( fieldNamesToTypes.containsKey( f.getFormatFieldName() ) ) {\n          if ( f.getFormatType() == 0 ) {\n            f.setFormatType( fieldNamesToTypes.get( f.getFormatFieldName() ).getFormatType() );\n          }\n          f.setPrecision( fieldNamesToTypes.get( f.getFormatFieldName() ).getPrecision() );\n          f.setScale( fieldNamesToTypes.get( f.getFormatFieldName() ).getScale() );\n        }\n      }\n\n      data.input.setSchema( createSchemaFromMeta( meta ) );\n      if ( resolvedInputFileNames != null && resolvedInputFileNames.length == 1 ) {\n        data.input.setInputFile( resolvedInputFileNames[ 0 ] );\n      } else if ( resolvedInputFileNames != null && resolvedInputFileNames.length > 1 ) {\n        data.input.setInputFiles( resolvedInputFileNames );\n      }\n      data.input.setSplitSize( SPLIT_SIZE );\n\n      data.splits = data.input.getSplits();\n      logDebug( \"Input split count: {0}\", data.splits.size() );\n    }\n    data.currentSplit = 0;\n  }\n\n  private NamedCluster getNamedCluster() {\n    return meta.getNamedClusterResolver().resolveNamedCluster( environmentSubstitute( meta.getFilename() ) );\n  }\n\n  void openReader( ParquetInputData data ) throws Exception {\n    logDebug( \"Open split {0}\", data.currentSplit );\n    IPentahoInputSplit 
sp = data.splits.get( data.currentSplit );\n    data.reader = data.input.createRecordReader( sp );\n    data.rowIterator = data.reader.iterator();\n  }\n\n  @Override protected boolean init() {\n    return true;\n  }\n\n  @Override protected IBaseFileInputReader createReader( ParquetInputMeta meta, ParquetInputData data, FileObject file )\n    throws Exception {\n    return null;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/input/ParquetInputData.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.parquet.input;\n\nimport java.util.Iterator;\nimport java.util.List;\n\nimport org.pentaho.di.core.RowMetaAndData;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.trans.steps.file.BaseFileInputStepData;\nimport org.pentaho.hadoop.shim.api.format.IPentahoInputFormat.IPentahoRecordReader;\nimport org.pentaho.hadoop.shim.api.format.IPentahoInputFormat.IPentahoInputSplit;\nimport org.pentaho.hadoop.shim.api.format.IPentahoParquetInputFormat;\n\npublic class ParquetInputData extends BaseFileInputStepData {\n  IPentahoParquetInputFormat input;\n  List<IPentahoInputSplit> splits;\n  int currentSplit;\n  IPentahoRecordReader reader;\n  Iterator<RowMetaAndData> rowIterator;\n  RowMetaInterface outputRowMeta;\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/input/ParquetInputDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.parquet.input;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Control;\nimport org.eclipse.swt.widgets.Group;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.TableItem;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.parquet.BaseParquetStepDialog;\nimport org.pentaho.big.data.kettle.plugins.formats.parquet.input.ParquetInputField;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.row.value.ValueMetaFactory;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.TransPreviewFactory;\nimport org.pentaho.di.ui.core.dialog.EnterNumberDialog;\nimport org.pentaho.di.ui.core.dialog.EnterTextDialog;\nimport org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.di.ui.core.dialog.PreviewRowsDialog;\nimport org.pentaho.di.ui.core.events.dialog.SelectionOperation;\nimport org.pentaho.di.ui.core.widget.ColumnInfo;\nimport org.pentaho.di.ui.core.widget.ColumnsResizer;\nimport org.pentaho.di.ui.core.widget.TableView;\nimport 
org.pentaho.di.ui.trans.dialog.TransPreviewProgressDialog;\nimport org.pentaho.hadoop.shim.api.format.IParquetInputField;\nimport org.pentaho.hadoop.shim.api.format.ParquetSpec;\n\nimport java.util.List;\n\n@PluginDialog( id = \"ParquetInput\", image = \"PI.svg\", pluginType = PluginDialog.PluginType.STEP,\n        documentationUrl = \"pdi-transformation-steps-reference-overview/parquet-input\" )\npublic class ParquetInputDialog extends BaseParquetStepDialog<ParquetInputMeta> {\n\n  private static final int SHELL_WIDTH = 526;\n  private static final int SHELL_HEIGHT = 506;\n\n  private static final int PARQUET_PATH_COLUMN_INDEX = 1;\n\n  private static final int FIELD_NAME_COLUMN_INDEX = 2;\n\n  private static final int FIELD_TYPE_COLUMN_INDEX = 3;\n\n  private static final int FORMAT_COLUMN_INDEX = 4;\n\n  private static final int FIELD_SOURCE_TYPE_COLUMN_INDEX = 5;\n  private static final String UNABLE_TO_LOAD_SCHEMA_FROM_CONTAINER_FILE =\n    \"ParquetInput.Error.UnableToLoadSchemaFromContainerFile\";\n\n  private TableView wInputFields;\n  private Button wPassThruFields;\n  private Button wIgnoreEmptyFolder;\n\n  public ParquetInputDialog( Shell parent, Object in, TransMeta transMeta, String sname ) {\n    super( parent, (ParquetInputMeta) in, transMeta, sname );\n  }\n\n  @Override\n  protected void createUI( ) {\n    Control prev = createHeader();\n\n    //main fields\n    prev = addFileWidgets( prev );\n\n    createFooter( shell );\n\n    Label separator = new Label( shell, SWT.HORIZONTAL | SWT.SEPARATOR );\n    FormData fdSpacer = new FormData();\n    fdSpacer.height = 2;\n    fdSpacer.left = new FormAttachment( 0, 0 );\n    fdSpacer.bottom = new FormAttachment( wCancel, -MARGIN );\n    fdSpacer.right = new FormAttachment( 100, 0 );\n    separator.setLayoutData( fdSpacer );\n\n    wIgnoreEmptyFolder = new Button( shell, SWT.CHECK );\n    wIgnoreEmptyFolder.setText( BaseMessages.getString( parquetStepDialogClass, 
\"ParquetInputDialog.IgnoreEmptyFolder.Label\" ) );\n    wIgnoreEmptyFolder.setToolTipText( BaseMessages.getString( parquetStepDialogClass, \"ParquetInputDialog.IgnoreEmptyFolder.Tooltip\" ) );\n    wIgnoreEmptyFolder.setOrientation( SWT.LEFT_TO_RIGHT );\n    props.setLook( wIgnoreEmptyFolder );\n    new FD( wIgnoreEmptyFolder ).left( 0, 0 ).top( prev, MARGIN ).apply();\n\n    Group fieldsContainer = new Group( shell, SWT.SHADOW_IN );\n    fieldsContainer.setLayout( new FormLayout() );\n    fieldsContainer.setText( BaseMessages.getString( parquetStepDialogClass, \"ParquetInputDialog.Fields.Label\" ) );\n    new FD( fieldsContainer ).left( 0, 0 ).top( wIgnoreEmptyFolder, MARGIN ).right( 100, 0 ).bottom( separator, -MARGIN ).apply();\n\n    // Accept fields from previous steps?\n    //\n    wPassThruFields = new Button( fieldsContainer, SWT.CHECK );\n    wPassThruFields.setText( BaseMessages.getString( parquetStepDialogClass, \"ParquetInputDialog.PassThruFields.Label\" ) );\n    wPassThruFields.setToolTipText( BaseMessages.getString( parquetStepDialogClass, \"ParquetInputDialog.PassThruFields.Tooltip\" ) );\n    wPassThruFields.setOrientation( SWT.LEFT_TO_RIGHT );\n    props.setLook( wPassThruFields );\n    new FD( wPassThruFields ).left( 0, MARGIN ).top( 0, MARGIN ).apply();\n\n\n    //get fields button\n    lsGet = e -> populateFieldsTable();\n    Button wGetFields = new Button( fieldsContainer, SWT.PUSH );\n    wGetFields.setText( BaseMessages.getString( parquetStepDialogClass, \"ParquetInputDialog.Fields.Get\" ) );\n    props.setLook( wGetFields );\n    new FD( wGetFields ).bottom( 100, -FIELDS_SEP ).right( 100, -MARGIN ).apply();\n    wGetFields.addListener( SWT.Selection, lsGet );\n\n    // fields table\n    ColumnInfo parquetPathColumnInfo = new ColumnInfo( BaseMessages.getString( parquetStepDialogClass, \"ParquetInputDialog.Fields.column.Path\" ), ColumnInfo.COLUMN_TYPE_TEXT, false, true );\n    ColumnInfo nameColumnInfo = new ColumnInfo( 
BaseMessages.getString( parquetStepDialogClass, \"ParquetInputDialog.Fields.column.Name\" ), ColumnInfo.COLUMN_TYPE_TEXT, false, false );\n    ColumnInfo typeColumnInfo = new ColumnInfo( BaseMessages.getString( parquetStepDialogClass, \"ParquetInputDialog.Fields.column.Type\" ), ColumnInfo.COLUMN_TYPE_CCOMBO, ValueMetaFactory.getValueMetaNames() );\n    ColumnInfo formatColumnInfo = new ColumnInfo( BaseMessages.getString( parquetStepDialogClass, \"ParquetInputDialog.Fields.column.Format\" ), ColumnInfo.COLUMN_TYPE_CCOMBO, Const.getDateFormats() );\n    ColumnInfo sourceTypeColumnInfo = new ColumnInfo( BaseMessages.getString( parquetStepDialogClass, \"ParquetInputDialog.Fields.column.SourceType\" ), ColumnInfo.COLUMN_TYPE_TEXT, ValueMetaFactory.getValueMetaNames(), true );\n\n    ColumnInfo[] parameterColumns = new ColumnInfo[] {parquetPathColumnInfo, nameColumnInfo, typeColumnInfo, formatColumnInfo, sourceTypeColumnInfo};\n    parameterColumns[0].setAutoResize( false );\n    parameterColumns[1].setUsingVariables( true );\n    parameterColumns[3].setAutoResize( false );\n\n    wInputFields =\n      new TableView( transMeta, fieldsContainer, SWT.FULL_SELECTION | SWT.SINGLE | SWT.BORDER | SWT.NO_SCROLL | SWT.V_SCROLL,\n        parameterColumns, 7, null, props );\n    ColumnsResizer resizer = new ColumnsResizer( 0, 40, 20, 20, 20, 0 );\n    wInputFields.getTable().addListener( SWT.Resize, resizer );\n\n    props.setLook( wInputFields );\n    new FD( wInputFields ).left( 0, MARGIN ).right( 100, -MARGIN ).top( wPassThruFields, FIELDS_SEP )\n      .bottom( wGetFields, -FIELDS_SEP ).apply();\n\n    wInputFields.setRowNums();\n    wInputFields.optWidth( true );\n\n    for ( ColumnInfo col : parameterColumns ) {\n      col.setAutoResize( false );\n    }\n    resizer.addColumnResizeListeners( wInputFields.getTable() );\n    setTruncatedColumn( wInputFields.getTable(), 1 );\n    if ( !Const.isWindows() ) {\n      addColumnTooltip( wInputFields.getTable(), 1 );\n    }\n  }\n\n  
protected void populateFieldsTable() {\n    try {\n      List<? extends IParquetInputField> inputFields = getInputFieldsFromParquetFile( false );\n      wInputFields.clearAll();\n      for ( IParquetInputField field : inputFields ) {\n        TableItem item = new TableItem( wInputFields.table, SWT.NONE );\n        if ( field != null ) {\n          setField( item, concatenateParquetNameAndType( field ), PARQUET_PATH_COLUMN_INDEX );\n          setField( item, field.getPentahoFieldName(), FIELD_NAME_COLUMN_INDEX );\n          setField( item, ValueMetaFactory.getValueMetaName( field.getPentahoType() ), FIELD_TYPE_COLUMN_INDEX );\n          setField( item, field.getStringFormat(), FORMAT_COLUMN_INDEX );\n          setField( item, ParquetSpec.DataType.getDataType( field.getFormatType() ).getName(), FIELD_SOURCE_TYPE_COLUMN_INDEX );\n        }\n      }\n\n      wInputFields.removeEmptyRows();\n      wInputFields.setRowNums();\n      wInputFields.optWidth( true );\n    } catch ( Exception ex ) {\n      logError( BaseMessages.getString( parquetStepDialogClass, UNABLE_TO_LOAD_SCHEMA_FROM_CONTAINER_FILE ), ex );\n      new ErrorDialog( shell, stepname, BaseMessages.getString( parquetStepDialogClass,\n        UNABLE_TO_LOAD_SCHEMA_FROM_CONTAINER_FILE, getProcessedFileName() ), ex );\n    }\n  }\n\n  private String getProcessedFileName() {\n    return transMeta.environmentSubstitute( wPath.getText() );\n  }\n\n  private List<? extends IParquetInputField> getInputFieldsFromParquetFile( boolean failQuietly ) {\n    String parquetFileName = getProcessedFileName();\n    List<? 
extends IParquetInputField> inputFields = null;\n    try {\n      inputFields = ParquetInput.retrieveSchema( transMeta.getBowl(),\n        meta.getNamedClusterResolver().getNamedClusterServiceLocator(),\n        meta.getNamedClusterResolver().resolveNamedCluster( parquetFileName ), parquetFileName );\n    } catch ( Exception ex ) {\n      if ( !failQuietly ) {\n        logError( BaseMessages.getString( parquetStepDialogClass, UNABLE_TO_LOAD_SCHEMA_FROM_CONTAINER_FILE ), ex );\n        new ErrorDialog( shell, stepname, BaseMessages.getString( parquetStepDialogClass,\n          UNABLE_TO_LOAD_SCHEMA_FROM_CONTAINER_FILE, parquetFileName ), ex );\n      }\n    }\n    return inputFields;\n  }\n\n  private void setField( TableItem item, String fieldValue, int fieldIndex ) {\n    if ( !Utils.isEmpty( fieldValue ) ) {\n      item.setText( fieldIndex, fieldValue );\n    }\n  }\n\n  /**\n   * Read the data from the meta object and show it in this dialog.\n   */\n  @Override\n  protected void getData( ParquetInputMeta meta ) {\n    if ( meta.getFilename() != null && meta.getFilename().length() > 0 ) {\n      wPath.setText( meta.getFilename() );\n    }\n    wPassThruFields.setSelection( meta.inputFiles.passingThruFields );\n    wIgnoreEmptyFolder.setSelection( meta.isIgnoreEmptyFolder() );\n    int itemIndex = 0;\n    for ( IParquetInputField inputField : meta.getInputFields() ) {\n      TableItem item = null;\n      if ( itemIndex < wInputFields.table.getItemCount() ) {\n        item = wInputFields.table.getItem( itemIndex );\n      } else {\n        item = new TableItem( wInputFields.table, SWT.NONE );\n      }\n\n      if ( inputField.getFormatFieldName() != null ) {\n        item.setText( PARQUET_PATH_COLUMN_INDEX,\n          concatenateParquetNameAndType( inputField ) );\n      }\n      if ( inputField.getPentahoFieldName() != null ) {\n        item.setText( FIELD_NAME_COLUMN_INDEX, inputField.getPentahoFieldName() );\n      }\n      if ( getTypeDesc( 
inputField.getPentahoType() ) != null ) {\n        item.setText( FIELD_TYPE_COLUMN_INDEX, getTypeDesc( inputField.getPentahoType() ) );\n      }\n      if ( getSourceTypeDesc( inputField.getFormatType() ) != null ) {\n        item.setText( FIELD_SOURCE_TYPE_COLUMN_INDEX, getSourceTypeDesc( inputField.getFormatType() ) );\n      }\n      if ( inputField.getStringFormat() != null ) {\n        item.setText( FORMAT_COLUMN_INDEX, inputField.getStringFormat() );\n      } else {\n        item.setText( FORMAT_COLUMN_INDEX, \"\" );\n      }\n      itemIndex++;\n    }\n  }\n\n  public String getTypeDesc( int type ) {\n    return ValueMetaFactory.getValueMetaName( type );\n  }\n\n  public String getSourceTypeDesc( int type ) {\n    return ParquetSpec.DataType.getDataType( type ).getName();\n  }\n\n  /**\n   * Fill meta object from UI options.\n   */\n  @Override\n  protected void getInfo( ParquetInputMeta meta, boolean preview ) {\n    String filePath = wPath.getText();\n    if ( filePath != null && !filePath.isEmpty() ) {\n      meta.allocateFiles( 1 );\n      meta.setFilename( wPath.getText().trim() );\n    }\n\n    meta.inputFiles.passingThruFields = wPassThruFields.getSelection();\n    meta.setIgnoreEmptyFolder( wIgnoreEmptyFolder.getSelection() );\n\n    List<? 
extends IParquetInputField> actualParquetFileInputFields = getInputFieldsFromParquetFile( true );\n\n    int nrFields = wInputFields.nrNonEmpty();\n    meta.setInputFields( new ParquetInputField[ nrFields ] );\n    for ( int i = 0; i < nrFields; i++ ) {\n      TableItem item = wInputFields.getNonEmpty( i );\n      ParquetInputField field = new ParquetInputField();\n      field.setFormatFieldName( extractFieldName( item.getText( PARQUET_PATH_COLUMN_INDEX ) ) );\n      if ( actualParquetFileInputFields != null ) {\n        IParquetInputField actualParquetField = actualParquetFileInputFields.stream()\n          .filter( x -> field.getFormatFieldName().equals( x.getFormatFieldName() ) )\n          .findFirst().orElse( null );\n        if ( actualParquetField != null ) {\n          field.setFormatType( actualParquetField.getFormatType() );\n        } else {\n          ParquetSpec.DataType sourceType = extractParquetType( item.getText( PARQUET_PATH_COLUMN_INDEX ) );\n          if ( sourceType == null ) {\n            String uiTypeTrimmed = item.getText( FIELD_SOURCE_TYPE_COLUMN_INDEX ).trim();\n            for ( ParquetSpec.DataType temp : ParquetSpec.DataType.values() ) {\n              if ( temp.getName().equalsIgnoreCase( uiTypeTrimmed ) ) {\n                sourceType = temp;\n                break;\n              }\n            }\n          }\n          // Guard against an unresolvable type; otherwise sourceType.getId() would throw an NPE.\n          if ( sourceType != null ) {\n            field.setFormatType( sourceType.getId() );\n            item.setText( concatenateParquetNameAndType( field ) );\n          }\n        }\n      }\n      field.setPentahoFieldName( item.getText( FIELD_NAME_COLUMN_INDEX ) );\n      field.setPentahoType( ValueMetaFactory.getIdForValueMeta( item.getText( FIELD_TYPE_COLUMN_INDEX ) ) );\n      field.setStringFormat( item.getText( FORMAT_COLUMN_INDEX ) );\n      meta.inputFields[ i ] = field;\n    }\n  }\n\n  /**\n   * When all else fails, extract the Parquet type from the field description.\n   *\n   * @see #concatenateParquetNameAndType(IParquetInputField)\n   */\n  private ParquetSpec.DataType 
extractParquetType( String parquetNameTypeFromUI ) {\n    if ( parquetNameTypeFromUI != null ) {\n      String uiType = StringUtils.substringBetween( parquetNameTypeFromUI, \"(\", \")\" );\n      if ( uiType != null ) {\n        String uiTypeTrimmed = uiType.trim();\n        for ( ParquetSpec.DataType temp : ParquetSpec.DataType.values() ) {\n          if ( temp.getName().equalsIgnoreCase( uiTypeTrimmed ) ) {\n            return temp;\n          }\n        }\n      }\n    }\n    return null;\n  }\n\n  /**\n   * Get the field name from the UI path column.\n   *\n   * @see #concatenateParquetNameAndType(IParquetInputField)\n   */\n  private String extractFieldName( String parquetNameTypeFromUI ) {\n    if ( parquetNameTypeFromUI != null ) {\n      return StringUtils.substringBefore( parquetNameTypeFromUI, \"(\" ).trim();\n    }\n    return parquetNameTypeFromUI;\n  }\n\n  /**\n   * This method must only be changed together with {@link #extractParquetType(String)}, since it\n   * formats the field for display to the user while the extract methods must convert back to the\n   * internal format.\n   */\n  private String concatenateParquetNameAndType( IParquetInputField field ) {\n    String typeName;\n    ParquetSpec.DataType parquetDataType = ParquetSpec.DataType.getDataType( field.getFormatType() );\n    if ( parquetDataType == null ) {\n      typeName = \"unknown\";\n    } else {\n      typeName = parquetDataType.getName();\n    }\n    return field.getFormatFieldName() + \" (\" + typeName + \")\";\n  }\n\n  private void doPreview() {\n    getInfo( meta, true );\n    TransMeta previewMeta =\n      TransPreviewFactory.generatePreviewTransformation( transMeta, meta, wStepname.getText() );\n    transMeta.getVariable( \"Internal.Transformation.Filename.Directory\" );\n    previewMeta.getVariable( \"Internal.Transformation.Filename.Directory\" );\n\n    EnterNumberDialog numberDialog =\n      new EnterNumberDialog( shell, props.getDefaultPreviewSize(), 
BaseMessages.getString( parquetStepDialogClass,\n        \"ParquetInputDialog.PreviewSize.DialogTitle\" ), BaseMessages.getString( parquetStepDialogClass,\n        \"ParquetInputDialog.PreviewSize.DialogMessage\" ) );\n    int previewSize = numberDialog.open();\n\n    if ( previewSize > 0 ) {\n      TransPreviewProgressDialog progressDialog =\n        new TransPreviewProgressDialog( shell, previewMeta, new String[] { wStepname.getText() },\n          new int[] { previewSize } );\n      progressDialog.open();\n\n      Trans trans = progressDialog.getTrans();\n      String loggingText = progressDialog.getLoggingText();\n\n      if ( !progressDialog.isCancelled() ) {\n        if ( trans.getResult() != null && trans.getResult().getNrErrors() > 0 ) {\n          EnterTextDialog etd =\n            new EnterTextDialog( shell, BaseMessages.getString( parquetStepDialogClass, \"System.Dialog.PreviewError.Title\" ),\n              BaseMessages.getString( parquetStepDialogClass, \"System.Dialog.PreviewError.Message\" ), loggingText, true );\n          etd.setReadOnly();\n          etd.open();\n        }\n      }\n\n      PreviewRowsDialog prd =\n        new PreviewRowsDialog( shell, transMeta, SWT.NONE, wStepname.getText(), progressDialog\n          .getPreviewRowsMeta( wStepname.getText() ),\n          progressDialog.getPreviewRows( wStepname.getText() ), loggingText );\n      prd.open();\n    }\n  }\n\n  @Override\n  protected int getWidth() {\n    return SHELL_WIDTH;\n  }\n\n  @Override\n  protected int getHeight() {\n    return SHELL_HEIGHT;\n  }\n\n  @Override\n  protected String getStepTitle() {\n    return BaseMessages.getString( parquetStepDialogClass, \"ParquetInputDialog.Shell.Title\" );\n  }\n\n  @Override\n  protected Listener getPreview() {\n    return e -> doPreview();\n  }\n\n  @Override protected SelectionOperation selectionOperation() {\n    return SelectionOperation.FILE_OR_FOLDER;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/input/ParquetInputMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.parquet.input;\n\nimport org.pentaho.big.data.kettle.plugins.formats.impl.NamedClusterResolver;\nimport org.pentaho.big.data.kettle.plugins.formats.parquet.input.ParquetInputMetaBase;\nimport org.pentaho.di.core.annotations.Step;\nimport org.pentaho.di.core.injection.InjectionSupported;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\n\n@Step( id = \"ParquetInput\", image = \"PI.svg\", name = \"ParquetInput.Name\", description = \"ParquetInput.Description\",\n  categoryDescription = \"i18n:org.pentaho.di.trans.step:BaseStep.Category.BigData\",\n  i18nPackageName = \"org.pentaho.di.trans.steps.parquet\" )\n@InjectionSupported( localizationPrefix = \"ParquetInput.Injection.\", groups = { \"FILENAME_LINES\", \"FIELDS\" }, hide = {\n  \"FILEMASK\", \"EXCLUDE_FILEMASK\", \"FILE_REQUIRED\", \"INCLUDE_SUBFOLDERS\", \"FIELD_POSITION\", \"FIELD_LENGTH\",\n  \"FIELD_IGNORE\", \"FIELD_FORMAT\", \"FIELD_PRECISION\", \"FIELD_CURRENCY\",\n  \"FIELD_DECIMAL\", \"FIELD_GROUP\", \"FIELD_REPEAT\", \"FIELD_TRIM_TYPE\", \"FIELD_NULL_STRING\", \"FIELD_IF_NULL\",\n  \"FIELD_NULLABLE\", \"ACCEPT_FILE_NAMES\", \"ACCEPT_FILE_STEP\", \"PASS_THROUGH_FIELDS\", \"ACCEPT_FILE_FIELD\",\n  \"ADD_FILES_TO_RESULT\", \"IGNORE_ERRORS\", \"FILE_ERROR_FIELD\", \"FILE_ERROR_MESSAGE_FIELD\", \"SKIP_BAD_FILES\",\n  \"WARNING_FILES_TARGET_DIR\", 
\"WARNING_FILES_EXTENTION\",\n  \"ERROR_FILES_TARGET_DIR\", \"ERROR_FILES_EXTENTION\", \"LINE_NR_FILES_TARGET_DIR\", \"LINE_NR_FILES_EXTENTION\",\n  \"FILE_SHORT_FILE_FIELDNAME\",\n  \"FILE_EXTENSION_FIELDNAME\", \"FILE_PATH_FIELDNAME\", \"FILE_SIZE_FIELDNAME\", \"FILE_HIDDEN_FIELDNAME\",\n  \"FILE_LAST_MODIFICATION_FIELDNAME\",\n  \"FILE_URI_FIELDNAME\", \"FILE_ROOT_URI_FIELDNAME\"\n} )\npublic class ParquetInputMeta extends ParquetInputMetaBase {\n\n  private final NamedClusterResolver namedClusterResolver;\n\n  public ParquetInputMeta() {\n    this( NamedClusterResolver.getInstance() );\n  }\n  public ParquetInputMeta( NamedClusterResolver namedClusterResolver ) {\n    this.namedClusterResolver = namedClusterResolver;\n  }\n\n  @Override\n  public StepInterface getStep( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta,\n                                Trans trans ) {\n    return new ParquetInput( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n  }\n\n  @Override\n  public StepDataInterface getStepData() {\n    return new ParquetInputData();\n  }\n\n  public NamedClusterResolver getNamedClusterResolver() {\n    return namedClusterResolver;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/input/VFSScheme.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.parquet.input;\n\npublic class VFSScheme {\n\n  private final String scheme;\n\n  private final String schemeName;\n\n  public VFSScheme( String scheme, String schemeName ) {\n    this.scheme = scheme;\n    this.schemeName = schemeName;\n  }\n\n  public String getScheme() {\n    return scheme;\n  }\n\n  public String getSchemeName() {\n    return schemeName;\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/output/ParquetOutput.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.parquet.output;\n\nimport org.apache.parquet.hadoop.metadata.CompressionCodecName;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.output.PvfsFileAliaser;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.big.data.kettle.plugins.formats.parquet.output.ParquetOutputMetaBase;\nimport org.pentaho.di.core.RowMetaAndData;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStep;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\nimport org.pentaho.hadoop.shim.api.format.FormatService;\nimport org.pentaho.hadoop.shim.api.format.IPentahoParquetOutputFormat;\n\nimport java.io.IOException;\n\npublic class ParquetOutput extends BaseStep implements StepInterface {\n\n  private ParquetOutputMeta meta;\n\n  private ParquetOutputData data;\n\n  private PvfsFileAliaser pvfsFileAliaser;\n\n  public ParquetOutput( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta,\n                        Trans trans ) {\n    super( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n  }\n\n  @Override\n  public synchronized boolean processRow( StepMetaInterface smi, 
StepDataInterface sdi ) throws KettleException {\n    try {\n      if ( data.output == null ) {\n        init( getInputRowMeta() );\n      }\n\n      Object[] currentRow = getRow();\n      if ( currentRow != null ) {\n        RowMetaAndData row = new RowMetaAndData( getInputRowMeta(), currentRow );\n        data.writer.write( row );\n        incrementLinesOutput();\n        putRow( row.getRowMeta(), row.getData() ); // in case we want it to go further or DET...\n        return true;\n      } else {\n        // no more input to be expected...\n        closeWriter();\n        pvfsFileAliaser.copyFileToFinalDestination();\n        pvfsFileAliaser.deleteTempFileAndFolder();\n        setOutputDone();\n        return false;\n      }\n    } catch ( KettleException ex ) {\n      try {\n        closeWriter();\n        pvfsFileAliaser.deleteTempFileAndFolder();\n      } catch ( Exception ex2 ) {\n        // Do nothing\n      }\n      throw ex;\n    } catch ( IllegalStateException e ) {\n      getLogChannel().logError( e.getMessage() );\n      setErrors( 1 );\n      pvfsFileAliaser.deleteTempFileAndFolder();\n      setOutputDone();\n      return false;\n    } catch ( Exception ex ) {\n      try {\n        closeWriter();\n        pvfsFileAliaser.deleteTempFileAndFolder();\n      } catch ( Exception ex2 ) {\n        // Do nothing\n      }\n      throw new KettleException( ex );\n    }\n  }\n\n  public void init( RowMetaInterface rowMeta ) throws Exception {\n    FormatService formatService;\n    try {\n      formatService = meta.getNamedClusterResolver().getNamedClusterServiceLocator()\n        .getService( getNamedCluster(), FormatService.class );\n    } catch ( ClusterInitializationException e ) {\n      throw new KettleException( \"Unable to get format service from shim\", e );\n    }\n    if ( meta.getFilename() == null ) {\n      throw new KettleException( \"No output files defined\" );\n    }\n\n    data.output = formatService.createOutputFormat( 
IPentahoParquetOutputFormat.class, getNamedCluster() );\n\n    String outputFileName = environmentSubstitute( meta.constructOutputFilename() );\n    pvfsFileAliaser = new PvfsFileAliaser( getTransMeta().getBowl(), outputFileName, getTransMeta(), data.output,\n      meta.overrideOutput, getLogChannel() );\n    data.output.setOutputFile( pvfsFileAliaser.generateAlias(), meta.overrideOutput );\n    data.output.setFields( meta.getOutputFields() );\n\n    CompressionCodecName compression;\n    try {\n      compression =\n              CompressionCodecName.valueOf( meta.getCompressionType( variables ).name().toUpperCase() );\n    } catch ( Exception ex ) {\n      compression = CompressionCodecName.UNCOMPRESSED;\n    }\n    data.output.setCompression( compression );\n    data.output\n      .setVersion(\n        ParquetOutputMetaBase.ParquetVersion.PARQUET_1.equals( meta.getParquetVersion( variables ) )\n          ? IPentahoParquetOutputFormat.VERSION.VERSION_1_0 : IPentahoParquetOutputFormat.VERSION.VERSION_2_0 );\n    if ( meta.getRowGroupSize( variables ) > 0 ) {\n      data.output.setRowGroupSize( meta.getRowGroupSize( variables ) * 1024 * 1024 );\n    }\n    if ( meta.getDataPageSize( variables ) > 0 ) {\n      data.output.setDataPageSize( meta.getDataPageSize( variables ) * 1024 );\n    }\n    data.output.enableDictionary( meta.enableDictionary );\n    if ( meta.getDictPageSize( variables ) > 0 ) {\n      data.output.setDictionaryPageSize( meta.getDictPageSize( variables ) * 1024 );\n    }\n\n    data.writer = data.output.createRecordWriter();\n  }\n\n  private NamedCluster getNamedCluster() {\n    return meta.getNamedClusterResolver().resolveNamedCluster( environmentSubstitute( meta.getFilename() ) );\n  }\n\n  public void closeWriter() throws KettleException {\n    try {\n      data.writer.close();\n    } catch ( IOException e ) {\n      throw new KettleException( e );\n    }\n    data.output = null;\n  }\n\n  @Override\n  public boolean init( StepMetaInterface 
smi, StepDataInterface sdi ) {\n    meta = (ParquetOutputMeta) smi;\n    data = (ParquetOutputData) sdi;\n    if ( !super.init( smi, sdi ) ) {\n      return false;\n    }\n\n    // Set the embedded NamedCluster MetaStore provider key so that it can be passed to VFS\n    if ( getTransMeta().getNamedClusterEmbedManager() != null ) {\n      getTransMeta().getNamedClusterEmbedManager()\n        .passEmbeddedMetastoreKey( getTransMeta(), getTransMeta().getEmbeddedMetastoreProviderKey() );\n    }\n    return true;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/output/ParquetOutputData.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.parquet.output;\n\nimport org.pentaho.di.trans.step.BaseStepData;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.hadoop.shim.api.format.IPentahoOutputFormat.IPentahoRecordWriter;\nimport org.pentaho.hadoop.shim.api.format.IPentahoParquetOutputFormat;\n\npublic class ParquetOutputData extends BaseStepData implements StepDataInterface {\n\n  public IPentahoParquetOutputFormat output;\n  public IPentahoRecordWriter writer;\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/output/ParquetOutputDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.parquet.output;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.eclipse.jface.dialogs.MessageDialog;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.CTabFolder;\nimport org.eclipse.swt.custom.CTabItem;\nimport org.eclipse.swt.events.ModifyListener;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Control;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.Table;\nimport org.eclipse.swt.widgets.TableItem;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.NullableValuesEnum;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.parquet.BaseParquetStepDialog;\nimport org.pentaho.big.data.kettle.plugins.formats.parquet.ParquetTypeConverter;\nimport org.pentaho.big.data.kettle.plugins.formats.parquet.output.ParquetOutputField;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.Props;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.util.Utils;\nimport 
org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepDialogInterface;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.di.ui.core.events.dialog.SelectionOperation;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.core.widget.ColumnInfo;\nimport org.pentaho.di.ui.core.widget.ColumnsResizer;\nimport org.pentaho.di.ui.core.widget.ComboVar;\nimport org.pentaho.di.ui.core.widget.TableView;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.trans.step.TableItemInsertListener;\nimport org.pentaho.hadoop.shim.api.format.ParquetSpec;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\nimport java.util.function.BiConsumer;\n\n@PluginDialog( id = \"ParquetOutput\", image = \"PO.svg\", pluginType = PluginDialog.PluginType.STEP,\n        documentationUrl = \"pdi-transformation-steps-reference-overview/parquet-output\" )\npublic class ParquetOutputDialog extends BaseParquetStepDialog<ParquetOutputMeta> implements StepDialogInterface {\n\n  public static final Class<?> PKG = ParquetOutputMeta.class;\n\n  public static final int PARQUET_OUTPUT_FIELD_TINY = Const.isLinux() ? 
FIELD_TINY + 20 : FIELD_TINY;\n  public static final int COLUMNS_SEP = 5 * MARGIN;\n  public static final int OFFSET = 16;\n  public static final ParquetSpec.DataType[] SUPPORTED_PARQUET_TYPES = {\n    ParquetSpec.DataType.BINARY,\n    ParquetSpec.DataType.BOOLEAN,\n    ParquetSpec.DataType.DATE,\n    ParquetSpec.DataType.DECIMAL,\n    ParquetSpec.DataType.DOUBLE,\n    ParquetSpec.DataType.FLOAT,\n    ParquetSpec.DataType.INT_32,\n    ParquetSpec.DataType.INT_64,\n    ParquetSpec.DataType.INT_96,\n    ParquetSpec.DataType.TIMESTAMP_MILLIS,\n    ParquetSpec.DataType.UTF8\n  };\n  private static final int SHELL_WIDTH = 698;\n  private static final int SHELL_HEIGHT = 620;\n  private TableView wOutputFields;\n  private Button wOverwriteExistingFile;\n  private ComboVar wCompression;\n  private ComboVar wVersion;\n  private TextVar wRowSize;\n  private TextVar wPageSize;\n  private TextVar wExtension;\n  private TextVar wDictPageSize;\n  private Label lDict;\n  private Button wDictionaryEncoding;\n  private Button wIncludeDateInFilename;\n  private Button wIncludeTimeInFilename;\n  private Button wSpecifyDateTimeFormat;\n  private ComboVar wDateTimeFormat;\n\n\n  public ParquetOutputDialog( Shell parent, Object parquetOutputMeta, TransMeta transMeta, String sname ) {\n    this( parent, (ParquetOutputMeta) parquetOutputMeta, transMeta, sname );\n  }\n\n  public ParquetOutputDialog( Shell parent, ParquetOutputMeta parquetOutputMeta, TransMeta transMeta, String sname ) {\n    super( parent, parquetOutputMeta, transMeta, sname );\n    this.meta = parquetOutputMeta;\n  }\n\n  public static ComboVar createComboVar( TransMeta transMeta, ModifyListener lsMod, Composite container,\n                                         String[] options ) {\n    ComboVar combo = new ComboVar( transMeta, container, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    combo.setItems( options );\n    combo.addModifyListener( lsMod );\n    return combo;\n  }\n\n  public static Label createLabel( Composite 
container, String labelRef, PropsUI props ) {\n    Label label = new Label( container, SWT.NONE );\n    label.setText( BaseMessages.getString( PKG, labelRef ) );\n    props.setLook( label );\n    return label;\n  }\n\n  public static MessageDialog getFieldsChoiceDialog( Shell shell, int newFields ) {\n    MessageDialog messageDialog =\n      new MessageDialog( shell,\n        BaseMessages.getString( PKG, \"ParquetOutput.GetFieldsChoice.Title\" ), // \"Warning!\"\n        null,\n        BaseMessages.getString( PKG, \"ParquetOutput.GetFieldsChoice.Message\", \"\" + newFields ),\n        MessageDialog.WARNING, new String[] {\n        BaseMessages.getString( PKG, \"ParquetOutput.GetFieldsChoice.AddNew\" ),\n        BaseMessages.getString( PKG, \"ParquetOutput.GetFieldsChoice.Add\" ),\n        BaseMessages.getString( PKG, \"ParquetOutput.GetFieldsChoice.ClearAndAdd\" ),\n        BaseMessages.getString( PKG, \"ParquetOutput.GetFieldsChoice.Cancel\" ), }, 0 ) {\n\n        @Override\n        public void create() {\n          super.create();\n          getShell().setBackground( GUIResource.getInstance().getColorWhite() );\n        }\n\n        @Override\n        protected Control createMessageArea( Composite composite ) {\n          Control control = super.createMessageArea( composite );\n          imageLabel.setBackground( GUIResource.getInstance().getColorWhite() );\n          messageLabel.setBackground( GUIResource.getInstance().getColorWhite() );\n          return control;\n        }\n\n        @Override\n        protected Control createDialogArea( Composite parent ) {\n          Control control = super.createDialogArea( parent );\n          control.setBackground( GUIResource.getInstance().getColorWhite() );\n          return control;\n        }\n\n        @Override\n        protected Control createButtonBar( Composite parent ) {\n          Control control = super.createButtonBar( parent );\n          control.setBackground( GUIResource.getInstance().getColorWhite() );\n 
         return control;\n        }\n      };\n    org.eclipse.jface.window.Window.setDefaultImage( GUIResource.getInstance().getImageSpoon() );\n    return messageDialog;\n  }\n\n  protected void createUI() {\n    Control prev = createHeader();\n\n    //main fields\n    prev = addFileWidgets( prev );\n\n    createFooter( shell );\n\n    Composite afterFile = new Composite( shell, SWT.NONE );\n    afterFile.setLayout( new FormLayout() );\n    Label separator = new Label( shell, SWT.HORIZONTAL | SWT.SEPARATOR );\n    FormData fdSpacer = new FormData();\n    fdSpacer.height = 2;\n    fdSpacer.left = new FormAttachment( 0, 0 );\n    fdSpacer.bottom = new FormAttachment( wCancel, -MARGIN );\n    fdSpacer.right = new FormAttachment( 100, 0 );\n    separator.setLayoutData( fdSpacer );\n\n    new FD( afterFile ).left( 0, 0 ).top( prev, 0 ).right( 100, 0 ).bottom( separator, -MARGIN ).apply();\n\n    createAfterFile( afterFile );\n  }\n\n  protected Control createAfterFile( Composite afterFile ) {\n    wOverwriteExistingFile = new Button( afterFile, SWT.CHECK );\n    wOverwriteExistingFile.setText( BaseMessages.getString( PKG, \"ParquetOutputDialog.OverwriteFile.Label\" ) );\n    props.setLook( wOverwriteExistingFile );\n    new FD( wOverwriteExistingFile ).left( 0, 0 ).top( afterFile, FIELDS_SEP ).apply();\n    wOverwriteExistingFile.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        meta.setChanged();\n      }\n    } );\n\n    CTabFolder wTabFolder = new CTabFolder( afterFile, SWT.BORDER );\n    props.setLook( wTabFolder, Props.WIDGET_STYLE_TAB );\n    wTabFolder.setSimple( false );\n\n    addFieldsTab( wTabFolder );\n    addOptionsTab( wTabFolder );\n\n    new FD( wTabFolder ).left( 0, 0 ).top( wOverwriteExistingFile, MARGIN ).right( 100, 0 ).bottom( 100, 0 ).apply();\n    wTabFolder.setSelection( 0 );\n    return wTabFolder;\n  }\n\n  @Override\n  protected String getStepTitle() {\n    return 
BaseMessages.getString( PKG, \"ParquetOutputDialog.Shell.Title\" );\n  }\n\n  private void addFieldsTab( CTabFolder wTabFolder ) {\n    CTabItem wTab = new CTabItem( wTabFolder, SWT.NONE );\n    wTab.setText( BaseMessages.getString( PKG, \"ParquetOutputDialog.FieldsTab.TabTitle\" ) );\n\n    Composite wComp = new Composite( wTabFolder, SWT.NONE );\n    props.setLook( wComp );\n\n    FormLayout layout = new FormLayout();\n    layout.marginWidth = MARGIN;\n    layout.marginHeight = MARGIN;\n    wComp.setLayout( layout );\n\n    lsGet = e -> getFields();\n\n    Button wGetFields = new Button( wComp, SWT.PUSH );\n    wGetFields.setText( BaseMessages.getString( PKG, \"ParquetOutputDialog.Fields.Get\" ) );\n    props.setLook( wGetFields );\n    new FD( wGetFields ).bottom( 100, 0 ).right( 100, 0 ).apply();\n\n    wGetFields.addListener( SWT.Selection, lsGet );\n\n    String[] typeNames = new String[ SUPPORTED_PARQUET_TYPES.length ];\n    for ( int i = 0; i < typeNames.length; i++ ) {\n      typeNames[ i ] = SUPPORTED_PARQUET_TYPES[ i ].getName();\n    }\n    ColumnInfo[] parameterColumns = new ColumnInfo[] {\n      new ColumnInfo( BaseMessages.getString( PKG, \"ParquetOutputDialog.Fields.column.Path\" ),\n        ColumnInfo.COLUMN_TYPE_TEXT, false, false ),\n      new ColumnInfo( BaseMessages.getString( PKG, \"ParquetOutputDialog.Fields.column.Name\" ),\n        ColumnInfo.COLUMN_TYPE_TEXT, false, false ),\n      new ColumnInfo( BaseMessages.getString( PKG, \"ParquetOutputDialog.Fields.column.Type\" ),\n        ColumnInfo.COLUMN_TYPE_CCOMBO, typeNames, false ),\n      new ColumnInfo( BaseMessages.getString( PKG, \"ParquetOutputDialog.Fields.column.Precision\" ),\n        ColumnInfo.COLUMN_TYPE_TEXT, false, false ),\n      new ColumnInfo( BaseMessages.getString( PKG, \"ParquetOutputDialog.Fields.column.Scale\" ),\n        ColumnInfo.COLUMN_TYPE_TEXT, false, false ),\n      new ColumnInfo( BaseMessages.getString( PKG, \"ParquetOutputDialog.Fields.column.Default\" ),\n      
  ColumnInfo.COLUMN_TYPE_TEXT, false, false ),\n      new ColumnInfo( BaseMessages.getString( PKG, \"ParquetOutputDialog.Fields.column.Null\" ),\n        ColumnInfo.COLUMN_TYPE_CCOMBO, NullableValuesEnum.getValuesArr(), true ) };\n    parameterColumns[ 0 ].setAutoResize( false );\n    parameterColumns[ 1 ].setUsingVariables( true );\n    wOutputFields =\n      new TableView( transMeta, wComp, SWT.FULL_SELECTION | SWT.SINGLE | SWT.BORDER | SWT.NO_SCROLL | SWT.V_SCROLL,\n        parameterColumns, 7, lsMod, props );\n    ColumnsResizer resizer = new ColumnsResizer( 0, 30, 20, 10, 10, 10, 15, 5 );\n    wOutputFields.getTable().addListener( SWT.Resize, resizer );\n\n\n    props.setLook( wOutputFields );\n    new FD( wOutputFields ).left( 0, 0 ).right( 100, 0 ).top( wComp, 0 ).bottom( wGetFields, -FIELDS_SEP ).apply();\n\n    wOutputFields.setRowNums();\n    wOutputFields.optWidth( true );\n\n    new FD( wComp ).left( 0, 0 ).top( 0, 0 ).right( 100, 0 ).bottom( 100, 0 ).apply();\n\n    wTab.setControl( wComp );\n    for ( ColumnInfo col : parameterColumns ) {\n      col.setAutoResize( false );\n    }\n    resizer.addColumnResizeListeners( wOutputFields.getTable() );\n    setTruncatedColumn( wOutputFields.getTable(), 1 );\n    if ( !Const.isWindows() ) {\n      addColumnTooltip( wOutputFields.getTable(), 1 );\n    }\n  }\n\n  private void addOptionsTab( CTabFolder wTabFolder ) {\n    CTabItem wTab = new CTabItem( wTabFolder, SWT.NONE );\n    wTab.setText( BaseMessages.getString( PKG, \"ParquetOutputDialog.Options.TabTitle\" ) );\n    Composite wComp = new Composite( wTabFolder, SWT.NONE );\n    wTab.setControl( wComp );\n    props.setLook( wComp );\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginHeight = formLayout.marginWidth = MARGIN;\n    wComp.setLayout( formLayout );\n\n    Label lCompression = createLabel( wComp, \"ParquetOutputDialog.Options.Compression\", props );\n    new FD( lCompression ).left( 0, 0 ).top( wComp, 0 ).apply();\n    
wCompression = createComboVar( transMeta, lsMod, wComp, meta.getCompressionTypes() );\n    new FD( wCompression ).left( 0, 0 ).top( lCompression, FIELD_LABEL_SEP )\n      .width( PARQUET_OUTPUT_FIELD_TINY + VAR_EXTRA_WIDTH ).apply();\n\n    Label lVersion = createLabel( wComp, \"ParquetOutputDialog.Options.Version\", props );\n    new FD( lVersion ).left( 0, 0 ).top( wCompression, FIELDS_SEP ).apply();\n    wVersion = createComboVar( transMeta, lsMod, wComp, meta.getVersionTypes() );\n    new FD( wVersion ).left( 0, 0 ).top( lVersion, FIELD_LABEL_SEP )\n      .width( PARQUET_OUTPUT_FIELD_TINY + VAR_EXTRA_WIDTH ).apply();\n\n    Label lRowSize = createLabel( wComp, \"ParquetOutputDialog.Options.RowSize\", props );\n    new FD( lRowSize ).left( 0, 0 ).top( wVersion, FIELDS_SEP ).apply();\n    wRowSize = new TextVar( transMeta, wComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    new FD( wRowSize ).left( 0, 0 ).top( lRowSize, FIELD_LABEL_SEP )\n      .width( PARQUET_OUTPUT_FIELD_TINY + VAR_EXTRA_WIDTH ).apply();\n    setIntegerOnly( wRowSize );\n    wRowSize.addModifyListener( lsMod );\n\n    Label lDataPageSize = createLabel( wComp, \"ParquetOutputDialog.Options.PageSize\", props );\n    new FD( lDataPageSize ).left( 0, 0 ).top( wRowSize, FIELDS_SEP ).apply();\n    wPageSize = new TextVar( transMeta, wComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    new FD( wPageSize ).left( 0, 0 ).top( lDataPageSize, FIELD_LABEL_SEP )\n      .width( PARQUET_OUTPUT_FIELD_TINY + VAR_EXTRA_WIDTH ).apply();\n    setIntegerOnly( wPageSize );\n    wPageSize.addModifyListener( lsMod );\n\n    wDictionaryEncoding = new Button( wComp, SWT.CHECK );\n    wDictionaryEncoding.setText( BaseMessages.getString( PKG, \"ParquetOutputDialog.Options.DictionaryEncoding\" ) );\n    props.setLook( wDictionaryEncoding );\n    new FD( wDictionaryEncoding ).left( 0, 0 ).top( wPageSize, FIELDS_SEP ).apply();\n\n    wDictionaryEncoding.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public 
void widgetSelected( SelectionEvent e ) {\n        meta.setChanged();\n        actualizeDictionaryPageSizeControl();\n      }\n    } );\n\n    lDict = new Label( wComp, SWT.NONE );\n    lDict.setText( BaseMessages.getString( PKG, \"ParquetOutputDialog.Options.DictPageSize\" ) );\n    new FD( lDict ).left( 0, OFFSET ).top( wDictionaryEncoding, FIELD_LABEL_SEP ).apply();\n    wDictPageSize = new TextVar( transMeta, wComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    new FD( wDictPageSize ).left( 0, OFFSET ).top( lDict, FIELD_LABEL_SEP )\n      .width( PARQUET_OUTPUT_FIELD_TINY + VAR_EXTRA_WIDTH - OFFSET ).apply();\n    setIntegerOnly( wDictPageSize );\n    wDictPageSize.addModifyListener( lsMod );\n\n    Control leftRef = wCompression;\n    // 2nd column\n    Label lExtension = new Label( wComp, SWT.NONE );\n    lExtension.setText( BaseMessages.getString( PKG, \"ParquetOutputDialog.Options.Extension\" ) );\n    new FD( lExtension ).left( leftRef, COLUMNS_SEP ).top( wComp, 0 ).apply();\n    wExtension = new TextVar( transMeta, wComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    new FD( wExtension ).left( leftRef, COLUMNS_SEP ).top( lExtension, FIELD_LABEL_SEP )\n      .width( PARQUET_OUTPUT_FIELD_TINY + VAR_EXTRA_WIDTH ).apply();\n    wExtension.addModifyListener( lsMod );\n\n    wIncludeDateInFilename = new Button( wComp, SWT.CHECK );\n    wIncludeDateInFilename\n      .setText( BaseMessages.getString( PKG, \"ParquetOutputDialog.Options.IncludeDateInFilename\" ) );\n    props.setLook( wIncludeDateInFilename );\n    new FD( wIncludeDateInFilename ).left( leftRef, COLUMNS_SEP ).top( wExtension, MARGIN ).apply();\n    wIncludeDateInFilename.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        meta.setChanged();\n      }\n    } );\n\n    wIncludeTimeInFilename = new Button( wComp, SWT.CHECK );\n    wIncludeTimeInFilename\n      .setText( BaseMessages.getString( PKG, 
\"ParquetOutputDialog.Options.IncludeTimeInFilename\" ) );\n    props.setLook( wIncludeTimeInFilename );\n    new FD( wIncludeTimeInFilename ).left( leftRef, COLUMNS_SEP ).top( wIncludeDateInFilename, FIELDS_SEP ).apply();\n    wIncludeTimeInFilename.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        meta.setChanged();\n      }\n    } );\n\n    wSpecifyDateTimeFormat = new Button( wComp, SWT.CHECK );\n    wSpecifyDateTimeFormat\n      .setText( BaseMessages.getString( PKG, \"ParquetOutputDialog.Options.SpecifyDateTimeFormat\" ) );\n    props.setLook( wSpecifyDateTimeFormat );\n    new FD( wSpecifyDateTimeFormat ).left( leftRef, COLUMNS_SEP ).top( wIncludeTimeInFilename, FIELDS_SEP ).apply();\n\n    wSpecifyDateTimeFormat.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        meta.setChanged();\n        wDateTimeFormat.setEnabled( wSpecifyDateTimeFormat.getSelection() );\n        actualizeDateTimeControls();\n      }\n    } );\n\n    String[] dates = Const.getDateFormats();\n    dates =\n      Arrays.stream( dates ).filter( d -> d.indexOf( '/' ) < 0 && d.indexOf( '\\\\' ) < 0 && d.indexOf( ':' ) < 0 )\n        .toArray( String[]::new ); // remove formats with slashes and colons\n    wDateTimeFormat = createComboVar( transMeta, lsMod, wComp, dates );\n    props.setLook( wDateTimeFormat );\n    new FD( wDateTimeFormat ).left( leftRef, COLUMNS_SEP + OFFSET ).top( wSpecifyDateTimeFormat, FIELD_LABEL_SEP )\n      .width( 200 ).apply();\n    wDateTimeFormat.addModifyListener( lsMod );\n\n\n  }\n\n  void actualizeDictionaryPageSizeControl() {\n    boolean dictionaryEncoding = wDictionaryEncoding.getSelection();\n    lDict.setEnabled( dictionaryEncoding );\n    wDictPageSize.setEnabled( dictionaryEncoding );\n  }\n\n  void actualizeDateTimeControls() {\n    boolean allowedToIncludeDateTime = 
!wSpecifyDateTimeFormat.getSelection();\n    wIncludeDateInFilename.setEnabled( allowedToIncludeDateTime );\n    wIncludeTimeInFilename.setEnabled( allowedToIncludeDateTime );\n    if ( !allowedToIncludeDateTime ) {\n      wIncludeDateInFilename.setSelection( false );\n      wIncludeTimeInFilename.setSelection( false );\n    }\n  }\n\n  protected String getComboVarValue( ComboVar combo ) {\n    String text = combo.getText();\n    String data = (String) combo.getData( text );\n    return data != null ? data : text;\n  }\n\n  /**\n   * Read the data from the meta object and show it in this dialog.\n   */\n  protected void getData( ParquetOutputMeta meta ) {\n    if ( meta.getFilename() != null ) {\n      wPath.setText( meta.getFilename() );\n    }\n    wOverwriteExistingFile.setSelection( meta.isOverrideOutput() );\n    populateFieldsUI( meta, wOutputFields );\n    wCompression.setText( coalesce( meta.getCompressionType() ) );\n    wVersion.setText( coalesce( meta.getParquetVersion() ) );\n    wDictionaryEncoding.setSelection( meta.isEnableDictionary() );\n\n    wDictPageSize.setText( coalesce( meta.getDictPageSize() ) );\n    wRowSize.setText( coalesce( meta.getRowGroupSize() ) );\n    wPageSize.setText( coalesce( meta.getDataPageSize() ) );\n    wExtension.setText( coalesce( meta.getExtension() ) );\n    wIncludeDateInFilename.setSelection( meta.isDateInFilename() );\n    wIncludeTimeInFilename.setSelection( meta.isTimeInFilename() );\n\n    String dateTimeFormat = coalesce( meta.getDateTimeFormat() );\n    if ( !dateTimeFormat.isEmpty() ) {\n      wSpecifyDateTimeFormat.setSelection( true );\n      wDateTimeFormat.setText( dateTimeFormat );\n    } else {\n      wSpecifyDateTimeFormat.setSelection( false );\n      wDateTimeFormat.setEnabled( false );\n    }\n\n    actualizeDictionaryPageSizeControl();\n    actualizeDateTimeControls();\n  }\n\n  public static String coalesce( String value ) {\n    return value == null ? 
\"\" : value;\n  }\n\n  // ui -> meta\n  @Override\n  protected void getInfo( ParquetOutputMeta meta, boolean preview ) {\n    meta.setFilename( wPath.getText() );\n    meta.setOverrideOutput( wOverwriteExistingFile.getSelection() );\n    saveOutputFields( wOutputFields, meta );\n    meta.setCompressionType( wCompression.getText() );\n    meta.setParquetVersion( wVersion.getText() );\n    meta.setEnableDictionary( wDictionaryEncoding.getSelection() );\n    meta.setDictPageSize( wDictPageSize.getText() );\n    meta.setRowGroupSize( wRowSize.getText() );\n    meta.setDataPageSize( wPageSize.getText() );\n    meta.setExtension( wExtension.getText() );\n    if ( wSpecifyDateTimeFormat.getSelection() ) {\n      meta.setDateTimeFormat( wDateTimeFormat.getText() );\n      meta.setDateInFilename( false );\n      meta.setTimeInFilename( false );\n    } else {\n      meta.setDateTimeFormat( null );\n      meta.setDateInFilename( wIncludeDateInFilename.getSelection() );\n      meta.setTimeInFilename( wIncludeTimeInFilename.getSelection() );\n    }\n  }\n\n  private void saveOutputFields( TableView wFields, ParquetOutputMeta meta ) {\n    int nrFields = wFields.nrNonEmpty();\n\n    List<ParquetOutputField> outputFields = new ArrayList<>();\n    for ( int i = 0; i < nrFields; i++ ) {\n      TableItem item = wFields.getNonEmpty( i );\n\n      int j = 1;\n      ParquetOutputField field = new ParquetOutputField();\n      field.setFormatFieldName( item.getText( j++ ) );\n      field.setPentahoFieldName( item.getText( j++ ) );\n      String typeName = item.getText( j++ );\n      for ( ParquetSpec.DataType parqueType : SUPPORTED_PARQUET_TYPES ) {\n        if ( parqueType.getName().equals( typeName ) ) {\n          field.setFormatType( parqueType.getId() );\n        }\n      }\n\n      if ( field.getParquetType().equals( ParquetSpec.DataType.DECIMAL ) ) {\n        field.setPrecision( item.getText( j++ ) );\n        field.setScale( item.getText( j++ ) );\n      } else if ( 
field.getParquetType().equals( ParquetSpec.DataType.FLOAT ) || field.getParquetType()\n        .equals( ParquetSpec.DataType.DOUBLE ) ) {\n        j++;\n        field.setScale( item.getText( j++ ) );\n      } else {\n        j += 2;\n      }\n\n      field.setDefaultValue( item.getText( j++ ) );\n      field.setAllowNull( NullableValuesEnum.YES.getValue().equals( item.getText( j ) ) );\n      outputFields.add( field );\n    }\n    meta.setOutputFields( outputFields );\n  }\n\n  private void populateFieldsUI( ParquetOutputMeta meta, TableView wOutputFields ) {\n    populateFieldsUI( meta.getOutputFields(), wOutputFields, ( field, item ) -> {\n      int i = 1;\n      item.setText( i++, coalesce( field.getFormatFieldName() ) );\n      item.setText( i++, coalesce( field.getPentahoFieldName() ) );\n      item.setText( i++, coalesce( field.getParquetType().getName() ) );\n\n      if ( field.getParquetType().equals( ParquetSpec.DataType.DECIMAL ) ) {\n        item.setText( i++, coalesce( String.valueOf( field.getPrecision() ) ) );\n        item.setText( i++, coalesce( String.valueOf( field.getScale() ) ) );\n      } else if ( field.getParquetType().equals( ParquetSpec.DataType.FLOAT ) || field.getParquetType()\n        .equals( ParquetSpec.DataType.DOUBLE ) ) {\n        item.setText( i++, \"\" );\n        item.setText( i++, field.getScale() > 0 ? String.valueOf( field.getScale() ) : \"\" );\n      } else {\n        item.setText( i++, \"\" );\n        item.setText( i++, \"\" );\n      }\n\n      item.setText( i++, coalesce( field.getDefaultValue() ) );\n      item.setText( i++, field.getAllowNull() ? 
NullableValuesEnum.YES.getValue() : NullableValuesEnum.NO.getValue() );\n    } );\n  }\n\n  private void populateFieldsUI( List<ParquetOutputField> fields, TableView wFields,\n                                 BiConsumer<ParquetOutputField, TableItem> converter ) {\n    for ( int i = 0; i < fields.size(); i++ ) {\n      TableItem item = null;\n      if ( i < wFields.table.getItemCount() ) {\n        item = wFields.table.getItem( i );\n      } else {\n        item = new TableItem( wFields.table, SWT.NONE );\n      }\n      converter.accept( fields.get( i ), item );\n    }\n  }\n\n  private void getFieldsFromPreviousStep( RowMetaInterface row, TableView tableView, int keyColumn,\n                                          int[] nameColumn, int[] dataTypeColumn, int lengthColumn,\n                                          int precisionColumn, boolean optimizeWidth,\n                                          TableItemInsertListener listener ) {\n    if ( row == null || row.size() == 0 ) {\n      return; // nothing to do\n    }\n\n    Table table = tableView.table;\n\n    // get a list of all the non-empty keys (names)\n    //\n    List<String> keys = new ArrayList<>();\n    for ( int i = 0; i < table.getItemCount(); i++ ) {\n      TableItem tableItem = table.getItem( i );\n      String key = tableItem.getText( keyColumn );\n      if ( !Utils.isEmpty( key ) && keys.indexOf( key ) < 0 ) {\n        keys.add( key );\n      }\n    }\n\n    int choice = 0;\n\n    if ( !keys.isEmpty() ) {\n      // Ask what we should do with the existing data in the step.\n      //\n      MessageDialog getFieldsChoiceDialog = getFieldsChoiceDialog( tableView.getShell(), row.size() );\n\n      int idx = getFieldsChoiceDialog.open();\n      choice = idx & 0xFF;\n    }\n\n    if ( choice == 3 || choice == 255 ) {\n      return; // Cancel clicked\n    }\n\n    if ( choice == 2 ) {\n      tableView.clearAll( false );\n    }\n\n    for ( int i = 0; i < row.size(); i++ ) {\n      ValueMetaInterface v 
= row.getValueMeta( i );\n\n      boolean add = true;\n\n      // hang on, see if it's not yet in the table view\n      if ( choice == 0 && keys.indexOf( v.getName() ) >= 0 ) {\n        add = false;\n      }\n\n      if ( add ) {\n        TableItem tableItem = new TableItem( table, SWT.NONE );\n\n        for ( int c = 0; c < nameColumn.length; c++ ) {\n          tableItem.setText( nameColumn[ c ], Const.NVL( v.getName(), \"\" ) );\n        }\n\n        String parquetTypeName = ParquetTypeConverter.convertToParquetType( v.getType() );\n        if ( dataTypeColumn != null ) {\n          for ( int c = 0; c < dataTypeColumn.length; c++ ) {\n            tableItem.setText( dataTypeColumn[ c ], parquetTypeName );\n          }\n        }\n\n        if ( parquetTypeName.equals( ParquetSpec.DataType.DECIMAL.getName() ) ) {\n          if ( lengthColumn > 0 && v.getLength() > 0 ) {\n            tableItem.setText( lengthColumn, Integer.toString( v.getLength() ) );\n          } else {\n            // Set the default precision\n            tableItem.setText( lengthColumn, Integer.toString( ParquetSpec.DEFAULT_DECIMAL_PRECISION ) );\n          }\n\n          if ( precisionColumn > 0 && v.getPrecision() >= 0 ) {\n            tableItem.setText( precisionColumn, Integer.toString( v.getPrecision() ) );\n          } else {\n            // Set the default scale\n            tableItem.setText( precisionColumn, Integer.toString( ParquetSpec.DEFAULT_DECIMAL_SCALE ) );\n          }\n        } else if ( ( parquetTypeName.equals( ParquetSpec.DataType.FLOAT.getName() )\n          || parquetTypeName.equals( ParquetSpec.DataType.DOUBLE.getName() ) )\n          && precisionColumn > 0 && v.getPrecision() > 0 ) {\n          // Parenthesize the type check so the precision guard applies to FLOAT as well as DOUBLE\n          tableItem.setText( precisionColumn, Integer.toString( v.getPrecision() ) );\n        }\n\n        if ( listener != null && !listener.tableItemInserted( tableItem, v ) ) {\n          tableItem.dispose(); // remove it again\n        }\n      }\n    }\n    
tableView.removeEmptyRows();\n    tableView.setRowNums();\n    if ( optimizeWidth ) {\n      tableView.optWidth( true );\n    }\n  }\n\n  protected void getFields() {\n    try {\n      RowMetaInterface r = transMeta.getPrevStepFields( stepname );\n      if ( r != null ) {\n        TableItemInsertListener listener = ( tableItem, v ) -> true;\n        getFieldsFromPreviousStep( r, wOutputFields, 1, new int[] { 1, 2 }, new int[] { 3 }, 4, 5, true, listener );\n\n        // fix empty null fields to nullable\n        for ( int i = 0; i < wOutputFields.table.getItemCount(); i++ ) {\n          TableItem tableItem = wOutputFields.table.getItem( i );\n          if ( StringUtils.isEmpty( tableItem.getText( 7 ) ) ) {\n            tableItem.setText( 7, \"Yes\" );\n          }\n        }\n\n        meta.setChanged();\n      }\n    } catch ( KettleException ke ) {\n      new ErrorDialog( shell, BaseMessages.getString( PKG, \"System.Dialog.GetFieldsFailed.Title\" ), BaseMessages\n        .getString( PKG, \"System.Dialog.GetFieldsFailed.Message\" ), ke );\n    }\n  }\n\n  @Override\n  protected int getWidth() {\n    return SHELL_WIDTH;\n  }\n\n  @Override\n  protected int getHeight() {\n    return SHELL_HEIGHT;\n  }\n\n  @Override\n  protected Listener getPreview() {\n    // no preview\n    return null;\n  }\n\n  @Override protected SelectionOperation selectionOperation() {\n    return SelectionOperation.SAVE_TO_FILE_FOLDER;\n  }\n}\n\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/java/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/output/ParquetOutputMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.parquet.output;\n\nimport org.pentaho.big.data.kettle.plugins.formats.impl.NamedClusterResolver;\nimport org.pentaho.big.data.kettle.plugins.formats.parquet.output.ParquetOutputMetaBase;\nimport org.pentaho.di.core.annotations.Step;\nimport org.pentaho.di.core.injection.InjectionSupported;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\n\n@Step( id = \"ParquetOutput\", image = \"PO.svg\", name = \"ParquetOutput.Name\", description = \"ParquetOutput.Description\",\n  categoryDescription = \"i18n:org.pentaho.di.trans.step:BaseStep.Category.BigData\",\n  i18nPackageName = \"org.pentaho.di.trans.steps.parquet\" )\n@InjectionSupported( localizationPrefix = \"ParquetOutput.Injection.\", groups = { \"FILENAME_LINES\", \"FIELDS\" }, hide = {\n  \"FIELD_POSITION\", \"FIELD_LENGTH\", \"FIELD_IGNORE\", \"FIELD_FORMAT\", \"FIELD_PRECISION\", \"FIELD_CURRENCY\",\n  \"FIELD_DECIMAL\", \"FIELD_GROUP\", \"FIELD_REPEAT\", \"FIELD_TRIM_TYPE\", \"FIELD_NULL_STRING\"\n} )\npublic class ParquetOutputMeta extends ParquetOutputMetaBase {\n\n  private final NamedClusterResolver namedClusterResolver;\n\n  public ParquetOutputMeta() {\n    this( NamedClusterResolver.getInstance() );\n  }\n\n  public ParquetOutputMeta( NamedClusterResolver namedClusterResolver ) {\n    this.namedClusterResolver = namedClusterResolver;\n  }\n\n  @Override\n  public 
StepInterface getStep( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta,\n                                Trans trans ) {\n    return new ParquetOutput( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n  }\n\n  @Override\n  public StepDataInterface getStepData() {\n    return new ParquetOutputData();\n  }\n\n  public NamedClusterResolver getNamedClusterResolver() {\n    return namedClusterResolver;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/resources/OSGI-INF/blueprint/blueprint.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\n           xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n           xmlns:pen=\"http://www.pentaho.com/xml/schemas/pentaho-blueprint\"\n           xsi:schemaLocation=\"\n            http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\n            http://www.pentaho.com/xml/schemas/pentaho-blueprint http://www.pentaho.com/xml/schemas/pentaho-blueprint.xsd\">\n  <bean id=\"namedClusterResolver\" class=\"org.pentaho.big.data.kettle.plugins.formats.impl.NamedClusterResolver\" scope=\"singleton\">\n    <argument ref=\"namedClusterServiceLocator\"/>\n    <argument ref=\"namedClusterService\"/>\n  </bean>\n  <bean id=\"parquetOutputMeta\" class=\"org.pentaho.big.data.kettle.plugins.formats.impl.parquet.output.ParquetOutputMeta\" scope=\"prototype\">\n    <argument ref=\"namedClusterResolver\"/>\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.StepPluginType\"/>\n  </bean>\n  <bean id=\"parquetInputMeta\" class=\"org.pentaho.big.data.kettle.plugins.formats.impl.parquet.input.ParquetInputMeta\" scope=\"prototype\">\n    <argument ref=\"namedClusterResolver\"/>\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.StepPluginType\"/>\n  </bean>\n  <bean id=\"orcInputMeta\" class=\"org.pentaho.big.data.kettle.plugins.formats.impl.orc.input.OrcInputMeta\" scope=\"prototype\">\n    <argument ref=\"namedClusterResolver\"/>\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.StepPluginType\"/>\n  </bean>\n  <bean id=\"orcOutputMeta\" class=\"org.pentaho.big.data.kettle.plugins.formats.impl.orc.output.OrcOutputMeta\" scope=\"prototype\">\n    <argument ref=\"namedClusterResolver\"/>\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.StepPluginType\"/>\n  </bean>\n\n  <reference id=\"namedClusterService\" interface=\"org.pentaho.hadoop.shim.api.cluster.NamedClusterService\"/>\n  
<reference id=\"namedClusterServiceLocator\" interface=\"org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator\"/>\n  <reference id=\"runtimeTester\" interface=\"org.pentaho.runtime.test.RuntimeTester\"/>\n  <reference id=\"runtimeTestActionService\" interface=\"org.pentaho.runtime.test.action.RuntimeTestActionService\"/>\n\n</blueprint>"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/resources/org/pentaho/big/data/kettle/plugins/formats/impl/orc/input/messages/messages_en_US.properties",
    "content": "OrcInput.Name=ORC input\nOrcInput.Description=Reads data from ORC file\n\nOrcInputDialog.StepName.Label=Step name\nOrcInputDialog.Shell.Title=ORC input\n\nOrcInputDialog.Fields.Label=Fields\nOrcInputDialog.Fields.column.Name=Name\nOrcInputDialog.Fields.column.Path=ORC path (ORC type)\nOrcInputDialog.Fields.column.Type=Type\nOrcInputDialog.Fields.column.SourceType=Source Type\nOrcInputDialog.Fields.column.Format=Format\nOrcInputDialog.Fields.Get=Get fields\n\nOrcInputDialog.PassThruFields.Tooltip=Enable this if you have other fields in the previous step\\nand you want those fields to appear in every record\nOrcInputDialog.PassThruFields.Label=Pass through fields from previous step\n\nOrcInputDialog.FileBrowser.KettleFileException=Kettle File Exception\nOrcInputDialog.FileBrowser.FileSystemException=File System Exception\n\nOrcInputDialog.PreviewSize.DialogTitle=Preview size\nOrcInputDialog.PreviewSize.DialogMessage=Enter the number of rows to preview\n\nOrcInput.Error.UnableToLoadSchemaFromContainerFile=Unable to find schema\n\nOrcInput.Injection.FILENAME=The name of the ORC file to use as input.\nOrcInput.Injection.FIELD_NAME=The name of the field to output to the Kettle stream.\nOrcInput.Injection.FIELD_PATH=The column name in the ORC file.\nOrcInput.Injection.FIELD_TYPE=The Kettle field type.\nOrcInput.Injection.ORC_TYPE=The ORC type for the field.\nOrcInput.Injection.FIELD_IF_NULL=Specify whether the incoming field will contain null values. If no, then the default value will be used.\nOrcInput.Injection.FIELD_NULL_STRING=This option will skip errors when specified paths or fields are not present in the active ORC schema.\nOrcInput.Injection.FIELDS=Fields.\nOrcInput.Injection.FILENAME_LINES=The list of file definitions.\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/resources/org/pentaho/big/data/kettle/plugins/formats/impl/orc/messages/messages_en_US.properties",
    "content": "BaseStepDialog.StepName=Step name:\nBaseStepDialog.Preview=Preview\n\nOrcDialog.Location.Label=Location:\nOrcDialog.Filename.Label=Folder/File name:\n\n#ToDo\nOrcDialog.FileBrowser.KettleFileException=\nOrcDialog.FileBrowser.FileSystemException=\nOrcDialog.SchemaFileBrowser.KettleFileException=\nOrcDialog.SchemaFileBrowser.FileSystemException=\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/resources/org/pentaho/big/data/kettle/plugins/formats/impl/orc/output/messages/messages_en_US.properties",
    "content": "OrcOutput.Name=ORC output\nOrcOutput.Description=Writes data to an Orc file according to a mapping\n\nOrcOutputDialog.Shell.Title=ORC output\nOrcOutputDialog.OverwriteFile.Label=Overwrite existing output file\n\nOrcOutputDialog.FieldsTab.TabTitle=Fields\nOrcOutputDialog.Fields.column.Name=Name\nOrcOutputDialog.Fields.column.Path=ORC path\nOrcOutputDialog.Fields.column.Type=ORC type\nOrcOutputDialog.Fields.column.Precision=Precision\nOrcOutputDialog.Fields.column.Scale=Scale\nOrcOutputDialog.Fields.column.Default=Default value\nOrcOutputDialog.Fields.column.Null=Null\nOrcOutputDialog.Fields.Get=Get fields\nOrcOutputDialog.Options.TabTitle=Options\nOrcOutputDialog.Options.Compression=Compression:\nOrcOutputDialog.Options.StripeSize=Stripe size (MB):\nOrcOutputDialog.Options.CompressSize=Compress size (KB):\nOrcOutputDialog.Options.InlineIndexes=Inline Indexes\nOrcOutputDialog.Options.RowsBetweenEntries=Rows between entries:\nOrcOutputDialog.Options.DateInFileName=Include date in file name\nOrcOutputDialog.Options.TimeInFileName=Include time in file name\nOrcOutputDialog.Options.SpecifyDateTimeFormat=Specify date time format\n\nOrcOutputDialog.AddNew=Add &new\nOrcOutputDialog.Add=Add &all\nOrcOutputDialog.ClearAndAdd=C&lear and add all\nOrcOutputDialog.Cancel=&Cancel\nOrcOutputDialog.GetFieldsChoice.Title=Question\nOrcOutputDialog.GetFieldsChoice.Message=There already is data entered, {0} lines were found.\\nHow do you want to add the {1} field that were found?\nOrcOutput.CompressionType.NONE=None\n\nOrcOutput.MissingDefaultFields.Title=Missing values\nOrcOutput.MissingDefaultFields.Msg=One or more fields are missing a value. 
Please set a default value for all fields that have ''Null'' set to ''No''.\n\nOrcOutput.Injection.OPTIONS_COMPRESSION=This option will let you specify the type of compression to use on the file output.\nOrcOutput.Injection.OPTIONS_STRIPE_SIZE=This option defines the file stripe size.\nOrcOutput.Injection.OPTIONS_COMPRESS_SIZE=This option defines the file compression size.\nOrcOutput.Injection.OPTIONS_ROWS_BETWEEN_ENTRIES=This option defines the number of rows between entries.\nOrcOutput.Injection.OPTIONS_DATE_IN_FILE_NAME=This defines whether to include the current date in the output file/directory name.\nOrcOutput.Injection.OPTIONS_TIME_IN_FILE_NAME=This defines whether to include the current time in the output file/directory name.\nOrcOutput.Injection.OPTIONS_DATE_FORMAT=This option defines the output date format.\nOrcOutput.Injection.OVERRIDE_OUTPUT=Enable this option to overwrite the existing output file(s).\nOrcOutput.Injection.FILENAME=The name of the folder/file to write to.\nOrcOutput.Injection.FIELD_PATH=The path to the field in the ORC file.\nOrcOutput.Injection.FIELD_NAME=The name of the output field.\nOrcOutput.Injection.FIELD_TYPE=The Kettle field type.\nOrcOutput.Injection.FIELD_IF_NULL=The default value to use in case the incoming field value is null.\nOrcOutput.Injection.FIELD_NULLABLE=Specify whether the incoming field will contain null values. If no, then the default value will be used.\nOrcOutput.Injection.FIELD_NULL_STRING=Deprecated: Replaced by FIELD_NULLABLE.\nOrcOutput.Injection.FIELD_DECIMAL_PRECISION=Maximum number of digits allowed in the number. (only applies to numbers stored as decimal type)\nOrcOutput.Injection.FIELD_DECIMAL_SCALE=Maximum number of digits after the decimal point. (only applies to numbers stored as decimal type)\nOrcOutput.Injection.FIELD_POSITION=Position\nOrcOutput.Injection.FIELD_LENGTH=Length\nOrcOutput.Injection.FIELD_IGNORE=Ignore? 
(Y/N)\nOrcOutput.Injection.FIELD_FORMAT=Format\nOrcOutput.Injection.FIELD_PRECISION=Precision\nOrcOutput.Injection.FIELD_CURRENCY=Currency symbol\nOrcOutput.Injection.FIELD_DECIMAL=Decimal symbol\nOrcOutput.Injection.FIELD_GROUP=Grouping symbol\nOrcOutput.Injection.FIELD_REPEAT=Repeat values? (Y/N)\nOrcOutput.Injection.FIELD_TRIM_TYPE=Trim Type\nOrcOutput.Injection.FIELDS=\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/resources/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/input/messages/messages_en_US.properties",
    "content": "ParquetInput.Name=Parquet input\nParquetInput.Description=Reads data from a Parquet file.\n\nParquetInputDialog.Shell.Title=Parquet input\nParquetInputDialog.StepName.Label=Step name\n\nParquetInputDialog.FileTab.TabTitle=Source\nParquetInputDialog.FieldsTab.TabTitle=Input\n\nParquetInputDialog.PassThruFields.Tooltip=Enable this if you have other fields in the previous step\\nand you want those fields to appear in every record\nParquetInputDialog.PassThruFields.Label=Pass through fields from previous step\nParquetInputDialog.IgnoreEmptyFolder.Tooltip=Enable this if you wish transformation to keep running even if the target folder is empty.\nParquetInputDialog.IgnoreEmptyFolder.Label=Ignore empty folder\nParquetInputDialog.Fields.Label=Fields:\nParquetInputDialog.Fields.Get=Get Fields\n\nParquetInputDialog.Fields.column.Name=Name\nParquetInputDialog.Fields.column.Path=Path\nParquetInputDialog.Fields.column.Type=Type\nParquetInputDialog.Fields.column.Format=Format\n\nParquetInputDialog.FileBrowser.KettleFileException=Kettle File Exception\nParquetInputDialog.FileBrowser.FileSystemException=File System Exception\n\nParquetInputDialog.PreviewSize.DialogTitle=Enter preview size\nParquetInputDialog.PreviewSize.DialogMessage=Enter the number of rows you would like to preview\\:\n\nParquetInput.Injection.FILENAME_LINES=The list of file definitions.\nParquetInput.Injection.FILENAME=The name of the folder/file where the Parquet data comes from.\nParquetInput.Injection.FIELDS=Fields.\nParquetInput.Injection.FIELD_PATH=The path to the field in the Parquet file.\nParquetInput.Injection.FIELD_NAME=The name of the field to output to the Kettle stream.\nParquetInput.Injection.FIELD_TYPE=The Kettle field type.\nParquetInput.Injection.IGNORE_EMPTY_FOLDER=Enable this if you wish transformation to keep running even if the target folder is empty.\nParquetInput.Injection.PARQUET_TYPE=The Parquet type for the field.\n\nParquetInput.GetFieldsChoice.Title=New fields were 
found\nParquetInput.GetFieldsChoice.Message=We found {0} new fields. What would you like to do with the new fields?\nParquetInput.GetFieldsChoice.AddNew=Add &new fields\nParquetInput.GetFieldsChoice.Add=Add &all fields\nParquetInput.GetFieldsChoice.ClearAndAdd=C&lear and add all\nParquetInput.GetFieldsChoice.Cancel=&Cancel\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/resources/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/messages/messages_en_US.properties",
    "content": "BaseStepDialog.StepName=Step name:\nBaseStepDialog.Preview=Preview\n\nParquetDialog.Location.Label=Location:\nParquetDialog.Filename.Label=Folder/File name:\n\n#ToDo\nParquetDialog.FileBrowser.KettleFileException=\nParquetDialog.FileBrowser.FileSystemException=\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/main/resources/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/output/messages/messages_en_US.properties",
    "content": "ParquetOutput.Name=Parquet output\nParquetOutput.Description=Writes data to a Parquet file according to a mapping.\n\nParquetOutputDialog.OverwriteFile.Label=Overwrite existing output file\n\nParquetOutputDialog.Shell.Title=Parquet output\n\nParquetOutputDialog.FieldsTab.TabTitle=Fields\nParquetOutputDialog.Fields.column.Name=Name\nParquetOutputDialog.Fields.column.Path=Parquet Path\nParquetOutputDialog.Fields.column.Type=Parquet Type\nParquetOutputDialog.Fields.column.Default=Default value\nParquetOutputDialog.Fields.column.Null=Null\nParquetOutputDialog.Fields.column.Precision=Precision\nParquetOutputDialog.Fields.column.Scale=Scale\nParquetOutputDialog.Fields.Get=Get Fields\n\nParquetOutputDialog.Options.TabTitle=Options\nParquetOutputDialog.Options.Compression=Compression:\nParquetOutputDialog.Options.Version=Version:\nParquetOutputDialog.Options.RowSize=Row group size (MB):\nParquetOutputDialog.Options.PageSize=Data page size (KB):\nParquetOutputDialog.Options.Extension=Extension:\nParquetOutputDialog.Options.DictionaryEncoding=Dictionary encoding\nParquetOutputDialog.Options.IncludeDateInFilename=Include date in file name\nParquetOutputDialog.Options.IncludeTimeInFilename=Include time in file name\nParquetOutputDialog.Options.SpecifyDateTimeFormat=Specify date time format\nParquetOutputDialog.Options.DictPageSize=Page size (KB):\n\nParquetOutput.Injection.FILENAME_LINES=The list of file definitions.\nParquetOutput.Injection.FILENAME=The name of the folder/file to write to.\nParquetOutput.Injection.OVERRIDE_OUTPUT=Enable this option to overwrite the existing output file(s).\nParquetOutput.Injection.FIELDS=Fields.\nParquetOutput.Injection.FIELD_PATH=The path of the output field.\nParquetOutput.Injection.FIELD_NAME=The name of the output field.\nParquetOutput.Injection.FIELD_TYPE=(Deprecated: Use FIELD_PARQUET_TYPE) The Kettle field type. 
\nParquetOutput.Injection.FIELD_PARQUET_TYPE=The Parquet output field type.\nParquetOutput.Injection.FIELD_NULLABLE=Specify whether the incoming field will contain null values. If no, then the default value will be used.\nParquetOutput.Injection.FIELD_IF_NULL=The default value to use in case the incoming field value is null.\nParquetOutput.Injection.FIELD_DECIMAL_PRECISION=Maximum number of digits allowed in the number. (only applies to numbers stored as decimal type)\nParquetOutput.Injection.FIELD_DECIMAL_SCALE=Maximum number of digits after the decimal point. (only applies to numbers stored as decimal type)\nParquetOutput.Injection.COMPRESSION=This option will let you specify the type of compression to use on the file output.\nParquetOutput.Injection.PARQUET_VERSION=Specify the parquet version.\nParquetOutput.Injection.ROW_GROUP_SIZE=Specify the group size for the rows.\nParquetOutput.Injection.DATA_PAGE_SIZE=Specify the page size for the data.\nParquetOutput.Injection.ENABLE_DICTIONARY=Enable this option to indicate that the data will have dictionary encoding.\nParquetOutput.Injection.DICT_PAGE_SIZE=Specify the dictionary page size.\nParquetOutput.Injection.INC_DATE_IN_FILENAME=This option will include the system date in the file name.\nParquetOutput.Injection.INC_TIME_IN_FILENAME=This option will include the system time in the file name.\nParquetOutput.Injection.DATE_FORMAT=Specify which date & time format you want to go into each file name.\nParquetOutput.Injection.EXTENSION=The extension of the output file.\n\nParquetOutput.GetFieldsChoice.Title=New fields were found\nParquetOutput.GetFieldsChoice.Message=We found {0} new fields. What would you like to do with the new fields?\nParquetOutput.GetFieldsChoice.AddNew=Add &new fields\nParquetOutput.GetFieldsChoice.Add=Add &all fields\nParquetOutput.GetFieldsChoice.ClearAndAdd=C&lear and add all\nParquetOutput.GetFieldsChoice.Cancel=&Cancel"
  },
  {
    "path": "kettle-plugins/formats/core/src/test/java/org/pentaho/big/data/kettle/plugins/formats/impl/NamedClusterResolverTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl;\n\nimport java.lang.reflect.Field;\nimport java.util.ArrayList;\nimport java.util.Collection;\n\nimport org.junit.After;\nimport org.junit.AfterClass;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertNull;\n\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.ArgumentMatchers;\nimport org.mockito.Mock;\nimport org.mockito.MockedStatic;\nimport org.mockito.Mockito;\n\nimport static org.mockito.Mockito.lenient;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\nimport static org.mockito.internal.verification.VerificationModeFactory.times;\n\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.KettleLoggingEventListener;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\n\n@RunWith(MockitoJUnitRunner.class)\npublic class NamedClusterResolverTest {\n  @Mock\n  private MetastoreLocator metaStoreService;\n  @Mock\n  private IMetaStore metaStore;\n  @Mock\n  private NamedClusterService namedClusterService;\n  @Mock\n  private KettleLoggingEventListener 
kettleLoggingEventListener;\n  @Mock\n  private NamedCluster namedCluster;\n  @Mock\n  private NamedClusterServiceLocator namedClusterServiceLocator;\n\n  private NamedClusterResolver namedClusterResolver;\n\n  private static MockedStatic<PluginServiceLoader> pluginServiceLoaderMockedStatic;\n\n  @BeforeClass\n  public static void setupClass() {\n    // Create a class-level static mock that will stay open for all tests\n    pluginServiceLoaderMockedStatic = Mockito.mockStatic( PluginServiceLoader.class );\n  }\n\n  @AfterClass\n  public static void tearDownClass() {\n    // Close the static mock after all tests complete\n    if ( pluginServiceLoaderMockedStatic != null ) {\n      pluginServiceLoaderMockedStatic.close();\n    }\n  }\n\n  @Before\n  public void before() throws Exception {\n    // Reset the singleton before each test\n    resetSingleton();\n\n    KettleLogStore.init();\n    KettleLogStore.getAppender().addLoggingEventListener( kettleLoggingEventListener );\n\n    // Mock the metastore locator to return a metastore\n    lenient().when( metaStoreService.getMetastore() ).thenReturn( metaStore );\n    lenient().when( metaStoreService.getMetastore( ArgumentMatchers.any() ) ).thenReturn( metaStore );\n    lenient().when( metaStoreService.getExplicitMetastore( ArgumentMatchers.any() ) ).thenReturn( metaStore );\n\n    // Use the specific metaStore object in the mocks to ensure matching\n    lenient().when( namedClusterService.getNamedClusterByName( ArgumentMatchers.eq( \"testhc\" ), ArgumentMatchers.same( metaStore ) ) )\n      .thenReturn( namedCluster );\n    lenient().when( namedClusterService.getNamedClusterByHost( ArgumentMatchers.eq( \"somehost\" ), ArgumentMatchers.same( metaStore ) ) )\n      .thenReturn( namedCluster );\n\n    // Mock MetastoreLocator loading in the @Before so it's available for all tests\n    Collection<MetastoreLocator> metastoreLocatorCollection = new ArrayList<>();\n    metastoreLocatorCollection.add( metaStoreService );\n    
pluginServiceLoaderMockedStatic.when( () -> PluginServiceLoader.loadServices( MetastoreLocator.class ) )\n      .thenReturn( metastoreLocatorCollection );\n\n    Collection<NamedClusterServiceLocator> namedClusterServiceLocatorCollection = new ArrayList<>();\n    namedClusterServiceLocatorCollection.add( namedClusterServiceLocator );\n\n    pluginServiceLoaderMockedStatic.when( () -> PluginServiceLoader.loadServices( NamedClusterServiceLocator.class ) )\n      .thenReturn( namedClusterServiceLocatorCollection );\n\n    Collection<NamedClusterService> namedClusterServiceCollection = new ArrayList<>();\n    namedClusterServiceCollection.add( namedClusterService );\n\n    pluginServiceLoaderMockedStatic.when( () -> PluginServiceLoader.loadServices( NamedClusterService.class ) )\n      .thenReturn( namedClusterServiceCollection );\n\n    // Create NamedClusterResolver with mocked dependencies using the package-private constructor\n    namedClusterResolver = createResolverWithMocks();\n\n    // Set it as the singleton instance\n    Field instance = NamedClusterResolver.class.getDeclaredField( \"namedClusterResolver\" );\n    instance.setAccessible( true );\n    instance.set( null, namedClusterResolver );\n  }\n\n  private NamedClusterResolver createResolverWithMocks() throws Exception {\n    // Use reflection to call the package-private constructor\n    java.lang.reflect.Constructor<NamedClusterResolver> constructor =\n      NamedClusterResolver.class.getDeclaredConstructor( NamedClusterServiceLocator.class, NamedClusterService.class );\n    constructor.setAccessible( true );\n    return constructor.newInstance( namedClusterServiceLocator, namedClusterService );\n  }\n\n  @After\n  public void after() throws Exception {\n    // Reset the singleton after each test to ensure test isolation\n    resetSingleton();\n  }\n\n  private void resetSingleton() throws Exception {\n    // Use reflection to reset the singleton instance\n    Field instance = 
NamedClusterResolver.class.getDeclaredField( \"namedClusterResolver\" );\n    instance.setAccessible( true );\n    instance.set( null, null );\n  }\n\n  @Test\n  public void windowsFilePathsAreHandled() {\n    assertNull(\n      namedClusterResolver.resolveNamedCluster( \"C:/path/to some/file\" ) );\n    verify( kettleLoggingEventListener, times( 0 ) ).eventAdded( ArgumentMatchers.any() );\n  }\n\n  @Test\n  public void testNamedClusterByName() throws Exception {\n    // Reset the metastoreLocator cache to force reload\n    Field metaStoreServiceField = NamedClusterResolver.class.getDeclaredField( \"metaStoreService\" );\n    metaStoreServiceField.setAccessible( true );\n    metaStoreServiceField.set( namedClusterResolver, null );\n\n    NamedCluster cluster = namedClusterResolver.resolveNamedCluster( \"hc://testhc/path\" );\n    assertEquals( namedCluster, cluster );\n\n    cluster = namedClusterResolver.resolveNamedCluster( \"hc://nosuchhc/path\" );\n    assertNull( cluster );\n  }\n\n  @Test\n  public void testNamedClusterByHost() throws Exception {\n    // Reset the metastoreLocator cache to force reload\n    Field metaStoreServiceField = NamedClusterResolver.class.getDeclaredField( \"metaStoreService\" );\n    metaStoreServiceField.setAccessible( true );\n    metaStoreServiceField.set( namedClusterResolver, null );\n\n    NamedCluster cluster = namedClusterResolver.resolveNamedCluster( \"hdfs://somehost/path\" );\n    assertEquals( namedCluster, cluster );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/test/java/org/pentaho/big/data/kettle/plugins/formats/impl/orc/input/OrcInputMetaInjectionTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.orc.input;\n\nimport org.junit.Before;\nimport org.junit.ClassRule;\nimport org.junit.Test;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.NamedClusterResolver;\nimport org.pentaho.big.data.kettle.plugins.formats.orc.OrcInputField;\nimport org.pentaho.di.core.injection.BaseMetadataInjectionTest;\nimport org.pentaho.di.junit.rules.RestorePDIEngineEnvironment;\nimport org.pentaho.hadoop.shim.api.format.OrcSpec;\n\nimport static org.mockito.Mockito.mock;\n\npublic class OrcInputMetaInjectionTest extends BaseMetadataInjectionTest<OrcInputMeta> {\n  @ClassRule public static RestorePDIEngineEnvironment env = new RestorePDIEngineEnvironment();\n\n  @Before\n  public void setup() {\n    NamedClusterResolver mockNamedClusterResolver = mock( NamedClusterResolver.class );\n    setup( new OrcInputMeta( mockNamedClusterResolver ) );\n    OrcInputField orcInputField = new OrcInputField();\n    meta.setInputFields( new OrcInputField[] { orcInputField } );\n  }\n\n  @Test\n  public void test() throws Exception {\n\n    check( \"FILENAME\", () -> meta.inputFiles.fileName[0] );\n    checkStringToEnum( \"ORC_TYPE\", () -> meta.getInputFields()[0].getOrcType(), OrcSpec.DataType.class );\n\n    check( \"FIELD_PATH\", () -> meta.getInputFields()[ 0 ].getFormatFieldName() );\n    check( \"FIELD_NAME\", () -> meta.getInputFields()[ 0 ].getName() );\n    checkPdiTypes( \"FIELD_TYPE\", () -> meta.getInputFields()[ 0 ].getType() );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/test/java/org/pentaho/big/data/kettle/plugins/formats/impl/orc/input/OrcInputTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.orc.input;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.ArgumentCaptor;\nimport org.mockito.Mock;\nimport org.mockito.MockedStatic;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.NamedClusterResolver;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.RowMetaAndData;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.LogLevel;\nimport org.pentaho.di.core.row.RowMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaBoolean;\nimport org.pentaho.di.core.row.value.ValueMetaInteger;\nimport org.pentaho.di.core.row.value.ValueMetaString;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.RowHandler;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.format.FormatService;\nimport org.pentaho.hadoop.shim.api.format.IPentahoOrcInputFormat;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\n\nimport 
java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collection;\nimport java.util.Iterator;\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.fail;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.nullable;\nimport static org.mockito.Mockito.doThrow;\nimport static org.mockito.Mockito.spy;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n@RunWith(MockitoJUnitRunner.class)\npublic class OrcInputTest {\n  private static final String INPUT_STEP_NAME = \"Input Step Name\";\n  private static final String INPUT_STREAM_FIELD_NAME = \"inputStreamFieldName\";\n  private static final String PASS_FIELD_NAME = \"passFieldName\";\n  private static final String FILENAME = \"orcFile\";\n\n  @Mock\n  private StepMeta mockStepMeta;\n  @Mock\n  private StepDataInterface mockStepDataInterface;\n  @Mock\n  private TransMeta mockTransMeta;\n  @Mock\n  private Trans mockTrans;\n  @Mock\n  private NamedClusterServiceLocator mockNamedClusterServiceLocator;\n  @Mock\n  private NamedClusterService mockNamedClusterService;\n  @Mock\n  private MetastoreLocator mockMetaStoreLocator;\n  @Mock\n  private FormatService mockFormatService;\n  @Mock\n  private OrcInputData orcInputData;\n\n  @Mock\n  private RowHandler mockRowHandler;\n  @Mock\n  private IPentahoOrcInputFormat mockPentahoOrcInputFormat;\n  @Mock\n  private IPentahoOrcInputFormat.IPentahoRecordReader mockPentahoOrcRecordReader;\n\n  private OrcInputMeta orcInputMeta;\n  private OrcInput orcInput;\n  private RowMeta orcRowMeta;\n  private RowMetaAndData[] orcRows;\n  private RowMeta inputRowMeta;\n  private RowMetaAndData[] inputRows;\n  private int currentOrcInputRow;\n\n  @Before\n  public void setUp() throws Exception {\n    KettleLogStore.init();\n    currentOrcInputRow = 0;\n    setInputRows();\n    
setOrcRows();\n    Collection<MetastoreLocator> metastoreLocatorCollection = new ArrayList<>();\n    metastoreLocatorCollection.add( mockMetaStoreLocator );\n    NamedClusterResolver namedClusterResolver;\n    try ( MockedStatic<PluginServiceLoader> pluginServiceLoaderMockedStatic = Mockito.mockStatic( PluginServiceLoader.class ) ) {\n      pluginServiceLoaderMockedStatic.when( () -> PluginServiceLoader.loadServices( MetastoreLocator.class ) )\n        .thenReturn( metastoreLocatorCollection );\n\n      // Mock the NamedClusterResolver instead of using the singleton\n      namedClusterResolver = Mockito.mock( NamedClusterResolver.class );\n      when( namedClusterResolver.getNamedClusterServiceLocator() ).thenReturn( mockNamedClusterServiceLocator );\n      when( namedClusterResolver.resolveNamedCluster( any( String.class ) ) ).thenReturn( null );\n\n      orcInputMeta = spy( new OrcInputMeta( namedClusterResolver ) );\n      orcInputMeta.inputFiles.fileName = new String[1];\n      orcInputMeta.setFilename( INPUT_STREAM_FIELD_NAME );\n\n      orcInputMeta.setParentStepMeta( mockStepMeta );\n      when( mockStepMeta.getParentTransMeta() ).thenReturn( mockTransMeta );\n      when( mockStepMeta.getName() ).thenReturn( INPUT_STEP_NAME );\n      when( mockTransMeta.findStep( INPUT_STEP_NAME ) ).thenReturn( mockStepMeta );\n      when( mockTransMeta.getBowl() ).thenReturn( DefaultBowl.getInstance() );\n\n      orcInputData.input = mockPentahoOrcInputFormat;\n      when( mockFormatService.createInputFormat( IPentahoOrcInputFormat.class,\n        orcInputMeta.getNamedClusterResolver().resolveNamedCluster( orcInputMeta.getFilename() ) ) )\n        .thenReturn( mockPentahoOrcInputFormat );\n      when( mockNamedClusterServiceLocator.getService( nullable( NamedCluster.class ), any( Class.class ) ) )\n        .thenReturn( mockFormatService );\n      when( mockTransMeta.environmentSubstitute( INPUT_STREAM_FIELD_NAME ) ).thenReturn( INPUT_STREAM_FIELD_NAME );\n      when( 
mockPentahoOrcInputFormat.createRecordReader( null ) ).thenReturn( mockPentahoOrcRecordReader );\n      when( mockPentahoOrcRecordReader.iterator() ).thenReturn( new OrcInputTest.OrcRecordIterator() );\n\n      orcInput = spy( new OrcInput( mockStepMeta, mockStepDataInterface, 0, mockTransMeta,\n        mockTrans ) );\n      orcInput.setRowHandler( mockRowHandler );\n      orcInput.setInputRowMeta( inputRowMeta );\n      orcInput.setLogLevel( LogLevel.ERROR );\n      orcInput.setTransMeta( mockTransMeta );\n    }\n  }\n\n  private Object[] returnNextInputRow() {\n    Object[] result = null;\n    if ( currentOrcInputRow < inputRows.length ) {\n      result = inputRows[currentOrcInputRow].getData().clone();\n      currentOrcInputRow++;\n    }\n    return result;\n  }\n\n  @Test\n  public void testProcessRow() throws Exception {\n    boolean result;\n    int rowsProcessed = 0;\n    ArgumentCaptor<RowMeta> rowMetaCaptor = ArgumentCaptor.forClass( RowMeta.class );\n    ArgumentCaptor<Object[]> dataCaptor = ArgumentCaptor.forClass( Object[].class );\n\n    do {\n      result = orcInput.processRow( orcInputMeta, orcInputData );\n      if ( result ) {\n        rowsProcessed++;\n      }\n    } while ( result );\n\n    // 1 file, 2 rows.\n    assertEquals( 2, rowsProcessed );\n    verify( mockRowHandler, times( 2 ) ).putRow( rowMetaCaptor.capture(), dataCaptor.capture() );\n    List<RowMeta> rowMeta = rowMetaCaptor.getAllValues();\n    List<Object[]> dataCaptured = dataCaptor.getAllValues();\n    for ( int rowNum = 0; rowNum < 2; rowNum++ ) {\n      assertEquals( 0, rowMeta.get( rowNum ).indexOfValue( \"str\" ) );\n      assertEquals( \"string\" + ( rowNum % 2 + 1 ), dataCaptured.get( rowNum )[0] );\n    }\n  }\n\n  @Test\n  public void testInit() {\n    assertEquals( true, orcInput.init() );\n  }\n\n  @Test\n  public void testProcessRowKettleFailure() {\n    String expectedMessage = \"KettleExceptionMessage\";\n    try {\n      doThrow( new KettleException( expectedMessage 
) )\n        .when( mockPentahoOrcInputFormat ).createRecordReader( null );\n      orcInput.processRow( orcInputMeta, orcInputData );\n      fail( \"No Kettle Exception thrown\" );\n    } catch ( KettleException kex ) {\n      assertTrue( kex.getMessage().contains( expectedMessage ) );\n    } catch ( Exception ex ) {\n      fail( \"No other type of exception should be thrown\" );\n    }\n  }\n\n  @Test\n  public void testProcessRowGeneralFailure() {\n    String expectedMessage = \"KettleExceptionMessage\";\n    try {\n      doThrow( new Exception( expectedMessage ) )\n        .when( mockPentahoOrcInputFormat ).createRecordReader( null );\n      orcInput.processRow( orcInputMeta, orcInputData );\n      fail( \"No Kettle Exception thrown\" );\n    } catch ( KettleException kex ) {\n      assertTrue( kex.getMessage().contains( expectedMessage ) );\n    } catch ( Exception ex ) {\n      fail( \"No other type of exception should be thrown\" );\n    }\n  }\n\n  private RowMeta setOrcRowMeta() {\n    orcRowMeta = new RowMeta();\n    ValueMetaInterface valueMetaString = new ValueMetaString( \"str\" );\n    orcRowMeta.addValueMeta( valueMetaString );\n    ValueMetaInterface valueMetaBoolean = new ValueMetaBoolean( \"bool\" );\n    orcRowMeta.addValueMeta( valueMetaBoolean );\n    ValueMetaInterface valueMetaInteger = new ValueMetaInteger( \"int\" );\n    orcRowMeta.addValueMeta( valueMetaInteger );\n    return orcRowMeta;\n  }\n\n  private RowMeta setInputRowMeta() {\n    inputRowMeta = new RowMeta();\n    ValueMetaInterface valueMetaString = new ValueMetaString( INPUT_STREAM_FIELD_NAME );\n    inputRowMeta.addValueMeta( valueMetaString );\n    ValueMetaInterface valueMetaString2 = new ValueMetaString( PASS_FIELD_NAME );\n    inputRowMeta.addValueMeta( valueMetaString2 );\n    return inputRowMeta;\n  }\n\n  private void setInputRows() {\n    setInputRowMeta();\n    inputRows = new RowMetaAndData[] {\n      new RowMetaAndData( inputRowMeta, FILENAME, \"pass1\" )\n    };\n\n  
}\n\n  private void setOrcRows() {\n    setOrcRowMeta();\n    orcRows = new RowMetaAndData[] {\n      new RowMetaAndData( orcRowMeta, \"string1\", true, Integer.valueOf( 123 ) ),\n      new RowMetaAndData( orcRowMeta, \"string2\", true, Integer.valueOf( 321 ) )\n    };\n  }\n\n  private class OrcRecordIterator implements Iterator<RowMetaAndData> {\n    private Iterator<RowMetaAndData> iter;\n    private boolean reset;\n\n    OrcRecordIterator() {\n      init();\n    }\n\n    private void init() {\n      iter = Arrays.asList( orcRows ).iterator();\n      reset = false;\n    }\n\n    @Override\n    public boolean hasNext() {\n      if ( reset ) {\n        init();\n      }\n      if ( !iter.hasNext() ) {\n        reset = true;\n      }\n      return iter.hasNext();\n    }\n\n    @Override\n    public RowMetaAndData next() {\n      if ( reset ) {\n        init(); // Simulate a new iterator for the new file\n      }\n      return iter.next().clone();\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/test/java/org/pentaho/big/data/kettle/plugins/formats/impl/orc/output/OrcOutputMetaInjectionTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.orc.output;\n\nimport org.junit.Before;\nimport org.junit.ClassRule;\nimport org.junit.Test;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.NamedClusterResolver;\nimport org.pentaho.big.data.kettle.plugins.formats.orc.output.OrcOutputField;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.injection.BaseMetadataInjectionTest;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaString;\nimport org.pentaho.di.junit.rules.RestorePDIEngineEnvironment;\nimport org.pentaho.hadoop.shim.api.format.OrcSpec;\n\nimport java.util.Arrays;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\n\npublic class OrcOutputMetaInjectionTest  extends BaseMetadataInjectionTest<OrcOutputMeta> {\n  @ClassRule public static RestorePDIEngineEnvironment env = new RestorePDIEngineEnvironment();\n\n  @Before\n  public void setup() {\n    NamedClusterResolver mockNamedClusterResolver = mock( NamedClusterResolver.class );\n    setup( new OrcOutputMeta( mockNamedClusterResolver ) );\n    OrcOutputField orcOutputField = new OrcOutputField();\n    meta.setOutputFields( Arrays.asList( orcOutputField ) );\n  }\n\n  @Test\n  public void test() throws Exception {\n\n    check( \"FILENAME\", () -> meta.getFilename() );\n    check( \"OPTIONS_COMPRESS_SIZE\", () -> meta.getCompressSize() );\n    check( \"OPTIONS_DATE_FORMAT\", () -> meta.getDateTimeFormat() );\n    check( \"OPTIONS_DATE_IN_FILE_NAME\", () -> 
meta.isDateInFileName() );\n    check( \"OPTIONS_ROWS_BETWEEN_ENTRIES\", () -> meta.getRowsBetweenEntries() );\n    check( \"OPTIONS_STRIPE_SIZE\", () -> meta.getStripeSize() );\n    check( \"OPTIONS_TIME_IN_FILE_NAME\", () -> meta.isTimeInFileName() );\n    check( \"OVERRIDE_OUTPUT\", () -> meta.isOverrideOutput() );\n\n    check( \"FIELD_DECIMAL_PRECISION\", () -> meta.getOutputFields().get( 0 ).getPrecision() );\n    check( \"FIELD_DECIMAL_SCALE\", () -> meta.getOutputFields().get( 0 ).getScale() );\n    check( \"FIELD_IF_NULL\", () -> meta.getOutputFields().get( 0 ).getDefaultValue() );\n    check( \"FIELD_NAME\", () -> meta.getOutputFields().get( 0 ).getPentahoFieldName() );\n    check( \"FIELD_NULLABLE\", () -> meta.getOutputFields().get( 0 ).getAllowNull() );\n    check( \"FIELD_NULL_STRING\", () -> meta.getOutputFields().get( 0 ).getAllowNull() );\n    check( \"FIELD_PATH\", () -> meta.getOutputFields().get( 0 ).getFormatFieldName() );\n    checkOrcTypes( \"FIELD_TYPE\", () -> meta.getOutputFields().get( 0 ).getFormatType(), OrcSpec.DataType.class );\n    check( \"OPTIONS_COMPRESSION\", () -> meta.getCompressionType().toUpperCase(), \"SNAPPY\" );\n  }\n\n  protected void checkOrcTypes( String propertyName, IntGetter getter, Class enumType )\n    throws KettleException {\n\n    OrcSpec.DataType[] values = OrcSpec.DataType.values();\n    ValueMetaInterface valueMeta = new ValueMetaString( \"f\" );\n\n    for ( OrcSpec.DataType v : values ) {\n      injector.setProperty( meta, propertyName, setValue( valueMeta, v.toString() ), \"f\" );\n      assertEquals( v.getId(), getter.get() );\n    }\n\n    skipPropertyTest( propertyName );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/test/java/org/pentaho/big/data/kettle/plugins/formats/impl/orc/output/OrcOutputTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.orc.output;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.ArgumentCaptor;\nimport org.mockito.Mock;\nimport org.mockito.MockedStatic;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.NamedClusterResolver;\nimport org.pentaho.big.data.kettle.plugins.formats.orc.output.OrcOutputField;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.RowMetaAndData;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.logging.LogLevel;\nimport org.pentaho.di.core.row.RowMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaString;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.RowHandler;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.format.FormatService;\nimport org.pentaho.hadoop.shim.api.format.IPentahoOrcOutputFormat;\nimport 
org.pentaho.hadoop.shim.api.format.OrcSpec;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\n\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.fail;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.anyBoolean;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.ArgumentMatchers.nullable;\nimport static org.mockito.Mockito.doThrow;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.spy;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n@RunWith(MockitoJUnitRunner.class)\npublic class OrcOutputTest {\n\n  private static final String OUTPUT_STEP_NAME = \"Output Step Name\";\n  private static final String OUTPUT_TRANS_NAME = \"Output Trans Name\";\n  private static final String OUTPUT_FILE_NAME = \"outputFileName\";\n\n  @Mock\n  private StepMeta mockStepMeta;\n  @Mock\n  private StepDataInterface mockStepDataInterface;\n  @Mock\n  private TransMeta mockTransMeta;\n  @Mock\n  private Trans mockTrans;\n  @Mock\n  private NamedClusterServiceLocator mockNamedClusterServiceLocator;\n  @Mock\n  private NamedClusterService mockNamedClusterService;\n  @Mock\n  private MetastoreLocator mockMetaStoreLocator;\n  @Mock\n  private FormatService mockFormatService;\n  @Mock\n  private OrcOutputData orcOutputData;\n  @Mock\n  private RowHandler mockRowHandler;\n  @Mock\n  private IPentahoOrcOutputFormat mockPentahoOrcOutputFormat;\n  @Mock\n  private LogChannelInterface mockLogChannelInterface;\n  @Mock\n  private IPentahoOrcOutputFormat.IPentahoRecordWriter mockPentahoOrcRecordWriter;\n\n  private OrcOutput orcOutput;\n  private 
List<OrcOutputField> orcOutputFields;\n  private OrcOutputMeta orcOutputMeta;\n  private RowMeta dataInputRowMeta;\n  private RowMetaAndData[] dataInputRows;\n  private int currentOrcRow;\n\n  @Before\n  public void setUp() throws Exception {\n    KettleLogStore.init();\n    currentOrcRow = 0;\n    setDataInputRows();\n    setOrcOutputRows();\n    Collection<MetastoreLocator> metastoreLocatorCollection = new ArrayList<>();\n    metastoreLocatorCollection.add( mockMetaStoreLocator );\n    NamedClusterResolver namedClusterResolver;\n    try ( MockedStatic<PluginServiceLoader> pluginServiceLoaderMockedStatic = Mockito.mockStatic( PluginServiceLoader.class ) ) {\n      pluginServiceLoaderMockedStatic.when( () -> PluginServiceLoader.loadServices( MetastoreLocator.class ) )\n        .thenReturn( metastoreLocatorCollection );\n\n      // Mock the NamedClusterResolver instead of using the singleton\n      namedClusterResolver = Mockito.mock( NamedClusterResolver.class );\n      when( namedClusterResolver.getNamedClusterServiceLocator() ).thenReturn( mockNamedClusterServiceLocator );\n      when( namedClusterResolver.resolveNamedCluster( any( String.class ) ) ).thenReturn( null );\n\n      orcOutputMeta = new OrcOutputMeta( namedClusterResolver );\n      orcOutputMeta.setFilename( OUTPUT_FILE_NAME );\n      orcOutputMeta.setOutputFields( orcOutputFields );\n      orcOutputMeta.setOverrideOutput( true );\n      orcOutputMeta.setParentStepMeta( mockStepMeta );\n      when( mockStepMeta.getName() ).thenReturn( OUTPUT_STEP_NAME );\n      when( mockTransMeta.findStep( OUTPUT_STEP_NAME ) ).thenReturn( mockStepMeta );\n      when( mockTransMeta.getBowl() ).thenReturn( DefaultBowl.getInstance() );\n\n      try {\n        when( mockRowHandler.getRow() ).thenAnswer( answer -> returnNextParquetRow() );\n      } catch ( KettleException ke ) {\n        ke.printStackTrace();\n      }\n\n      when( 
mockFormatService.createOutputFormat( IPentahoOrcOutputFormat.class,\n        orcOutputMeta.getNamedClusterResolver().resolveNamedCluster( orcOutputMeta.getFilename() ) ) )\n        .thenReturn( mockPentahoOrcOutputFormat );\n      when( mockNamedClusterServiceLocator.getService( nullable( NamedCluster.class ), any( Class.class ) ) )\n        .thenReturn( mockFormatService );\n      when( mockPentahoOrcOutputFormat.createRecordWriter() ).thenReturn( mockPentahoOrcRecordWriter );\n\n      orcOutput = spy( new OrcOutput( mockStepMeta, mockStepDataInterface, 0, mockTransMeta, mockTrans ) );\n      orcOutput.init( orcOutputMeta, orcOutputData );\n      orcOutput.setInputRowMeta( dataInputRowMeta );\n      orcOutput.setRowHandler( mockRowHandler );\n      orcOutput.setLogLevel( LogLevel.ERROR );\n      orcOutput.setTransMeta( mockTransMeta );\n    }\n  }\n\n  @Test\n  public void testProcessRow() throws Exception {\n    boolean result;\n    int rowsProcessed = 0;\n    ArgumentCaptor<RowMeta> rowMetaCaptor = ArgumentCaptor.forClass( RowMeta.class );\n    ArgumentCaptor<Object[]> dataCaptor = ArgumentCaptor.forClass( Object[].class );\n\n    do {\n      result = orcOutput.processRow( orcOutputMeta, orcOutputData );\n      if ( result ) {\n        rowsProcessed++;\n      }\n    } while ( result );\n\n    // 3 rows to be outputted to an Orc file\n    assertEquals( 3, rowsProcessed );\n    verify( mockRowHandler, times( 3 ) ).putRow( rowMetaCaptor.capture(), dataCaptor.capture() );\n    List<RowMeta> rowMetaCaptured = rowMetaCaptor.getAllValues();\n    List<Object[]> dataCaptured = dataCaptor.getAllValues();\n    for ( int rowNum = 0; rowNum < 3; rowNum++ ) {\n      assertEquals( 0, rowMetaCaptured.get( rowNum ).indexOfValue( \"StringName\" ) );\n      assertEquals( \"string\" + ( rowNum % 3 + 1 ), dataCaptured.get( rowNum )[0] );\n    }\n  }\n\n  @Test\n  public void testProcessRowIllegalState() throws Exception {\n    doThrow( new IllegalStateException( 
\"IllegalStateExceptionMessage\" ) ).when( mockPentahoOrcOutputFormat )\n      .setOutputFile( anyString(), anyBoolean() );\n    when( orcOutput.getLogChannel() ).thenReturn( mockLogChannelInterface );\n    assertFalse( orcOutput.processRow( orcOutputMeta, orcOutputData ) );\n\n    verify( mockLogChannelInterface, times( 1 ) ).logError( \"IllegalStateExceptionMessage\" );\n  }\n\n  @Test\n  public void testProcessRowKettleFailure() {\n    String expectedMessage = \"KettleExceptionMessage\";\n    try {\n      doThrow( new KettleException( expectedMessage ) ).when( orcOutput ).init();\n      orcOutput.processRow( orcOutputMeta, orcOutputData );\n      fail( \"No Kettle Exception thrown\" );\n    } catch ( KettleException kex ) {\n      assertTrue( kex.getMessage().contains( expectedMessage ) );\n    } catch ( Exception ex ) {\n      fail( \"No other type of exception should be thrown\" );\n    }\n  }\n\n  @Test\n  public void testProcessRowGeneralFailure() {\n    String expectedMessage = \"GeneralExceptionMessage\";\n    try {\n      doThrow( new Exception( expectedMessage ) ).when( orcOutput ).init();\n      orcOutput.processRow( orcOutputMeta, orcOutputData );\n      fail( \"No Kettle Exception thrown\" );\n    } catch ( KettleException kex ) {\n      assertTrue( kex.getMessage().contains( expectedMessage ) );\n    } catch ( Exception ex ) {\n      fail( \"No other type of exception should be thrown\" );\n    }\n  }\n\n  private Object[] returnNextParquetRow() {\n    Object[] result = null;\n    if ( currentOrcRow < dataInputRows.length ) {\n      result = dataInputRows[currentOrcRow].getData().clone();\n      currentOrcRow++;\n    }\n    return result;\n  }\n\n  private void setOrcOutputRows() {\n    OrcOutputField orcOutputField = mock( OrcOutputField.class );\n    when( orcOutputField.getPentahoFieldName() ).thenReturn( \"StringName\" );\n    orcOutputFields = new ArrayList<>();\n    orcOutputFields.add( orcOutputField );\n  }\n\n  private void 
setDataInputRowMeta() {\n    dataInputRowMeta = new RowMeta();\n    ValueMetaInterface valueMetaString = new ValueMetaString( \"StringName\" );\n    dataInputRowMeta.addValueMeta( valueMetaString );\n  }\n\n  private void setDataInputRows() {\n    setDataInputRowMeta();\n    dataInputRows = new RowMetaAndData[] {\n      new RowMetaAndData( dataInputRowMeta, \"string1\" ),\n      new RowMetaAndData( dataInputRowMeta, \"string2\" ),\n      new RowMetaAndData( dataInputRowMeta, \"string3\" )\n    };\n  }\n\n  @Test\n  public void testAliasFile() throws Exception {\n    String aliasPath = Files.createTempDirectory( \"testAliasFile\" ) + File.separator + \"dummyFile\";\n    new File( aliasPath ).createNewFile();  // create the alias file so it and its parent can be successfully deleted\n    when( mockPentahoOrcOutputFormat.generateAlias( anyString() ) ).thenReturn( aliasPath );\n    boolean result;\n    int rowsProcessed = 0;\n    ArgumentCaptor<RowMeta> rowMetaCaptor = ArgumentCaptor.forClass( RowMeta.class );\n    ArgumentCaptor<Object[]> dataCaptor = ArgumentCaptor.forClass( Object[].class );\n\n    do {\n      result = orcOutput.processRow( orcOutputMeta, orcOutputData );\n      if ( result ) {\n        rowsProcessed++;\n      }\n    } while ( result );\n\n    // 3 rows to be outputted to an Orc file\n    assertEquals( 3, rowsProcessed );\n    verify( mockRowHandler, times( 3 ) ).putRow( rowMetaCaptor.capture(), dataCaptor.capture() );\n    List<RowMeta> rowMetaCaptured = rowMetaCaptor.getAllValues();\n    List<Object[]> dataCaptured = dataCaptor.getAllValues();\n    for ( int rowNum = 0; rowNum < 3; rowNum++ ) {\n      assertEquals( 0, rowMetaCaptured.get( rowNum ).indexOfValue( \"StringName\" ) );\n      assertEquals( \"string\" + ( rowNum % 3 + 1 ), dataCaptured.get( rowNum )[0] );\n    }\n    assertFalse( new File( aliasPath ).exists() );\n    File outputFile = new File( OUTPUT_FILE_NAME );\n    assertTrue( outputFile.exists() );\n    outputFile.delete();\n  
}\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/test/java/org/pentaho/big/data/kettle/plugins/formats/impl/output/PvfsFileAliaserTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.output;\n\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mock;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.hadoop.shim.api.format.IPvfsAliasGenerator;\n\nimport java.io.File;\nimport java.nio.file.Files;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.Mockito.when;\n\n@RunWith( MockitoJUnitRunner.class )\npublic class PvfsFileAliaserTest {\n  PvfsFileAliaser pvfsFileAliaser;\n  @Mock\n  VariableSpace variableSpace;\n  @Mock\n  IPvfsAliasGenerator aliasGenerator;\n  @Mock\n  LogChannelInterface log;\n\n  private static final String TEMP_DIR_PREFIX = \"PvfsFileAliaserTest\";\n  private static String finalPath;\n  private static File finalFile;\n  private String temporaryPath;\n\n  @BeforeClass\n  public static void setup() throws Exception {\n    finalPath = Files.createTempDirectory( TEMP_DIR_PREFIX ) + File.separator + \"finalFile\";\n    finalFile = new File( finalPath );\n  }\n\n  @Before\n  public void setUp() throws Exception {\n    finalFile.delete();\n    temporaryPath = Files.createTempDirectory( TEMP_DIR_PREFIX ) + File.separator + \"temporaryile\";\n    new 
File( temporaryPath )\n      .createNewFile();  // create the alias file so it and its parent can be successfully deleted\n    when( aliasGenerator.generateAlias( anyString() ) ).thenReturn( temporaryPath );\n    pvfsFileAliaser = new PvfsFileAliaser( DefaultBowl.getInstance(), finalPath, variableSpace, aliasGenerator, true,\n      log );\n  }\n\n  @Test\n  public void testGenerateWithActiveAlias() throws Exception {\n    String aliasPath = pvfsFileAliaser.generateAlias();\n    assertEquals( temporaryPath, aliasPath );\n    assertFalse( finalFile.exists() );\n    pvfsFileAliaser.copyFileToFinalDestination();\n    assertTrue( finalFile.exists() );\n    pvfsFileAliaser.deleteTempFileAndFolder();\n    assertFalse( new File( new File( temporaryPath ).getParent() ).exists() );\n  }\n\n  @Test\n  public void testGenerateWithInactiveAlias() throws Exception {\n    when( aliasGenerator.generateAlias( anyString() ) ).thenReturn( null );\n    String aliasPath = pvfsFileAliaser.generateAlias();\n    assertEquals( finalPath, aliasPath );\n    assertFalse( finalFile.exists() );\n    pvfsFileAliaser.copyFileToFinalDestination();\n    assertFalse( finalFile.exists() );\n  }\n\n  @Test\n  public void testCopyFileToFinalDestinationWithoutGenerate() throws Exception {\n    pvfsFileAliaser.copyFileToFinalDestination();\n    assertFalse( finalFile.exists() );\n    assertTempFileExistsAndDelete();\n  }\n\n  @Test\n  public void testDeleteTempFileAndFolderWithoutGenerate() {\n    pvfsFileAliaser.deleteTempFileAndFolder();\n    assertFalse( finalFile.exists() );\n    assertTempFileExistsAndDelete();\n  }\n\n  private void assertTempFileExistsAndDelete() {\n    File tempFile = new File( temporaryPath );\n    assertTrue( tempFile.exists() );\n    deleteTempFile();\n  }\n\n  private void deleteTempFile() {\n    File tempFile = new File( temporaryPath );\n    tempFile.delete();\n    new File( tempFile.getParent() ).delete();\n  }\n}"
  },
  {
    "path": "kettle-plugins/formats/core/src/test/java/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/input/ParquetInputMetaInjectionTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.parquet.input;\n\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.NamedClusterResolver;\nimport org.pentaho.big.data.kettle.plugins.formats.parquet.input.ParquetInputField;\nimport org.pentaho.di.core.injection.BaseMetadataInjectionTest;\nimport org.pentaho.di.core.row.value.ValueMetaBase;\nimport org.pentaho.hadoop.shim.api.format.ParquetSpec;\n\nimport static org.mockito.Mockito.mock;\n\npublic class ParquetInputMetaInjectionTest extends BaseMetadataInjectionTest<ParquetInputMeta> {\n\n  @Before\n  public void setup() {\n    NamedClusterResolver namedClusterResolver = mock( NamedClusterResolver.class );\n    setup( new ParquetInputMeta( namedClusterResolver ) );\n  }\n\n  @Test\n  public void test() throws Exception {\n    check( \"FILENAME\", new StringGetter() {\n      public String get() {\n        return meta.inputFiles.fileName[ 0 ];\n      }\n    } );\n\n    check( \"FIELD_NAME\", new StringGetter() {\n      public String get() {\n        return meta.inputFields[ 0 ].getPentahoFieldName();\n      }\n    } );\n\n    check( \"IGNORE_EMPTY_FOLDER\", new BooleanGetter() {\n      public boolean get() {\n        return meta.isIgnoreEmptyFolder();\n      }\n    } );\n\n\n    String[] typeNames = ValueMetaBase.getAllTypes();\n    checkStringToInt( \"FIELD_TYPE\", new IntGetter() {\n      public int get() {\n        return meta.inputFields[ 0 ].getPentahoType();\n      }\n    }, typeNames, getTypeCodes( typeNames ) );\n\n    check( 
\"FIELD_PATH\", new StringGetter() {\n      public String get() {\n        return meta.inputFields[ 0 ].getFormatFieldName();\n      }\n    } );\n\n    String[] parquetTypeNames = ParquetSpec.DataType.getDisplayableTypeNames();\n    checkStringToInt( \"PARQUET_TYPE\", new IntGetter() {\n      public int get() {\n        return meta.inputFields[ 0 ].getParquetType().getId();\n      }\n    }, parquetTypeNames, getParquetTypeCodes( parquetTypeNames ) );\n  }\n\n  public static int[] getParquetTypeCodes( String[] parquetTypeNames ) {\n    int[] parquetTypeCodes = new int[ parquetTypeNames.length ];\n\n    for ( int i = 0; i < parquetTypeNames.length; ++i ) {\n      ParquetInputField field = new ParquetInputField();\n      field.setParquetType( parquetTypeNames[ i ] );\n      parquetTypeCodes[ i ] = field.getParquetType().getId();\n    }\n\n    return parquetTypeCodes;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/test/java/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/input/ParquetInputTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\npackage org.pentaho.big.data.kettle.plugins.formats.impl.parquet.input;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.ArgumentCaptor;\nimport org.mockito.Mock;\nimport org.mockito.MockedStatic;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.NamedClusterResolver;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.RowMetaAndData;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.LogLevel;\nimport org.pentaho.di.core.row.RowMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaBoolean;\nimport org.pentaho.di.core.row.value.ValueMetaInteger;\nimport org.pentaho.di.core.row.value.ValueMetaString;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.RowHandler;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.format.FormatService;\nimport org.pentaho.hadoop.shim.api.format.IPentahoInputFormat;\nimport org.pentaho.hadoop.shim.api.format.IPentahoParquetInputFormat;\nimport 
org.pentaho.metastore.locator.api.MetastoreLocator;\n\nimport java.nio.file.NoSuchFileException;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collection;\nimport java.util.Iterator;\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.fail;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.nullable;\nimport static org.mockito.Mockito.doThrow;\nimport static org.mockito.Mockito.spy;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n@RunWith(MockitoJUnitRunner.class)\npublic class ParquetInputTest {\n\n  private static final String INPUT_STEP_NAME = \"Input Step Name\";\n  private static final String INPUT_STREAM_FIELD_NAME = \"inputStreamFieldName\";\n  private static final String PASS_FIELD_NAME = \"passFieldName\";\n\n  @Mock\n  private StepMeta mockStepMeta;\n  @Mock\n  private StepDataInterface mockStepDataInterface;\n  @Mock\n  private TransMeta mockTransMeta;\n  @Mock\n  private Trans mockTrans;\n  @Mock\n  private NamedClusterServiceLocator mockNamedClusterServiceLocator;\n  @Mock\n  private NamedClusterService mockNamedClusterService;\n  @Mock\n  private MetastoreLocator mockMetaStoreLocator;\n  @Mock\n  private FormatService mockFormatService;\n  @Mock\n  private ParquetInputData parquetInputData;\n\n  @Mock\n  private RowHandler mockRowHandler;\n  @Mock\n  private IPentahoParquetInputFormat mockPentahoParquetInputFormat;\n  @Mock\n  private IPentahoParquetInputFormat.IPentahoRecordReader mockPentahoParquetRecordReader;\n  @Mock\n  private IPentahoParquetInputFormat.IPentahoInputSplit mockPentahoInputSplit;\n\n  private ParquetInputMeta parquetInputMeta;\n  private ParquetInput parquetInput;\n  private RowMeta parquetRowMeta;\n  private RowMetaAndData[] parquetRows;\n  private RowMeta inputRowMeta;\n  private 
RowMetaAndData[] inputRows;\n  private int currentParquetInputRow;\n\n  @Before\n  public void setUp() throws Exception {\n    KettleLogStore.init();\n    currentParquetInputRow = 0;\n    setInputRows();\n    setParquetRows();\n    Collection<MetastoreLocator> metastoreLocatorCollection = new ArrayList<>();\n    metastoreLocatorCollection.add( mockMetaStoreLocator );\n    NamedClusterResolver namedClusterResolver;\n    try ( MockedStatic<PluginServiceLoader> pluginServiceLoaderMockedStatic = Mockito.mockStatic( PluginServiceLoader.class ) ) {\n      pluginServiceLoaderMockedStatic.when( () -> PluginServiceLoader.loadServices( MetastoreLocator.class ) )\n        .thenReturn( metastoreLocatorCollection );\n      namedClusterResolver = Mockito.mock( NamedClusterResolver.class );\n      when( namedClusterResolver.getNamedClusterServiceLocator() ).thenReturn( mockNamedClusterServiceLocator );\n      when( namedClusterResolver.resolveNamedCluster( any( String.class ) ) ).thenReturn( null );\n\n      parquetInputMeta = new ParquetInputMeta( namedClusterResolver );\n      parquetInputMeta.inputFiles.fileName = new String[1];\n      parquetInputMeta.setFilename( INPUT_STREAM_FIELD_NAME );\n\n      parquetInputMeta.setParentStepMeta( mockStepMeta );\n      when( mockStepMeta.getName() ).thenReturn( INPUT_STEP_NAME );\n      when( mockTransMeta.findStep( INPUT_STEP_NAME ) ).thenReturn( mockStepMeta );\n      when( mockTransMeta.getBowl() ).thenReturn( DefaultBowl.getInstance() );\n\n      parquetInputData.input = mockPentahoParquetInputFormat;\n      when( mockFormatService.createInputFormat( IPentahoParquetInputFormat.class,\n        parquetInputMeta.getNamedClusterResolver().resolveNamedCluster( parquetInputMeta.getFilename() ) ) )\n        .thenReturn( mockPentahoParquetInputFormat );\n      when( mockNamedClusterServiceLocator.getService( nullable( NamedCluster.class ), any( Class.class ) ) )\n        .thenReturn( mockFormatService );\n      when( 
mockPentahoParquetInputFormat.createRecordReader( mockPentahoInputSplit ) ).thenReturn(\n        mockPentahoParquetRecordReader );\n      when( mockPentahoParquetRecordReader.iterator() ).thenReturn( new ParquetInputTest.ParquetRecordIterator() );\n      List<IPentahoInputFormat.IPentahoInputSplit> splits = new ArrayList<>();\n      splits.add( mockPentahoInputSplit );\n      when( parquetInputData.input.getSplits() ).thenReturn( splits );\n\n      parquetInput = spy( new ParquetInput( mockStepMeta, mockStepDataInterface, 0, mockTransMeta,\n        mockTrans ) );\n      parquetInput.setRowHandler( mockRowHandler );\n      parquetInput.setInputRowMeta( inputRowMeta );\n      parquetInput.setLogLevel( LogLevel.ERROR );\n      parquetInput.setTransMeta( mockTransMeta );\n    }\n  }\n\n  private Object[] returnNextInputRow() {\n    Object[] result = null;\n    if ( currentParquetInputRow < inputRows.length ) {\n      result = inputRows[currentParquetInputRow].getData().clone();\n      currentParquetInputRow++;\n    }\n    return result;\n  }\n\n  @Test\n  public void testProcessRow() throws Exception {\n    boolean result;\n    int rowsProcessed = 0;\n    ArgumentCaptor<RowMeta> rowMetaCaptor = ArgumentCaptor.forClass( RowMeta.class );\n    ArgumentCaptor<Object[]> dataCaptor = ArgumentCaptor.forClass( Object[].class );\n\n    do {\n      result = parquetInput.processRow( parquetInputMeta, parquetInputData );\n      if ( result ) {\n        rowsProcessed++;\n      }\n    } while ( result );\n\n    // 1 file, 2 rows. 
The third run is to increase the split count, which will return false on the next processRow call\n    assertEquals( 3, rowsProcessed );\n    verify( mockRowHandler, times( 2 ) ).putRow( rowMetaCaptor.capture(), dataCaptor.capture() );\n    List<RowMeta> rowMeta = rowMetaCaptor.getAllValues();\n    List<Object[]> dataCaptured = dataCaptor.getAllValues();\n    for ( int rowNum = 0; rowNum < 2; rowNum++ ) {\n      assertEquals( 0, rowMeta.get( rowNum ).indexOfValue( \"str\" ) );\n      assertEquals( \"string\" + ( rowNum % 2 + 1 ), dataCaptured.get( rowNum )[0] );\n    }\n  }\n\n  @Test\n  public void testInit() {\n    assertTrue( parquetInput.init() );\n  }\n\n  @Test\n  public void testProcessNoSuchFile() throws Exception {\n    String expectedMessage = \"No input file\";\n    try {\n      doThrow( new NoSuchFileException( \"NoSuchFileExceptionMessage\" ) ).when( parquetInput ).initSplits();\n      parquetInput.processRow( parquetInputMeta, parquetInputData );\n      fail( \"No Kettle Exception thrown\" );\n    } catch ( KettleException kex ) {\n      assertTrue( kex.getMessage().contains( expectedMessage ) );\n    }\n  }\n\n  @Test\n  public void testProcessRowKettleFailure() {\n    String expectedMessage = \"KettleExceptionMessage\";\n    try {\n      doThrow( new KettleException( expectedMessage ) ).when( parquetInput ).initSplits();\n      parquetInput.processRow( parquetInputMeta, parquetInputData );\n      fail( \"No Kettle Exception thrown\" );\n    } catch ( KettleException kex ) {\n      assertTrue( kex.getMessage().contains( expectedMessage ) );\n    } catch ( Exception ex ) {\n      fail( \"No other type of exception should be thrown\" );\n    }\n  }\n\n  @Test\n  public void testProcessRowGeneralFailure() {\n    String expectedMessage = \"GeneralExceptionMessage\";\n    try {\n      doThrow( new Exception( expectedMessage ) ).when( parquetInput ).initSplits();\n      parquetInput.processRow( parquetInputMeta, parquetInputData );\n      fail( 
\"No Kettle Exception thrown\" );\n    } catch ( KettleException kex ) {\n      assertTrue( kex.getMessage().contains( expectedMessage ) );\n    } catch ( Exception ex ) {\n      fail( \"No other type of exception should be thrown\" );\n    }\n  }\n\n  private RowMeta setParquetRowMeta() {\n    parquetRowMeta = new RowMeta();\n    ValueMetaInterface valueMetaString = new ValueMetaString( \"str\" );\n    parquetRowMeta.addValueMeta( valueMetaString );\n    ValueMetaInterface valueMetaBoolean = new ValueMetaBoolean( \"bool\" );\n    parquetRowMeta.addValueMeta( valueMetaBoolean );\n    ValueMetaInterface valueMetaInteger = new ValueMetaInteger( \"int\" );\n    parquetRowMeta.addValueMeta( valueMetaInteger );\n    return parquetRowMeta;\n  }\n\n  private RowMeta setInputRowMeta() {\n    inputRowMeta = new RowMeta();\n    ValueMetaInterface valueMetaString = new ValueMetaString( INPUT_STREAM_FIELD_NAME );\n    inputRowMeta.addValueMeta( valueMetaString );\n    ValueMetaInterface valueMetaString2 = new ValueMetaString( PASS_FIELD_NAME );\n    inputRowMeta.addValueMeta( valueMetaString2 );\n    return inputRowMeta;\n  }\n\n  private void setInputRows() {\n    setInputRowMeta();\n    inputRows = new RowMetaAndData[] {\n      new RowMetaAndData( parquetRowMeta, \"parquetFile\", \"pass1\" )\n    };\n\n  }\n\n  private void setParquetRows() {\n    setParquetRowMeta();\n    parquetRows = new RowMetaAndData[] {\n      new RowMetaAndData( parquetRowMeta, \"string1\", true, new Integer( 123 ) ),\n      new RowMetaAndData( parquetRowMeta, \"string2\", true, new Integer( 321 ) )\n    };\n  }\n\n  private class ParquetRecordIterator implements Iterator<RowMetaAndData> {\n\n    private Iterator<RowMetaAndData> iter;\n    private boolean reset;\n\n    ParquetRecordIterator() {\n      init();\n    }\n\n    private void init() {\n      iter = Arrays.asList( parquetRows ).iterator();\n      reset = false;\n    }\n\n    @Override\n    public boolean hasNext() {\n      if ( reset ) {\n    
    init();\n      }\n      if ( !iter.hasNext() ) {\n        reset = true;\n      }\n      return iter.hasNext();\n    }\n\n    @Override\n    public RowMetaAndData next() {\n      if ( reset ) {\n        init(); // Simulate a new iterator for the new file\n      }\n      return iter.next().clone();\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/test/java/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/output/ParquetOutputMetaInjectionTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.parquet.output;\n\nimport org.junit.Before;\nimport org.junit.Test;\n\nimport static org.mockito.Mockito.mock;\n\nimport org.pentaho.big.data.kettle.plugins.formats.impl.NamedClusterResolver;\nimport org.pentaho.big.data.kettle.plugins.formats.parquet.ParquetTypeConverter;\nimport org.pentaho.di.core.injection.BaseMetadataInjectionTest;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.format.ParquetSpec;\n\npublic class ParquetOutputMetaInjectionTest extends BaseMetadataInjectionTest<ParquetOutputMeta> {\n\n  @Before\n  public void setup() {\n    NamedClusterResolver namedClusterResolver = mock( NamedClusterResolver.class );\n    setup( new ParquetOutputMeta( namedClusterResolver ) );\n  }\n\n  @Test\n  public void test() throws Exception {\n    check( \"FILENAME\", new StringGetter() {\n      public String get() {\n        return meta.getFilename();\n      }\n    } );\n\n    check( \"ROW_GROUP_SIZE\", new StringGetter() {\n      public String get() {\n        return meta.getRowGroupSize();\n      }\n    } );\n    check( \"DATA_PAGE_SIZE\", new StringGetter() {\n      public String get() {\n        return meta.getDataPageSize();\n      }\n    } );\n    check( \"ENABLE_DICTIONARY\", new BooleanGetter() {\n      public boolean get() {\n        return meta.isEnableDictionary();\n      }\n    } );\n    check( \"DICT_PAGE_SIZE\", new StringGetter() {\n      public String get() {\n        return meta.getDictPageSize();\n      }\n    } );\n    check( 
\"OVERRIDE_OUTPUT\", new BooleanGetter() {\n      public boolean get() {\n        return meta.isOverrideOutput();\n      }\n    } );\n    check( \"INC_DATE_IN_FILENAME\", new BooleanGetter() {\n      public boolean get() {\n        return meta.isDateInFilename();\n      }\n    } );\n    check( \"INC_TIME_IN_FILENAME\", new BooleanGetter() {\n      public boolean get() {\n        return meta.isTimeInFilename();\n      }\n    } );\n    check( \"EXTENSION\", new StringGetter() {\n      public String get() {\n        return meta.getExtension();\n      }\n    } );\n\n    check( \"DATE_FORMAT\", new StringGetter() {\n      public String get() {\n        return meta.getDateTimeFormat();\n      }\n    } );\n\n    check( \"FIELD_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getOutputFields().get( 0 ).getPentahoFieldName();\n      }\n    } );\n\n    int [] supportedPdiTypes = {\n      ValueMetaInterface.TYPE_NUMBER,\n      ValueMetaInterface.TYPE_STRING,\n      ValueMetaInterface.TYPE_DATE,\n      ValueMetaInterface.TYPE_BOOLEAN,\n      ValueMetaInterface.TYPE_INTEGER,\n      ValueMetaInterface.TYPE_BIGNUMBER,\n      ValueMetaInterface.TYPE_SERIALIZABLE,\n      ValueMetaInterface.TYPE_BINARY,\n      ValueMetaInterface.TYPE_TIMESTAMP,\n      ValueMetaInterface.TYPE_INET\n    };\n    String[] typeNames = new String[ supportedPdiTypes.length ];\n    int[] typeIds = new int[ supportedPdiTypes.length ];\n    for ( int j = 0; j < supportedPdiTypes.length; j++ ) {\n      typeNames[ j ] = ValueMetaInterface.getTypeDescription( supportedPdiTypes[ j ] );\n      String parquetTypeName = ParquetTypeConverter.convertToParquetType( supportedPdiTypes[ j ] );\n      for ( ParquetSpec.DataType parquetType : ParquetSpec.DataType.values() ) {\n        if ( parquetType.getName().equals( parquetTypeName ) ) {\n          typeIds[ j ] = parquetType.getId();\n          break;\n        }\n      }\n    }\n    checkStringToInt( \"FIELD_TYPE\", new IntGetter() {\n      
public int get() {\n        return meta.getOutputFields().get( 0 ).getFormatType();\n      }\n    }, typeNames, typeIds );\n\n\n    ParquetSpec.DataType[] supportedParquetTypes = {\n      ParquetSpec.DataType.UTF8,\n      ParquetSpec.DataType.INT_32,\n      ParquetSpec.DataType.INT_64,\n      ParquetSpec.DataType.FLOAT,\n      ParquetSpec.DataType.DOUBLE,\n      ParquetSpec.DataType.BOOLEAN,\n      ParquetSpec.DataType.DECIMAL,\n      ParquetSpec.DataType.DATE,\n      ParquetSpec.DataType.TIMESTAMP_MILLIS,\n      ParquetSpec.DataType.BINARY\n    };\n    typeNames = new String[ supportedParquetTypes.length ];\n    typeIds = new int[ supportedParquetTypes.length ];\n    for ( int i = 0; i < supportedParquetTypes.length; i++ ) {\n      typeNames[ i ] = supportedParquetTypes[ i ].getName();\n      typeIds[ i ] = supportedParquetTypes[ i ].getId();\n    }\n    checkStringToInt( \"FIELD_PARQUET_TYPE\", new IntGetter() {\n      public int get() {\n        return meta.getOutputFields().get( 0 ).getFormatType();\n      }\n    }, typeNames, typeIds );\n\n    check( \"FIELD_DECIMAL_PRECISION\", new IntGetter() {\n      public int get() {\n        return meta.getOutputFields().get( 0 ).getPrecision();\n      }\n    } );\n\n    check( \"FIELD_DECIMAL_SCALE\", new IntGetter() {\n      public int get() {\n        return meta.getOutputFields().get( 0 ).getScale();\n      }\n    } );\n\n    check( \"FIELD_PATH\", new StringGetter() {\n      public String get() {\n        return meta.getOutputFields().get( 0 ).getFormatFieldName();\n      }\n    } );\n\n    check( \"FIELD_IF_NULL\", new StringGetter() {\n      public String get() {\n        return meta.getOutputFields().get( 0 ).getDefaultValue();\n      }\n    } );\n    check( \"FIELD_NULLABLE\", new BooleanGetter() {\n      public boolean get() {\n        return meta.getOutputFields().get( 0 ).getAllowNull();\n      }\n    } );\n\n    skipPropertyTest( \"COMPRESSION\" );\n    skipPropertyTest( \"PARQUET_VERSION\" );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/core/src/test/java/org/pentaho/big/data/kettle/plugins/formats/impl/parquet/output/ParquetOutputTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.impl.parquet.output;\n\nimport org.junit.Before;\nimport org.junit.Rule;\nimport org.junit.Test;\nimport org.junit.rules.ExpectedException;\nimport org.junit.runner.RunWith;\nimport org.mockito.ArgumentCaptor;\nimport org.mockito.Mock;\nimport org.mockito.MockedStatic;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.big.data.kettle.plugins.formats.impl.NamedClusterResolver;\nimport org.pentaho.big.data.kettle.plugins.formats.parquet.output.ParquetOutputField;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.RowMetaAndData;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.logging.LogLevel;\nimport org.pentaho.di.core.row.RowMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaString;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.RowHandler;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.steps.named.cluster.NamedClusterEmbedManager;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport 
org.pentaho.hadoop.shim.api.format.FormatService;\nimport org.pentaho.hadoop.shim.api.format.IPentahoParquetOutputFormat;\nimport org.pentaho.hadoop.shim.api.format.ParquetSpec;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\n\nimport java.io.File;\nimport java.nio.file.Files;\nimport java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.fail;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.anyBoolean;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.ArgumentMatchers.nullable;\nimport static org.mockito.Mockito.doNothing;\nimport static org.mockito.Mockito.doThrow;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.spy;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n@RunWith(MockitoJUnitRunner.class)\npublic class ParquetOutputTest {\n\n  private static final String OUTPUT_STEP_NAME = \"Output Step Name\";\n  private static final String OUTPUT_TRANS_NAME = \"Output Trans Name\";\n  private static final String OUTPUT_FILE_NAME = \"outputFileName\";\n\n  // Rule fields must be initialized at declaration; JUnit collects @Rule values before @Before runs\n  @Rule\n  public ExpectedException expectedException = ExpectedException.none();\n\n  @Mock\n  private StepMeta mockStepMeta;\n  @Mock\n  private StepDataInterface mockStepDataInterface;\n  @Mock\n  private TransMeta mockTransMeta;\n  @Mock\n  private Trans mockTrans;\n  @Mock\n  private NamedClusterServiceLocator mockNamedClusterServiceLocator;\n  @Mock\n  private NamedClusterService mockNamedClusterService;\n  @Mock\n  private MetastoreLocator mockMetaStoreLocator;\n  @Mock\n  private FormatService mockFormatService;\n  @Mock\n  private ParquetOutputData parquetOutputData;\n  @Mock\n  private RowHandler mockRowHandler;\n  @Mock\n  private IPentahoParquetOutputFormat mockPentahoParquetOutputFormat;\n  @Mock\n  private LogChannelInterface mockLogChannelInterface;\n  @Mock\n  private IPentahoParquetOutputFormat.IPentahoRecordWriter mockPentahoParquetRecordWriter;\n\n  private ParquetOutput parquetOutput;\n  private List<ParquetOutputField> parquetOutputFields;\n  private ParquetOutputMeta parquetOutputMeta;\n  private RowMeta dataInputRowMeta;\n  private RowMetaAndData[] dataInputRows;\n  private int currentParquetRow;\n\n  @Before\n  public void setUp() throws Exception {\n    KettleLogStore.init();\n    currentParquetRow = 0;\n    setDataInputRows();\n    setParquetOutputRows();\n    Collection<MetastoreLocator> metastoreLocatorCollection = new ArrayList<>();\n    metastoreLocatorCollection.add( mockMetaStoreLocator );\n    NamedClusterResolver namedClusterResolver;\n    try ( MockedStatic<PluginServiceLoader> pluginServiceLoaderMockedStatic = Mockito.mockStatic( PluginServiceLoader.class ) ) {\n      pluginServiceLoaderMockedStatic.when( () -> PluginServiceLoader.loadServices( MetastoreLocator.class ) )\n        .thenReturn( metastoreLocatorCollection );\n\n      // Mock the NamedClusterResolver instead of using the singleton\n      namedClusterResolver = Mockito.mock( NamedClusterResolver.class );\n      when( namedClusterResolver.getNamedClusterServiceLocator() ).thenReturn( mockNamedClusterServiceLocator );\n      when( namedClusterResolver.resolveNamedCluster( any( String.class ) ) ).thenReturn( null );\n\n      parquetOutputMeta = new ParquetOutputMeta( namedClusterResolver );\n      parquetOutputMeta.setFilename( OUTPUT_FILE_NAME );\n      parquetOutputMeta.setOverrideOutput( true );\n      parquetOutputMeta.setOutputFields( parquetOutputFields );\n\n      parquetOutputMeta.setParentStepMeta( mockStepMeta );\n      when( mockStepMeta.getName() ).thenReturn( OUTPUT_STEP_NAME );\n      when( mockTransMeta.findStep( OUTPUT_STEP_NAME ) 
).thenReturn( mockStepMeta );\n      when( mockTransMeta.getBowl() ).thenReturn( DefaultBowl.getInstance() );\n\n      try {\n        when( mockRowHandler.getRow() ).thenAnswer( answer -> returnNextParquetRow() );\n      } catch ( KettleException ke ) {\n        ke.printStackTrace();\n      }\n\n      when( mockFormatService.createOutputFormat( IPentahoParquetOutputFormat.class,\n        parquetOutputMeta.getNamedClusterResolver().resolveNamedCluster( parquetOutputMeta.getFilename() ) ) )\n        .thenReturn( mockPentahoParquetOutputFormat );\n      when( mockNamedClusterServiceLocator.getService( nullable( NamedCluster.class ), any( Class.class ) ) )\n        .thenReturn( mockFormatService );\n      when( mockPentahoParquetOutputFormat.createRecordWriter() ).thenReturn( mockPentahoParquetRecordWriter );\n\n      parquetOutput = spy( new ParquetOutput( mockStepMeta, mockStepDataInterface, 0, mockTransMeta, mockTrans ) );\n      parquetOutput.init( parquetOutputMeta, parquetOutputData );\n      parquetOutput.setInputRowMeta( dataInputRowMeta );\n      parquetOutput.setRowHandler( mockRowHandler );\n      parquetOutput.setLogLevel( LogLevel.ERROR );\n      parquetOutput.setTransMeta( mockTransMeta );\n    }\n  }\n\n  @Test\n  public void testProcessRow() throws Exception {\n    boolean result;\n    int rowsProcessed = 0;\n    ArgumentCaptor<RowMeta> rowMetaCaptor = ArgumentCaptor.forClass( RowMeta.class );\n    ArgumentCaptor<Object[]> dataCaptor = ArgumentCaptor.forClass( Object[].class );\n\n    do {\n      result = parquetOutput.processRow( parquetOutputMeta, parquetOutputData );\n      if ( result ) {\n        rowsProcessed++;\n      }\n    } while ( result );\n\n    // 3 rows should be output to a Parquet file\n    assertEquals( 3, rowsProcessed );\n    verify( mockRowHandler, times( 3 ) ).putRow( rowMetaCaptor.capture(), dataCaptor.capture() );\n    verify( parquetOutput, 
times( 3 ) ).incrementLinesOutput();\n    List<RowMeta> rowMetaCaptured = rowMetaCaptor.getAllValues();\n    List<Object[]> dataCaptured = dataCaptor.getAllValues();\n    for ( int rowNum = 0; rowNum < 3; rowNum++ ) {\n      assertEquals( 0, rowMetaCaptured.get( rowNum ).indexOfValue( \"StringName\" ) );\n      assertEquals( \"string\" + ( rowNum % 3 + 1 ), dataCaptured.get( rowNum )[0] );\n    }\n  }\n\n  @Test\n  public void initShouldPassEmbeddedMetastoreKey() {\n    ParquetOutputMeta stepMetaInterface = mock( ParquetOutputMeta.class );\n    ParquetOutputData stepDataInterface = mock( ParquetOutputData.class );\n    NamedClusterEmbedManager namedClusterEmbedManager = mock( NamedClusterEmbedManager.class );\n    when( mockTransMeta.getNamedClusterEmbedManager() ).thenReturn( namedClusterEmbedManager );\n    when( mockTransMeta.getEmbeddedMetastoreProviderKey() ).thenReturn( \"metastoreProviderKey\" );\n    parquetOutput.init( stepMetaInterface, stepDataInterface );\n\n    verify( namedClusterEmbedManager ).passEmbeddedMetastoreKey( mockTransMeta, \"metastoreProviderKey\" );\n  }\n\n  @Test\n  public void testProcessRowIllegalState() throws Exception {\n    doThrow( new IllegalStateException( \"IllegalStateExceptionMessage\" ) ).when( mockPentahoParquetOutputFormat )\n      .setOutputFile( anyString(), anyBoolean() );\n    when( parquetOutput.getLogChannel() ).thenReturn( mockLogChannelInterface );\n    assertFalse( parquetOutput.processRow( parquetOutputMeta, parquetOutputData ) );\n\n    verify( mockLogChannelInterface,\n      times( 1 ) )\n      .logError( \"IllegalStateExceptionMessage\" );\n  }\n\n  @Test\n  public void testProcessRowKettleFailure() {\n    String expectedMessage = \"KettleExceptionMessage\";\n    try {\n      doNothing().when( parquetOutput ).closeWriter();\n      doThrow( new KettleException( expectedMessage ) ).when( parquetOutput ).init( any() );\n      parquetOutput.processRow( parquetOutputMeta, parquetOutputData );\n      fail( \"No 
Kettle Exception thrown\" );\n    } catch ( KettleException kex ) {\n      assertTrue( kex.getMessage().contains( expectedMessage ) );\n    } catch ( Exception ex ) {\n      fail( \"No other type of exception should be thrown\" );\n    }\n  }\n\n  @Test\n  public void testProcessRowGeneralFailure() {\n    String expectedMessage = \"GeneralExceptionMessage\";\n    try {\n      doNothing().when( parquetOutput ).closeWriter();\n      doThrow( new Exception( expectedMessage ) ).when( parquetOutput ).init( any() );\n      parquetOutput.processRow( parquetOutputMeta, parquetOutputData );\n      fail( \"No Kettle Exception thrown\" );\n    } catch ( KettleException kex ) {\n      assertTrue( kex.getMessage().contains( expectedMessage ) );\n    } catch ( Exception ex ) {\n      fail( \"No other type of exception should be thrown\" );\n    }\n  }\n\n  private Object[] returnNextParquetRow() {\n    Object[] result = null;\n    if ( currentParquetRow < dataInputRows.length ) {\n      result = dataInputRows[currentParquetRow].getData().clone();\n      currentParquetRow++;\n    }\n    return result;\n  }\n\n  private void setParquetOutputRows() {\n    ParquetOutputField parquetOutputField = mock( ParquetOutputField.class );\n    parquetOutputFields = new ArrayList<>();\n    parquetOutputFields.add( parquetOutputField );\n  }\n\n  private void setDataInputRowMeta() {\n    dataInputRowMeta = new RowMeta();\n    ValueMetaInterface valueMetaString = new ValueMetaString( \"StringName\" );\n    dataInputRowMeta.addValueMeta( valueMetaString );\n  }\n\n  private void setDataInputRows() {\n    setDataInputRowMeta();\n    dataInputRows = new RowMetaAndData[] {\n      new RowMetaAndData( dataInputRowMeta, \"string1\" ),\n      new RowMetaAndData( dataInputRowMeta, \"string2\" ),\n      new RowMetaAndData( dataInputRowMeta, \"string3\" )\n    };\n  }\n\n  @Test\n  public void testAliasFile() throws Exception {\n    String aliasPath = Files.createTempDirectory( \"testAliasFile\" ) + 
File.separator + \"dummyFile\";\n    new File( aliasPath ).createNewFile();  // create the alias file so it and its parent can be successfully deleted\n    when( mockPentahoParquetOutputFormat.generateAlias( anyString() ) ).thenReturn( aliasPath );\n    boolean result;\n    int rowsProcessed = 0;\n    ArgumentCaptor<RowMeta> rowMetaCaptor = ArgumentCaptor.forClass( RowMeta.class );\n    ArgumentCaptor<Object[]> dataCaptor = ArgumentCaptor.forClass( Object[].class );\n\n    do {\n      result = parquetOutput.processRow( parquetOutputMeta, parquetOutputData );\n      if ( result ) {\n        rowsProcessed++;\n      }\n    } while ( result );\n\n    // 3 rows should be output to a Parquet file\n    assertEquals( 3, rowsProcessed );\n    verify( mockRowHandler, times( 3 ) ).putRow( rowMetaCaptor.capture(), dataCaptor.capture() );\n    List<RowMeta> rowMetaCaptured = rowMetaCaptor.getAllValues();\n    List<Object[]> dataCaptured = dataCaptor.getAllValues();\n    for ( int rowNum = 0; rowNum < 3; rowNum++ ) {\n      assertEquals( 0, rowMetaCaptured.get( rowNum ).indexOfValue( \"StringName\" ) );\n      assertEquals( \"string\" + ( rowNum % 3 + 1 ), dataCaptured.get( rowNum )[0] );\n    }\n    assertFalse( new File( aliasPath ).exists() );\n    File outputFile = new File( OUTPUT_FILE_NAME );\n    assertTrue( outputFile.exists() );\n    outputFile.delete();\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pentaho-big-data-kettle-plugins-formats</artifactId>\n  <packaging>pom</packaging>\n\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n\n  <modules>\n    <module>assemblies</module>\n    <module>core</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/formats-meta/pom.xml",
"content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-kettle-plugins-formats-meta</artifactId>\n  <packaging>jar</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n    <easymock.version>3.0</easymock.version>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-all</artifactId>\n      <version>${dependency.mockito.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.parquet</groupId>\n      <artifactId>parquet-hadoop</artifactId>\n      <version>${parquet.version}</version>\n      <scope>provided</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/BaseFormatInputField.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats;\n\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.trans.steps.file.BaseFileField;\nimport org.pentaho.hadoop.shim.api.format.IFormatInputField;\n\n/**\n * Base input step field for various big data file formats\n *\n * @author tkafalas\n */\npublic class BaseFormatInputField extends BaseFileField implements IFormatInputField {\n  @Injection( name = \"FIELD_PATH\", group = \"FIELDS\" )\n  protected String formatFieldName = null;\n\n  private int formatType;\n  private int precision = 0;\n  private int scale = 0;\n  private String stringFormat = \"\";\n\n  @Override\n  public String getFormatFieldName() {\n    return formatFieldName;\n  }\n\n  @Override\n  public void setFormatFieldName( String formatFieldName ) {\n    this.formatFieldName = formatFieldName;\n  }\n\n  @Override\n  public String getPentahoFieldName() {\n    return getName();\n  }\n\n  @Override\n  public void setPentahoFieldName( String pentahoFieldName ) {\n    setName( pentahoFieldName );\n  }\n\n  @Override\n  public int getPentahoType() {\n    return getType();\n  }\n\n  @Override\n  public void setPentahoType( int pentahoType ) {\n    setType( pentahoType );\n  }\n\n  @Override public int getFormatType() {\n    return formatType;\n  }\n\n  @Override public void setFormatType( int formatType ) {\n    this.formatType = formatType;\n  }\n\n  @Override public int getPrecision() {\n    return this.precision;\n  }\n\n  @Override public void setPrecision( int precision ) {\n    this.precision = precision;\n  }\n\n  
@Override public int getScale() {\n    return scale;\n  }\n\n  @Override public void setScale( int scale ) {\n    this.scale = scale;\n  }\n\n  @Override\n  public String getStringFormat() {\n    return stringFormat;\n  }\n\n  @Override\n  public void setStringFormat( String stringFormat ) {\n    this.stringFormat = stringFormat == null ? \"\" : stringFormat;\n  }\n\n  public void setPentahoType( String value ) {\n    setType( value );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/BaseFormatOutputField.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats;\n\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.hadoop.shim.api.format.IFormatOutputField;\n\npublic class BaseFormatOutputField implements IFormatOutputField {\n  public static final int DEFAULT_DECIMAL_PRECISION = 10;\n  public static final int DEFAULT_DECIMAL_SCALE = 0;\n\n  protected int formatType;\n\n  protected int pentahoType;\n\n  @Injection( name = \"FIELD_PATH\", group = \"FIELDS\" )\n  protected String formatFieldName;\n\n  @Injection( name = \"FIELD_NAME\", group = \"FIELDS\" )\n  protected String pentahoFieldName;\n\n  @Injection( name = \"FIELD_NULLABLE\", group = \"FIELDS\" )\n  protected boolean allowNull;\n\n  @Injection( name = \"FIELD_IF_NULL\", group = \"FIELDS\" )\n  protected String defaultValue;\n\n  @Injection( name = \"FIELD_DECIMAL_PRECISION\", group = \"FIELDS\" )\n  protected int precision;\n\n  @Injection( name = \"FIELD_DECIMAL_SCALE\", group = \"FIELDS\" )\n  protected int scale;\n\n  @Override\n  public String getFormatFieldName() {\n    return formatFieldName;\n  }\n\n  @Override\n  public void setFormatFieldName( String formatFieldName ) {\n    this.formatFieldName = formatFieldName;\n  }\n\n  @Override\n  public String getPentahoFieldName() {\n    return pentahoFieldName;\n  }\n\n  @Override\n  public void setPentahoFieldName( String pentahoFieldName ) {\n    this.pentahoFieldName = pentahoFieldName;\n  }\n\n  @Override\n  public boolean getAllowNull() {\n    return allowNull;\n  }\n\n  @Override\n  public void setAllowNull( boolean allowNull 
) {\n    this.allowNull = allowNull;\n  }\n\n  @Override\n  public String getDefaultValue() {\n    return defaultValue;\n  }\n\n  @Override\n  public void setDefaultValue( String defaultValue ) {\n    this.defaultValue = defaultValue;\n  }\n\n  @Injection( name = \"FIELD_NULL_STRING\", group = \"FIELDS\" )\n  public void setAllowNull( String allowNull ) {\n    if ( allowNull != null && allowNull.length() > 0 ) {\n      if ( allowNull.equalsIgnoreCase( \"yes\" ) || allowNull.equalsIgnoreCase( \"y\" ) ) {\n        this.allowNull = true;\n      } else if ( allowNull.equalsIgnoreCase( \"no\" ) || allowNull.equalsIgnoreCase( \"n\" ) ) {\n        this.allowNull = false;\n      } else {\n        this.allowNull = Boolean.parseBoolean( allowNull );\n      }\n    }\n  }\n\n  @Override\n  public int getFormatType() {\n    return formatType;\n  }\n\n  @Override\n  public void setFormatType( int formatType ) {\n    this.formatType = formatType;\n  }\n\n  @Override\n  public int getPrecision() {\n    return precision;\n  }\n\n  @Override\n  public void setPrecision( String precision ) {\n    if ( precision == null || precision.equals( \"\" ) ) {\n      this.precision = DEFAULT_DECIMAL_PRECISION;\n    } else {\n      this.precision = Integer.valueOf( precision );\n      if ( this.precision <= 0 ) {\n        this.precision = DEFAULT_DECIMAL_PRECISION;\n      }\n    }\n  }\n\n  @Override\n  public int getScale() {\n    return scale;\n  }\n\n  @Override\n  public void setScale( String scale ) {\n    if ( scale == null || scale.equals( \"\" ) ) {\n      this.scale = DEFAULT_DECIMAL_SCALE;\n    } else {\n      this.scale = Integer.valueOf( scale );\n      if ( this.scale < 0 ) {\n        this.scale = DEFAULT_DECIMAL_SCALE;\n      }\n    }\n  }\n\n  @Override\n  public int getPentahoType() {\n    return pentahoType;\n  }\n\n  @Override\n  public void setPentahoType( int pentahoType ) {\n    this.pentahoType = pentahoType;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/FormatInputFile.java",
"content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats;\n\nimport org.pentaho.di.trans.steps.file.BaseFileInputFiles;\n\n/**\n * Base class for format's input file - env added.\n * \n * @author <alexander_buloichik@epam.com>\n */\npublic class FormatInputFile extends BaseFileInputFiles {\n\n  public String[] environment = {};\n\n  /**\n   * we need to reallocate {@link #environment} too, since it can have a different length\n   */\n  @Override\n  public void normalizeAllocation( int length ) {\n    super.normalizeAllocation( length );\n    environment = normalizeAllocation( environment, length );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/FormatInputOutputField.java",
"content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats;\n\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.row.value.ValueMetaFactory;\nimport org.pentaho.di.trans.steps.file.BaseFileField;\n\n/**\n * Base class for format's input/output field - path added.\n * \n * @author <alexander_buloichik@epam.com>\n */\npublic class FormatInputOutputField extends BaseFileField {\n  @Injection( name = \"FIELD_PATH\", group = \"FIELDS\" )\n  protected String path;\n\n  @Injection( name = \"FIELD_NULLABLE\", group = \"FIELDS\" )\n  protected boolean nullable = true;\n\n  protected int sourceType;\n\n  public String getPath() {\n    return path;\n  }\n\n  public void setPath( String path ) {\n    this.path = path;\n  }\n\n  public boolean isNullable() {\n    return nullable;\n  }\n\n  public void setNullable( boolean nullable ) {\n    this.nullable = nullable;\n  }\n\n  /**\n   * @return The field type as read from the source, before it was possibly overridden in the UI\n   * (e.g. the AvroInput step)\n   */\n  public int getSourceType() {\n    return sourceType;\n  }\n\n  public void setSourceType( int sourceType ) {\n    this.sourceType = sourceType;\n  }\n\n  @Injection( name = \"FIELD_SOURCE_TYPE\", group = \"FIELDS\" )\n  public void setSourceType( String value ) {\n    this.sourceType = ValueMetaFactory.getIdForValueMeta( value );\n  }\n\n  public String getSourceTypeDesc() {\n    return ValueMetaFactory.getValueMetaName( sourceType );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/orc/OrcFormatInputOutputField.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.orc;\n\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.row.value.ValueMetaFactory;\n\n/**\n * Base class for format's input/output field - path added.\n * \n * @author JRice <joseph.rice@hitachivantara.com>\n */\npublic class OrcFormatInputOutputField {\n  @Injection( name = \"FIELD_PATH\", group = \"FIELDS\" )\n  protected String path;\n\n  @Injection( name = \"FIELD_NAME\", group = \"FIELDS\" )\n  private String name;\n\n  @Injection( name = \"FIELD_NULL_STRING\", group = \"FIELDS\" )\n  private String nullString;\n\n  @Injection( name = \"FIELD_IF_NULL\", group = \"FIELDS\" )\n  private String ifNullValue;\n\n  private int type;\n\n  public String getPath() {\n    return path;\n  }\n\n  public void setPath( String path ) {\n    this.path = path;\n  }\n\n  public String getName() {\n    return name;\n  }\n\n  public void setName( String name ) {\n    this.name = name;\n  }\n\n  public String getNullString() {\n    return nullString;\n  }\n\n  public void setNullString( String nullString ) {\n    this.nullString = nullString;\n  }\n\n  public String getIfNullValue() {\n    return ifNullValue;\n  }\n\n  public void setIfNullValue( String ifNullValue ) {\n    this.ifNullValue = ifNullValue;\n  }\n\n  public int getType() {\n    return type;\n  }\n\n  public void setType( int type ) {\n    this.type = type;\n  }\n\n  public String getTypeDesc() {\n    return ValueMetaFactory.getValueMetaName( type );\n  }\n\n  @Injection( name = \"FIELD_TYPE\", group = \"FIELDS\" )\n  public 
void setType( String value ) {\n    this.type = ValueMetaFactory.getIdForValueMeta( value );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/orc/OrcInputField.java",
"content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.orc;\n\nimport org.pentaho.big.data.kettle.plugins.formats.BaseFormatInputField;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.row.value.ValueMetaFactory;\nimport org.pentaho.hadoop.shim.api.format.IOrcInputField;\nimport org.pentaho.hadoop.shim.api.format.OrcSpec;\n\n/**\n * @author tkafalas\n */\npublic class OrcInputField extends BaseFormatInputField implements IOrcInputField {\n  public OrcSpec.DataType getOrcType() {\n    return OrcSpec.DataType.getDataType( getFormatType() );\n  }\n\n  @Override\n  public void setOrcType( OrcSpec.DataType orcType ) {\n    setFormatType( orcType.getId() );\n  }\n\n  @Injection( name = \"ORC_TYPE\", group = \"FIELDS\" )\n  @Override\n  public void setOrcType( String orcType ) {\n    for ( OrcSpec.DataType tmpType : OrcSpec.DataType.values() ) {\n      // Match on Name ( for dialog ) or Enum Name ( for metadata injection ); note that the former uses \"Int\" and\n      // the latter uses \"INTEGER\"\n      if ( tmpType.getName().equalsIgnoreCase( orcType ) || tmpType.toString().equalsIgnoreCase( orcType ) ) {\n        setFormatType( tmpType.getId() );\n        break;\n      }\n    }\n  }\n\n  @Override\n  public String getTypeDesc() {\n    return ValueMetaFactory.getValueMetaName( getPentahoType() );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/orc/OrcTypeConverter.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.orc;\n\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaFactory;\nimport org.pentaho.hadoop.shim.api.format.OrcSpec;\n\n/**\n * Created by rmansoor on 8/8/2018.\n */\npublic class OrcTypeConverter {\n\n  public static String convertToOrcType( int pdiType ) {\n    switch ( pdiType ) {\n      case ValueMetaInterface.TYPE_INET:\n      case ValueMetaInterface.TYPE_STRING:\n        return OrcSpec.DataType.STRING.getName();\n      case ValueMetaInterface.TYPE_TIMESTAMP:\n        return OrcSpec.DataType.TIMESTAMP.getName();\n      case ValueMetaInterface.TYPE_BINARY:\n        return OrcSpec.DataType.BINARY.getName();\n      case ValueMetaInterface.TYPE_BIGNUMBER:\n        return OrcSpec.DataType.DECIMAL.getName();\n      case ValueMetaInterface.TYPE_BOOLEAN:\n        return OrcSpec.DataType.BOOLEAN.getName();\n      case ValueMetaInterface.TYPE_DATE:\n        return OrcSpec.DataType.DATE.getName();\n      case ValueMetaInterface.TYPE_INTEGER:\n        return OrcSpec.DataType.INTEGER.getName();\n      case ValueMetaInterface.TYPE_NUMBER:\n        return OrcSpec.DataType.DOUBLE.getName();\n      default:\n        return OrcSpec.DataType.NULL.getName();\n    }\n  }\n\n  public static String convertToOrcType( String type ) {\n    int pdiType = ValueMetaFactory.getIdForValueMeta( type );\n    if ( pdiType > 0 ) {\n      return convertToOrcType( pdiType );\n    } else {\n      return type;\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/orc/input/OrcInputMetaBase.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.orc.input;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.pentaho.big.data.kettle.plugins.formats.FormatInputFile;\nimport org.pentaho.big.data.kettle.plugins.formats.orc.OrcInputField;\nimport org.pentaho.big.data.kettle.plugins.formats.orc.OrcTypeConverter;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.exception.KettlePluginException;\nimport org.pentaho.di.core.exception.KettleStepException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaBase;\nimport org.pentaho.di.core.row.value.ValueMetaFactory;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.vfs.AliasedFileObject;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.steps.file.BaseFileInputAdditionalField;\nimport org.pentaho.di.trans.steps.file.BaseFileInputMeta;\nimport org.pentaho.di.workarounds.ResolvableResource;\nimport org.pentaho.hadoop.shim.api.format.IOrcInputField;\nimport org.pentaho.hadoop.shim.api.format.OrcSpec;\nimport 
org.pentaho.metastore.api.IMetaStore;\nimport org.w3c.dom.Node;\n\nimport java.util.List;\n\n/**\n * Orc input meta step without Hadoop-dependent classes. Required for reading the metadata in the Spark native code.\n *\n * @author Jacob Gminder\n */\n@SuppressWarnings( \"deprecation\" )\npublic abstract class OrcInputMetaBase extends\n    BaseFileInputMeta<BaseFileInputAdditionalField, FormatInputFile, OrcInputField> implements ResolvableResource {\n\n  public OrcInputMetaBase() {\n    additionalOutputFields = new BaseFileInputAdditionalField();\n    inputFiles = new FormatInputFile();\n    inputFields = new OrcInputField[ 0 ];\n  }\n\n  public String getFilename() {\n    if ( inputFiles != null && inputFiles.fileName != null\n        && inputFiles.fileName.length > 0 ) {\n      return inputFiles.fileName[0];\n    } else {\n      return null;\n    }\n  }\n\n  public void setFilename( String filename ) {\n    inputFiles.fileName[0] = filename;\n  }\n\n  public OrcInputField[] getInputFields() {\n    return inputFields;\n  }\n\n  public void setInputFields( OrcInputField[] inputFields ) {\n    this.inputFields = inputFields;\n  }\n\n  public void setInputFields( List<OrcInputField> inputFields ) {\n    this.inputFields = new OrcInputField[inputFields.size()];\n    this.inputFields = inputFields.toArray( this.inputFields );\n  }\n\n  @Override\n  public String getXML() {\n    StringBuilder retval = new StringBuilder( 1500 );\n\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"passing_through_fields\", inputFiles.passingThruFields ) );\n    retval.append( \"    <file>\" ).append( Const.CR );\n    // The arrays inputFiles.fileName, inputFiles.fileMask, inputFiles.excludeFileMask, inputFiles.fileRequired and\n    // inputFiles.includeSubFolders must all have the same length, to prevent an ArrayIndexOutOfBoundsException below.\n    inputFiles.normalizeAllocation( inputFiles.fileName.length );\n    for ( int i = 0; i < inputFiles.fileName.length; i++ ) {\n      retval.append( \"      \" ).append( 
XMLHandler.addTagValue( \"environment\", inputFiles.environment[i] ) );\n\n      if ( parentStepMeta != null && parentStepMeta.getParentTransMeta() != null ) {\n        parentStepMeta.getParentTransMeta().getNamedClusterEmbedManager().registerUrl( inputFiles.fileName[i] );\n      }\n\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( \"name\", inputFiles.fileName[i] ) );\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( \"filemask\", inputFiles.fileMask[i] ) );\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( \"exclude_filemask\", inputFiles.excludeFileMask[i] ) );\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( \"file_required\", inputFiles.fileRequired[i] ) );\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( \"include_subfolders\",\n          inputFiles.includeSubFolders[i] ) );\n    }\n    retval.append( \"    </file>\" ).append( Const.CR );\n\n    retval.append( \"    <fields>\" ).append( Const.CR );\n    for ( int i = 0; i < inputFields.length; i++ ) {\n      OrcInputField field = inputFields[ i ];\n      retval.append( \"      <field>\" ).append( Const.CR );\n      retval.append( \"        \" ).append( XMLHandler.addTagValue( \"path\", field.getFormatFieldName() ) );\n      retval.append( \"        \" ).append( XMLHandler.addTagValue( \"name\", field.getPentahoFieldName() ) );\n      retval.append( \"        \" ).append( XMLHandler.addTagValue( \"type\", field.getTypeDesc() ) );\n      OrcSpec.DataType orcDataType = field.getOrcType();\n      if ( orcDataType != null && orcDataType.getName() != null && !orcDataType.getName().equalsIgnoreCase( OrcSpec.DataType.NULL.getName() ) ) {\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"orc_type\", orcDataType.getName() ) );\n      } else {\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"orc_type\", OrcTypeConverter.convertToOrcType( field.getTypeDesc() ) ) );\n      }\n\n      if ( 
field.getStringFormat() != null ) {\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"format\", field.getStringFormat() ) );\n      }\n      retval.append( \"      </field>\" ).append( Const.CR );\n    }\n    retval.append( \"    </fields>\" ).append( Const.CR );\n\n    return retval.toString();\n  }\n\n  @Override\n  public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_transformation, ObjectId id_step )\n    throws KettleException {\n    try {\n      rep.saveStepAttribute( id_transformation, id_step, \"passing_through_fields\", inputFiles.passingThruFields );\n      if ( !( inputFiles.fileName.length == 1 && inputFiles.fileName[0].equalsIgnoreCase( \"\" ) ) ) {\n        for ( int i = 0; i < inputFiles.fileName.length; i++ ) {\n          rep.saveStepAttribute( id_transformation, id_step, i, \"environment\", inputFiles.environment[i] );\n          rep.saveStepAttribute( id_transformation, id_step, i, \"file_name\", inputFiles.fileName[i] );\n          rep.saveStepAttribute( id_transformation, id_step, i, \"file_mask\", inputFiles.fileMask[i] );\n          rep.saveStepAttribute( id_transformation, id_step, i, \"exclude_file_mask\", inputFiles.excludeFileMask[i] );\n          rep.saveStepAttribute( id_transformation, id_step, i, \"file_required\", inputFiles.fileRequired[i] );\n          rep.saveStepAttribute( id_transformation, id_step, i, \"include_subfolders\", inputFiles.includeSubFolders[i] );\n        }\n      }\n\n      for ( int i = 0; i < inputFields.length; i++ ) {\n        OrcInputField field = inputFields[ i ];\n\n        rep.saveStepAttribute( id_transformation, id_step, i, \"path\", field.getFormatFieldName() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"name\", field.getPentahoFieldName() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"type\", field.getTypeDesc() );\n        OrcSpec.DataType orcDataType = field.getOrcType();\n        if ( orcDataType != null && 
orcDataType.getName() != null && !orcDataType.getName().equalsIgnoreCase( OrcSpec.DataType.NULL.getName() ) ) {\n          rep.saveStepAttribute( id_transformation, id_step, i, \"orc_type\", orcDataType.getName() );\n        } else {\n          rep.saveStepAttribute( id_transformation, id_step, i, \"orc_type\", OrcTypeConverter.convertToOrcType( field.getTypeDesc() ) );\n        }\n\n        if ( field.getStringFormat() != null ) {\n          rep.saveStepAttribute( id_transformation, id_step, i, \"format\", field.getStringFormat() );\n        }\n      }\n    } catch ( Exception e ) {\n      throw new KettleException( \"Unable to save step information to the repository for id_step=\" + id_step, e );\n    }\n  }\n\n  @Override\n  public void loadXML( Node stepnode, List<DatabaseMeta> databases, IMetaStore metaStore ) throws KettleXMLException {\n    Node filenode = XMLHandler.getSubNode( stepnode, \"file\" );\n    Node fields = XMLHandler.getSubNode( stepnode, \"fields\" );\n    int nrfiles = XMLHandler.countNodes( filenode, \"name\" );\n    int nrfields = XMLHandler.countNodes( fields, \"field\" );\n\n    String passThroughFields = XMLHandler.getTagValue( stepnode, \"passing_through_fields\" ) == null ? 
\"false\"\n            : XMLHandler.getTagValue( stepnode, \"passing_through_fields\" );\n    allocateFiles( nrfiles );\n    inputFiles.passingThruFields = ValueMetaBase.convertStringToBoolean( passThroughFields );\n    for ( int i = 0; i < nrfiles; i++ ) {\n      Node envnode = XMLHandler.getSubNodeByNr( filenode, \"environment\", i );\n      Node filenamenode = XMLHandler.getSubNodeByNr( filenode, \"name\", i );\n      Node filemasknode = XMLHandler.getSubNodeByNr( filenode, \"filemask\", i );\n      Node excludefilemasknode = XMLHandler.getSubNodeByNr( filenode, \"exclude_filemask\", i );\n      Node fileRequirednode = XMLHandler.getSubNodeByNr( filenode, \"file_required\", i );\n      Node includeSubFoldersnode = XMLHandler.getSubNodeByNr( filenode, \"include_subfolders\", i );\n      inputFiles.environment[i] = XMLHandler.getNodeValue( envnode );\n      inputFiles.fileName[i] = XMLHandler.getNodeValue( filenamenode );\n      inputFiles.fileMask[i] = XMLHandler.getNodeValue( filemasknode );\n      inputFiles.excludeFileMask[i] = XMLHandler.getNodeValue( excludefilemasknode );\n      inputFiles.fileRequired[i] = XMLHandler.getNodeValue( fileRequirednode );\n      inputFiles.includeSubFolders[i] = XMLHandler.getNodeValue( includeSubFoldersnode );\n    }\n\n    inputFields = new OrcInputField[ nrfields ];\n    for ( int i = 0; i < nrfields; i++ ) {\n      Node fnode = XMLHandler.getSubNodeByNr( fields, \"field\", i );\n\n      OrcInputField field = new OrcInputField();\n      field.setFormatFieldName( XMLHandler.getTagValue( fnode, \"path\" ) );\n      field.setPentahoFieldName( XMLHandler.getTagValue( fnode, \"name\" ) );\n      field.setPentahoType( ValueMetaFactory.getIdForValueMeta( XMLHandler.getTagValue( fnode, \"type\" ) ) );\n      String orcType = XMLHandler.getTagValue( fnode, \"orc_type\" );\n      if ( orcType != null && !orcType.equalsIgnoreCase( \"null\" ) ) {\n        field.setOrcType( orcType );\n      } else {\n        field.setOrcType( 
OrcTypeConverter.convertToOrcType( field.getPentahoType() ) );\n      }\n      String stringFormat = XMLHandler.getTagValue( fnode, \"format\" );\n      field.setStringFormat( stringFormat == null ? \"\" : stringFormat );\n      this.inputFields[ i ] = field;\n    }\n  }\n\n  @Override\n  public void readRep( Repository rep, IMetaStore metaStore, ObjectId id_step, List<DatabaseMeta> databases )\n    throws KettleException {\n    try {\n      int nrfiles = rep.countNrStepAttributes( id_step, \"file_name\" );\n\n      allocateFiles( nrfiles );\n\n      inputFiles.passingThruFields = rep.getStepAttributeBoolean( id_step, \"passing_through_fields\" );\n      for ( int i = 0; i < nrfiles; i++ ) {\n        inputFiles.environment[i] = rep.getStepAttributeString( id_step, i, \"environment\" );\n        inputFiles.fileName[i] = rep.getStepAttributeString( id_step, i, \"file_name\" );\n        inputFiles.fileMask[i] = rep.getStepAttributeString( id_step, i, \"file_mask\" );\n        inputFiles.excludeFileMask[i] = rep.getStepAttributeString( id_step, i, \"exclude_file_mask\" );\n        inputFiles.fileRequired[i] = rep.getStepAttributeString( id_step, i, \"file_required\" );\n        if ( !YES.equalsIgnoreCase( inputFiles.fileRequired[i] ) ) {\n          inputFiles.fileRequired[i] = NO;\n        }\n        inputFiles.includeSubFolders[i] = rep.getStepAttributeString( id_step, i, \"include_subfolders\" );\n        if ( !YES.equalsIgnoreCase( inputFiles.includeSubFolders[i] ) ) {\n          inputFiles.includeSubFolders[i] = NO;\n        }\n      }\n\n      int nrfields = rep.countNrStepAttributes( id_step, \"name\" );\n      inputFields = new OrcInputField[ nrfields ];\n      for ( int i = 0; i < nrfields; i++ ) {\n        OrcInputField field = new OrcInputField();\n        field.setFormatFieldName( rep.getStepAttributeString( id_step, i, \"path\" ) );\n        field.setPentahoFieldName( rep.getStepAttributeString( id_step, i, \"name\" ) );\n        field.setPentahoType( 
ValueMetaFactory.getIdForValueMeta( rep.getStepAttributeString( id_step, i, \"type\" ) ) );\n        field.setOrcType( rep.getStepAttributeString( id_step, i, \"orc_type\" ) );\n        String stringFormat = rep.getStepAttributeString( id_step, i, \"format\" );\n        field.setStringFormat( stringFormat == null ? \"\" : stringFormat );\n\n        this.inputFields[ i ] = field;\n      }\n\n    } catch ( Exception e ) {\n      throw new KettleException( \"Unexpected error reading step information from the repository\", e );\n    }\n  }\n\n  public void allocateFiles( int nrFiles ) {\n    inputFiles.environment = new String[nrFiles];\n    inputFiles.fileName = new String[nrFiles];\n    inputFiles.fileMask = new String[nrFiles];\n    inputFiles.excludeFileMask = new String[nrFiles];\n    inputFiles.fileRequired = new String[nrFiles];\n    inputFiles.includeSubFolders = new String[nrFiles];\n  }\n\n  /**\n   * TODO: remove from base\n   */\n  @Override\n  public String getEncoding() {\n    return null;\n  }\n\n  @Override\n  public void setDefault() {\n    allocateFiles( 0 );\n    inputFields = new OrcInputField[ 0 ];\n  }\n\n  @Override\n  public void resolve( Bowl bowl ) {\n    if ( inputFiles != null && inputFiles.fileName != null ) {\n      for ( int i = 0; i < inputFiles.fileName.length; i++ ) {\n        try {\n          String realFileName = getParentStepMeta().getParentTransMeta().environmentSubstitute( inputFiles.fileName[ i ] );\n          FileObject fileObject = KettleVFS.getInstance( bowl ).getFileObject( realFileName );\n          if ( AliasedFileObject.isAliasedFile( fileObject ) ) {\n            inputFiles.fileName[ i ] = ( (AliasedFileObject) fileObject ).getAELSafeURIString();\n          }\n        } catch ( KettleFileException e ) {\n          throw new RuntimeException( e );\n        }\n      }\n    }\n  }\n\n  @Override\n  public void getFields( Bowl bowl, RowMetaInterface rowMeta, String origin, RowMetaInterface[] info, StepMeta nextStep,\n         
                VariableSpace space, Repository repository, IMetaStore metaStore ) throws\n    KettleStepException {\n    try {\n      if ( !inputFiles.passingThruFields ) {\n        // the incoming fields are not passed through to the output\n        rowMeta.clear();\n      } else {\n        if ( info != null ) {\n          boolean found = false;\n          for ( int i = 0; i < info.length && !found; i++ ) {\n            if ( info[i] != null ) {\n              rowMeta.mergeRowMeta( info[i], origin );\n              found = true;\n            }\n          }\n        }\n      }\n      for ( IOrcInputField field : getInputFields() ) {\n        String value = space.environmentSubstitute( field.getPentahoFieldName() );\n        ValueMetaInterface v = ValueMetaFactory.createValueMeta( value, field.getPentahoType() );\n        v.setOrigin( origin );\n        rowMeta.addValueMeta( v );\n      }\n    } catch ( KettlePluginException e ) {\n      throw new KettleStepException( \"Unable to create value type\", e );\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/orc/output/OrcOutputField.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.orc.output;\n\nimport org.pentaho.big.data.kettle.plugins.formats.BaseFormatOutputField;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.hadoop.shim.api.format.IOrcOutputField;\nimport org.pentaho.hadoop.shim.api.format.OrcSpec;\n\npublic class OrcOutputField extends BaseFormatOutputField implements IOrcOutputField {\n  public OrcSpec.DataType getOrcType() {\n    return OrcSpec.DataType.values()[ formatType ];\n  }\n\n  @Override\n  public void setFormatType( OrcSpec.DataType orcType ) {\n    this.formatType = orcType.getId();\n  }\n\n  @Override\n  public void setFormatType( int formatType ) {\n    for ( OrcSpec.DataType orcType : OrcSpec.DataType.values() ) {\n      if ( orcType.getId() == formatType ) {\n        this.formatType = formatType;\n        break;\n      }\n    }\n  }\n\n  @Injection( name = \"FIELD_TYPE\", group = \"FIELDS\" )\n  public void setFormatType( String typeName ) {\n    try {\n      setFormatType( Integer.parseInt( typeName ) );\n    } catch ( NumberFormatException nfe ) {\n      for ( OrcSpec.DataType orcType : OrcSpec.DataType.values() ) {\n        // Match on the name (for the dialog) or the enum name (for metadata injection); note that the former uses\n        // \"Int\" and the latter uses \"INTEGER\".\n        if ( orcType.getName().equalsIgnoreCase( typeName ) || orcType.toString().equalsIgnoreCase( typeName ) ) {\n          this.formatType = orcType.getId();\n          return;\n        }\n      }\n    }\n  }\n\n  public boolean isDecimalType() {\n    return getOrcType().equals( OrcSpec.DataType.DECIMAL );\n  }\n\n  @Override\n  public void setPrecision( String precision ) {\n    if ( ( precision == null ) || ( precision.trim().length() == 0 ) ) {\n      this.precision = isDecimalType() ? OrcSpec.DEFAULT_DECIMAL_PRECISION : 0;\n    } else {\n      this.precision = Integer.parseInt( precision );\n      if ( ( this.precision <= 0 ) && isDecimalType() ) {\n        this.precision = OrcSpec.DEFAULT_DECIMAL_PRECISION;\n      }\n    }\n  }\n\n  @Override\n  public void setScale( String scale ) {\n    if ( ( scale == null ) || ( scale.trim().length() == 0 ) ) {\n      this.scale = isDecimalType() ? OrcSpec.DEFAULT_DECIMAL_SCALE : 0;\n    } else {\n      this.scale = Integer.parseInt( scale );\n      if ( this.scale < 0 ) {\n        this.scale = isDecimalType() ? OrcSpec.DEFAULT_DECIMAL_SCALE : 0;\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/orc/output/OrcOutputMetaBase.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.orc.output;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.orc.CompressionKind;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.injection.InjectionDeep;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.vfs.AliasedFileObject;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\nimport org.pentaho.di.workarounds.ResolvableResource;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.w3c.dom.Node;\n\nimport java.text.SimpleDateFormat;\nimport java.util.ArrayList;\nimport java.util.Date;\nimport java.util.List;\nimport java.util.function.Function;\n\n/**\n * Orc output meta step without Hadoop-dependent classes. 
Required for reading the metadata in the Spark native code.\n *\n * @author Alexander <Buloichik@epam.com>\n */\npublic abstract class OrcOutputMetaBase extends BaseStepMeta implements StepMetaInterface, ResolvableResource {\n\n  private static final Class<?> PKG = OrcOutputMetaBase.class;\n  public static final int DEFAULT_ROWS_BETWEEN_ENTRIES = 10000;\n  public static final int DEFAULT_STRIPE_SIZE = 64; // In megabytes\n  public static final int DEFAULT_COMPRESS_SIZE = 256; // In kilobytes\n\n  @Injection( name = \"FILENAME\" )\n  private String filename;\n\n  @InjectionDeep\n  private List<OrcOutputField> outputFields = new ArrayList<>();\n\n  @Injection( name = \"OPTIONS_COMPRESSION\" )\n  protected String compressionType = \"\";\n\n  @Injection( name = \"OPTIONS_STRIPE_SIZE\" )\n  protected int stripeSize = DEFAULT_STRIPE_SIZE;\n\n  @Injection( name = \"OPTIONS_COMPRESS_SIZE\" )\n  protected int compressSize = DEFAULT_COMPRESS_SIZE;\n\n  @Injection( name = \"OPTIONS_ROWS_BETWEEN_ENTRIES\" )\n  protected int rowsBetweenEntries = 0;\n\n  @Injection( name = \"OPTIONS_DATE_IN_FILE_NAME\" )\n  protected boolean dateInFileName = false;\n\n  @Injection( name = \"OPTIONS_TIME_IN_FILE_NAME\" )\n  protected boolean timeInFileName = false;\n\n  @Injection( name = \"OPTIONS_DATE_FORMAT\" )\n  protected String dateTimeFormat = \"\";\n\n  @Injection( name = \"OVERRIDE_OUTPUT\" )\n  protected boolean overrideOutput;\n\n  @Override\n  public void setDefault() {\n    // TODO Auto-generated method stub\n  }\n\n  public String getFilename() {\n    return filename;\n  }\n\n  public boolean isOverrideOutput() {\n    return overrideOutput;\n  }\n\n  public void setOverrideOutput( boolean overrideOutput ) {\n    this.overrideOutput = overrideOutput;\n  }\n\n  public void setFilename( String filename ) {\n    this.filename = filename;\n  }\n\n  public List<OrcOutputField> getOutputFields() {\n    return outputFields;\n  }\n\n  public void setOutputFields( List<OrcOutputField> outputFields ) {\n    this.outputFields = 
outputFields;\n  }\n\n  public int getStripeSize() {\n    return stripeSize;\n  }\n\n  public void setStripeSize( int stripeSize ) {\n    this.stripeSize = stripeSize;\n  }\n\n  public int getCompressSize() {\n    return compressSize;\n  }\n\n  public void setCompressSize( int compressSize ) {\n    this.compressSize = compressSize;\n  }\n\n  public int getRowsBetweenEntries() {\n    return rowsBetweenEntries;\n  }\n\n  public void setRowsBetweenEntries( int rowsBetweenEntries ) {\n    this.rowsBetweenEntries = rowsBetweenEntries;\n  }\n\n  public boolean isDateInFileName() {\n    return dateInFileName;\n  }\n\n  public void setDateInFileName( boolean dateInFileName ) {\n    this.dateInFileName = dateInFileName;\n  }\n\n  public boolean isTimeInFileName() {\n    return timeInFileName;\n  }\n\n  public void setTimeInFileName( boolean timeInFileName ) {\n    this.timeInFileName = timeInFileName;\n  }\n\n  public String getDateTimeFormat() {\n    return dateTimeFormat;\n  }\n\n  public void setDateTimeFormat( String dateTimeFormat ) {\n    this.dateTimeFormat = dateTimeFormat;\n  }\n\n  @Override\n  public void loadXML( Node stepnode, List<DatabaseMeta> databases, IMetaStore metaStore ) throws KettleXMLException {\n    readData( stepnode, metaStore );\n  }\n\n  private void readData( Node stepnode, IMetaStore metastore ) throws KettleXMLException {\n    try {\n      Node fields = XMLHandler.getSubNode( stepnode, \"fields\" );\n      int nrfields = XMLHandler.countNodes( fields, \"field\" );\n      List<OrcOutputField> orcOutputFields = new ArrayList<>();\n      for ( int i = 0; i < nrfields; i++ ) {\n        Node fnode = XMLHandler.getSubNodeByNr( fields, \"field\", i );\n        OrcOutputField outputField = new OrcOutputField();\n        outputField.setFormatFieldName( XMLHandler.getTagValue( fnode, \"path\" ) );\n        outputField.setPentahoFieldName( XMLHandler.getTagValue( fnode, \"name\" ) );\n        outputField.setFormatType( XMLHandler.getTagValue( fnode, 
\"type\" ) );\n        outputField.setPrecision( XMLHandler.getTagValue( fnode, \"precision\" ) );\n        outputField.setScale( XMLHandler.getTagValue( fnode, \"scale\" ) );\n        outputField.setAllowNull( XMLHandler.getTagValue( fnode, \"nullable\" ) );\n        outputField.setDefaultValue( XMLHandler.getTagValue( fnode, \"default\" ) );\n        orcOutputFields.add( outputField );\n      }\n      this.outputFields = orcOutputFields;\n\n      filename = XMLHandler.getTagValue( stepnode, FieldNames.FILE_NAME );\n      overrideOutput = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( stepnode, FieldNames.OVERRIDE_OUTPUT ) );\n      compressionType = XMLHandler.getTagValue( stepnode, FieldNames.COMPRESSION );\n      stripeSize = Integer.parseInt( XMLHandler.getTagValue( stepnode, FieldNames.STRIPE_SIZE ), 10 );\n      compressSize = Integer.parseInt( XMLHandler.getTagValue( stepnode, FieldNames.COMPRESS_SIZE ), 10 );\n      rowsBetweenEntries = Integer.parseInt( XMLHandler.getTagValue( stepnode, FieldNames.ROWS_BETWEEN_ENTRIES ), 10 );\n      dateTimeFormat = XMLHandler.getTagValue( stepnode, FieldNames.DATE_FORMAT );\n      dateInFileName = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( stepnode, FieldNames.DATE_IN_FILE_NAME ) );\n      timeInFileName = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( stepnode, FieldNames.TIME_IN_FILE_NAME ) );\n\n    } catch ( Exception e ) {\n      throw new KettleXMLException( \"Unable to load step info from XML\", e );\n    }\n  }\n\n  @Override\n  public String getXML() {\n    StringBuilder retval = new StringBuilder( 800 );\n    final String INDENT = \"    \";\n\n    retval.append( INDENT ).append( XMLHandler.addTagValue( FieldNames.FILE_NAME, filename ) );\n    retval.append( INDENT ).append( XMLHandler.addTagValue( FieldNames.OVERRIDE_OUTPUT, overrideOutput ) );\n    retval.append( INDENT ).append( XMLHandler.addTagValue( FieldNames.COMPRESSION, compressionType ) );\n    retval.append( INDENT ).append( 
XMLHandler.addTagValue( FieldNames.STRIPE_SIZE, stripeSize ) );\n    retval.append( INDENT ).append( XMLHandler.addTagValue( FieldNames.COMPRESS_SIZE, compressSize ) );\n    retval.append( INDENT ).append( XMLHandler.addTagValue( FieldNames.ROWS_BETWEEN_ENTRIES, rowsBetweenEntries ) );\n    retval.append( INDENT ).append( XMLHandler.addTagValue( FieldNames.DATE_FORMAT, dateTimeFormat ) );\n    retval.append( INDENT ).append( XMLHandler.addTagValue( FieldNames.DATE_IN_FILE_NAME, dateInFileName ) );\n    retval.append( INDENT ).append( XMLHandler.addTagValue( FieldNames.TIME_IN_FILE_NAME, timeInFileName ) );\n\n    retval.append( \"    <fields>\" ).append( Const.CR );\n    for ( int i = 0; i < outputFields.size(); i++ ) {\n      OrcOutputField field = outputFields.get( i );\n\n      if ( field.getPentahoFieldName() != null && field.getPentahoFieldName().length() != 0 ) {\n        retval.append( \"      <field>\" ).append( Const.CR );\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"path\", field.getFormatFieldName() ) );\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"name\", field.getPentahoFieldName() ) );\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"type\", field.getOrcType().getId() ) );\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"precision\", field.getPrecision() ) );\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"scale\", field.getScale() ) );\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"nullable\", field.getAllowNull() ) );\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"default\", field.getDefaultValue() ) );\n        retval.append( \"      </field>\" ).append( Const.CR );\n      }\n    }\n    retval.append( \"    </fields>\" ).append( Const.CR );\n\n    return retval.toString();\n  }\n\n  @Override\n  public void readRep( Repository rep, IMetaStore metaStore, ObjectId id_step, 
List<DatabaseMeta> databases )\n    throws KettleException {\n    try {\n      filename = rep.getStepAttributeString( id_step, FieldNames.FILE_NAME );\n      overrideOutput = rep.getStepAttributeBoolean( id_step, FieldNames.OVERRIDE_OUTPUT );\n      compressionType = rep.getStepAttributeString( id_step, FieldNames.COMPRESSION );\n      stripeSize = Math.toIntExact( rep.getStepAttributeInteger( id_step, FieldNames.STRIPE_SIZE ) );\n      compressSize = Math.toIntExact( rep.getStepAttributeInteger( id_step, FieldNames.COMPRESS_SIZE ) );\n      rowsBetweenEntries = Math.toIntExact( rep.getStepAttributeInteger( id_step, FieldNames.ROWS_BETWEEN_ENTRIES ) );\n      dateTimeFormat = rep.getStepAttributeString( id_step, FieldNames.DATE_FORMAT );\n      dateInFileName = rep.getStepAttributeBoolean( id_step, FieldNames.DATE_IN_FILE_NAME );\n      timeInFileName = rep.getStepAttributeBoolean( id_step, FieldNames.TIME_IN_FILE_NAME );\n\n      // using the \"type\" column to get the number of field rows because \"type\" is guaranteed not to be null.\n      int nrfields = rep.countNrStepAttributes( id_step, \"type\" );\n\n      List<OrcOutputField> orcOutputFields = new ArrayList<>();\n      for ( int i = 0; i < nrfields; i++ ) {\n        OrcOutputField outputField = new OrcOutputField();\n\n        outputField.setFormatFieldName( rep.getStepAttributeString( id_step, i, \"path\" ) );\n        outputField.setPentahoFieldName( rep.getStepAttributeString( id_step, i, \"name\" ) );\n        outputField.setFormatType( rep.getStepAttributeString( id_step, i, \"type\" ) );\n        outputField.setPrecision( rep.getStepAttributeString( id_step, i, \"precision\" ) );\n        outputField.setScale( rep.getStepAttributeString( id_step, i, \"scale\" ) );\n        outputField.setAllowNull( rep.getStepAttributeString( id_step, i, \"nullable\" ) );\n        outputField.setDefaultValue( rep.getStepAttributeString( id_step, i, \"default\" ) );\n\n        orcOutputFields.add( outputField );\n     
 }\n      this.outputFields = orcOutputFields;\n    } catch ( Exception e ) {\n      throw new KettleException( \"Unexpected error reading step information from the repository\", e );\n    }\n  }\n\n  @Override\n  public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_transformation, ObjectId id_step )\n    throws KettleException {\n    try {\n      super.saveRep( rep, metaStore, id_transformation, id_step );\n      rep.saveStepAttribute( id_transformation, id_step, FieldNames.FILE_NAME, filename );\n      rep.saveStepAttribute( id_transformation, id_step, FieldNames.OVERRIDE_OUTPUT, overrideOutput );\n      rep.saveStepAttribute( id_transformation, id_step, FieldNames.COMPRESSION, compressionType );\n      rep.saveStepAttribute( id_transformation, id_step, FieldNames.STRIPE_SIZE, stripeSize );\n      rep.saveStepAttribute( id_transformation, id_step, FieldNames.COMPRESS_SIZE, compressSize );\n      rep.saveStepAttribute( id_transformation, id_step, FieldNames.ROWS_BETWEEN_ENTRIES, rowsBetweenEntries );\n      rep.saveStepAttribute( id_transformation, id_step, FieldNames.DATE_FORMAT, dateTimeFormat );\n      rep.saveStepAttribute( id_transformation, id_step, FieldNames.DATE_IN_FILE_NAME, dateInFileName );\n      rep.saveStepAttribute( id_transformation, id_step, FieldNames.TIME_IN_FILE_NAME, timeInFileName );\n\n      for ( int i = 0; i < outputFields.size(); i++ ) {\n        OrcOutputField field = outputFields.get( i );\n\n        rep.saveStepAttribute( id_transformation, id_step, i, \"path\", field.getFormatFieldName() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"name\", field.getPentahoFieldName() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"type\", field.getOrcType().getId() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"precision\", field.getPrecision() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"scale\", field.getScale() );\n        
rep.saveStepAttribute( id_transformation, id_step, i, \"nullable\", field.getAllowNull() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"default\", field.getDefaultValue() );\n      }\n\n    } catch ( Exception e ) {\n      throw new KettleException( \"Unable to save step information to the repository for id_step=\" + id_step, e );\n    }\n  }\n\n  @Override\n  public void resolve( Bowl bowl ) {\n    if ( filename != null && !filename.isEmpty() ) {\n      try {\n        String realFileName = getParentStepMeta().getParentTransMeta().environmentSubstitute( filename );\n        FileObject fileObject = KettleVFS.getInstance( bowl ).getFileObject( realFileName );\n        if ( AliasedFileObject.isAliasedFile( fileObject ) ) {\n          filename = ( (AliasedFileObject) fileObject ).getAELSafeURIString();\n        }\n      } catch ( KettleFileException e ) {\n        throw new RuntimeException( e );\n      }\n    }\n  }\n\n  public String getCompressionType() {\n    return StringUtil.isVariable( compressionType ) ? compressionType : getCompressionType( null ).toString();\n  }\n\n  public void setCompressionType( String value ) {\n    compressionType = StringUtil.isVariable( value ) ? 
value : parseFromToString( value, CompressionKind.values(), CompressionKind.NONE ).toString();\n  }\n\n  public CompressionKind getCompressionType(VariableSpace vspace ) {\n    return parseReplace( compressionType, vspace, str -> findCompressionType( str ), CompressionKind.NONE );\n  }\n\n  public String[] getCompressionTypes() {\n    return getStrings( CompressionKind.values() );\n  }\n\n  private CompressionKind findCompressionType( String str ) {\n    try {\n      return CompressionKind.valueOf( str );\n    } catch ( Throwable th ) {\n      return parseFromToString( str, CompressionKind.values(), CompressionKind.NONE );\n    }\n  }\n\n  protected static <T> String[] getStrings( T[] objects ) {\n    String[] names = new String[objects.length];\n    int i = 0;\n    for ( T obj : objects ) {\n      names[i++] = obj.toString();\n    }\n    return names;\n  }\n\n  protected static <T> T parseFromToString( String str, T[] values, T defaultValue ) {\n    if ( !Utils.isEmpty( str ) ) {\n      for ( T type : values ) {\n        if ( str.equalsIgnoreCase( type.toString() ) ) {\n          return type;\n        }\n      }\n    }\n    return defaultValue;\n  }\n\n  private <T> T parseReplace( String value, VariableSpace vspace, Function<String, T> parser, T defaultValue ) {\n    String replaced = vspace != null ? 
vspace.environmentSubstitute( value ) : value;\n    if ( !Utils.isEmpty( replaced ) ) {\n      try {\n        return parser.apply( replaced );\n      } catch ( Exception e ) {\n        // ignored\n      }\n    }\n    return defaultValue;\n  }\n\n  public String constructOutputFilename() {\n    String outputFileName = filename;\n    if ( dateTimeFormat != null && !dateTimeFormat.isEmpty() ) {\n      String dateTimeFormatPattern = getParentStepMeta().getParentTransMeta().environmentSubstitute( dateTimeFormat );\n      outputFileName += new SimpleDateFormat( dateTimeFormatPattern ).format( new Date() );\n    } else {\n      if ( dateInFileName ) {\n        outputFileName += '_' + new SimpleDateFormat( \"yyyyMMdd\" ).format( new Date() );\n      }\n      if ( timeInFileName ) {\n        outputFileName += '_' + new SimpleDateFormat( \"HHmmss\" ).format( new Date() );\n      }\n    }\n    return outputFileName;\n  }\n\n  protected static class FieldNames {\n    public static final String FILE_NAME = \"filename\";\n    public static final String OVERRIDE_OUTPUT = \"overrideOutput\";\n    public static final String COMPRESSION = \"compression\";\n    public static final String COMPRESS_SIZE = \"compressSize\";\n    public static final String INLINE_INDEXES = \"inlineIndexes\";\n    public static final String ROWS_BETWEEN_ENTRIES = \"rowsBetweenEntries\";\n    public static final String DATE_IN_FILE_NAME = \"dateInFileName\";\n    public static final String TIME_IN_FILE_NAME = \"timeInFileName\";\n    public static final String DATE_FORMAT = \"dateTimeFormat\";\n    public static final String STRIPE_SIZE = \"stripeSize\";\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/parquet/ParquetTypeConverter.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.parquet;\n\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.format.ParquetSpec;\n\n/**\n * Created by rmansoor on 8/8/2018.\n */\npublic class ParquetTypeConverter {\n\n\n  public static String convertToParquetType( String pdiType ) {\n    int pdiTypeId = -1;\n    for ( int i = 0; i < ValueMetaInterface.typeCodes.length; i++ ) {\n      if ( ValueMetaInterface.typeCodes[ i ].equals( pdiType ) ) {\n        pdiTypeId = i;\n        break;\n      }\n    }\n    return convertToParquetType( pdiTypeId );\n  }\n\n\n  public static String convertToParquetType( int pdiType ) {\n    switch ( pdiType ) {\n      case ValueMetaInterface.TYPE_INET:\n      case ValueMetaInterface.TYPE_STRING:\n        return ParquetSpec.DataType.UTF8.getName();\n      case ValueMetaInterface.TYPE_TIMESTAMP:\n        return ParquetSpec.DataType.TIMESTAMP_MILLIS.getName();\n      case ValueMetaInterface.TYPE_BINARY:\n        return ParquetSpec.DataType.BINARY.getName();\n      case ValueMetaInterface.TYPE_BIGNUMBER:\n        return ParquetSpec.DataType.DECIMAL.getName();\n      case ValueMetaInterface.TYPE_BOOLEAN:\n        return ParquetSpec.DataType.BOOLEAN.getName();\n      case ValueMetaInterface.TYPE_DATE:\n        return ParquetSpec.DataType.DATE.getName();\n      case ValueMetaInterface.TYPE_INTEGER:\n        return ParquetSpec.DataType.INT_64.getName();\n      case ValueMetaInterface.TYPE_NUMBER:\n        return ParquetSpec.DataType.DOUBLE.getName();\n      default:\n        return 
ParquetSpec.DataType.NULL.getName();\n    }\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/parquet/input/ParquetInputField.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.parquet.input;\n\nimport org.pentaho.big.data.kettle.plugins.formats.BaseFormatInputField;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.row.value.ValueMetaFactory;\nimport org.pentaho.hadoop.shim.api.format.IParquetInputField;\nimport org.pentaho.hadoop.shim.api.format.ParquetSpec;\n\npublic class ParquetInputField extends BaseFormatInputField implements IParquetInputField {\n  @Override\n  public void setParquetType( ParquetSpec.DataType parquetType ) {\n    setFormatType( parquetType.getId() );\n  }\n\n  @Injection( name = \"PARQUET_TYPE\", group = \"FIELDS\" )\n  @Override\n  public void setParquetType( String parquetType ) {\n    for ( ParquetSpec.DataType tmpType : ParquetSpec.DataType.values() ) {\n      if ( tmpType.getName().equalsIgnoreCase( parquetType ) ) {\n        setFormatType( tmpType.getId() );\n        break;\n      }\n    }\n  }\n\n  @Override\n  public ParquetSpec.DataType getParquetType() {\n    return ParquetSpec.DataType.getDataType( getFormatType() );\n  }\n\n  public String getTypeDesc() {\n    return ValueMetaFactory.getValueMetaName( getPentahoType() );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/parquet/input/ParquetInputMetaBase.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.parquet.input;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.pentaho.big.data.kettle.plugins.formats.FormatInputFile;\nimport org.pentaho.big.data.kettle.plugins.formats.parquet.ParquetTypeConverter;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.exception.KettlePluginException;\nimport org.pentaho.di.core.exception.KettleStepException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaBase;\nimport org.pentaho.di.core.row.value.ValueMetaFactory;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.vfs.AliasedFileObject;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.steps.file.BaseFileInputAdditionalField;\nimport org.pentaho.di.trans.steps.file.BaseFileInputMeta;\nimport org.pentaho.di.workarounds.ResolvableResource;\nimport org.pentaho.hadoop.shim.api.format.IParquetInputField;\nimport org.pentaho.hadoop.shim.api.format.ParquetSpec;\nimport 
org.pentaho.metastore.api.IMetaStore;\nimport org.w3c.dom.Node;\n\nimport java.util.List;\n\n/**\n * Parquet input meta step without Hadoop-dependent classes. Required for read meta in the spark native code.\n *\n * @author <alexander_buloichik@epam.com>\n */\n@SuppressWarnings( \"deprecation\" )\npublic abstract class ParquetInputMetaBase extends\n  BaseFileInputMeta<BaseFileInputAdditionalField, FormatInputFile, ParquetInputField> implements ResolvableResource {\n\n  /** If receiving input rows, should we pass through existing fields? */\n  @Injection( name = \"IGNORE_EMPTY_FOLDER\" )\n  boolean ignoreEmptyFolder = false;\n\n  public ParquetInputMetaBase() {\n    additionalOutputFields = new BaseFileInputAdditionalField();\n    inputFiles = new FormatInputFile();\n    inputFields = new ParquetInputField[ 0 ];\n  }\n\n  public boolean isIgnoreEmptyFolder() {\n    return ignoreEmptyFolder;\n  }\n\n  public void setIgnoreEmptyFolder( boolean ignoreEmptyFolder ) {\n    this.ignoreEmptyFolder = ignoreEmptyFolder;\n  }\n\n  public String getFilename() {\n    if ( inputFiles != null && inputFiles.fileName != null && inputFiles.fileName.length > 0 ) {\n      return inputFiles.fileName[0];\n    } else {\n      return null;\n    }\n  }\n\n  public String[] getFileNames() {\n    if ( inputFiles != null && inputFiles.fileName != null && inputFiles.fileName.length > 0 ) {\n      return inputFiles.fileName;\n    } else {\n      return null;\n    }\n  }\n\n  public void setFilename( String filename ) {\n    inputFiles.fileName[0] = filename;\n  }\n\n  public void setFilenames( String[] filenames ) {\n    inputFiles.fileName = filenames;\n  }\n\n  public ParquetInputField[] getInputFields() {\n    return inputFields;\n  }\n\n  public void setInputFields( ParquetInputField[] inputFields ) {\n    this.inputFields = inputFields;\n  }\n\n  public void setInputFields( List<ParquetInputField> inputFields ) {\n    this.inputFields = new ParquetInputField[ inputFields.size() ];\n    
this.inputFields = inputFields.toArray( this.inputFields );\n  }\n\n  @Override\n  public String getXML() {\n    StringBuilder retval = new StringBuilder( 1500 );\n\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"passing_through_fields\", inputFiles.passingThruFields ) );\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"ignore_empty_folder\", ignoreEmptyFolder ) );\n    retval.append( \"    <file>\" ).append( Const.CR );\n    //we need the equals by size arrays for inputFiles.fileName[i], inputFiles.fileMask[i], inputFiles.fileRequired[i], inputFiles.includeSubFolders[i]\n    //to prevent the ArrayIndexOutOfBoundsException\n    inputFiles.normalizeAllocation( inputFiles.fileName.length );\n    for ( int i = 0; i < inputFiles.fileName.length; i++ ) {\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( \"environment\", inputFiles.environment[ i ] ) );\n\n      if ( parentStepMeta != null && parentStepMeta.getParentTransMeta() != null ) {\n        parentStepMeta.getParentTransMeta().getNamedClusterEmbedManager().registerUrl( inputFiles.fileName[ i ] );\n      }\n\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( \"name\", inputFiles.fileName[ i ] ) );\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( \"filemask\", inputFiles.fileMask[ i ] ) );\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( \"exclude_filemask\", inputFiles.excludeFileMask[ i ] ) );\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( \"file_required\", inputFiles.fileRequired[ i ] ) );\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( \"include_subfolders\",\n        inputFiles.includeSubFolders[ i ] ) );\n    }\n    retval.append( \"    </file>\" ).append( Const.CR );\n\n    retval.append( \"    <fields>\" ).append( Const.CR );\n    for ( int i = 0; i < inputFields.length; i++ ) {\n      ParquetInputField field = inputFields[ i ];\n      retval.append( \"      <field>\" 
).append( Const.CR );\n      retval.append( \"        \" ).append( XMLHandler.addTagValue( \"path\", field.getFormatFieldName() ) );\n      retval.append( \"        \" ).append( XMLHandler.addTagValue( \"name\", field.getPentahoFieldName() ) );\n      retval.append( \"        \" ).append( XMLHandler.addTagValue( \"type\", field.getTypeDesc() ) );\n      ParquetSpec.DataType parquetType = field.getParquetType();\n      if ( parquetType != null  && !parquetType.equals( ParquetSpec.DataType.NULL ) ) {\n        retval.append( \"        \" )\n          .append( XMLHandler.addTagValue( \"parquet_type\", parquetType.getName() ) );\n      } else {\n        retval.append( \"        \" )\n            .append( XMLHandler.addTagValue( \"parquet_type\", ParquetTypeConverter.convertToParquetType( field.getTypeDesc() ) ) );\n      }\n      if ( field.getStringFormat() != null ) {\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"format\", field.getStringFormat() ) );\n      }\n      retval.append( \"      </field>\" ).append( Const.CR );\n    }\n    retval.append( \"    </fields>\" ).append( Const.CR );\n\n    return retval.toString();\n  }\n\n  @Override\n  public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_transformation, ObjectId id_step )\n    throws KettleException {\n    try {\n      rep.saveStepAttribute( id_transformation, id_step, \"ignore_empty_folder\", ignoreEmptyFolder );\n      rep.saveStepAttribute( id_transformation, id_step, \"passing_through_fields\", inputFiles.passingThruFields );\n      if ( !( inputFiles.fileName.length == 1 && inputFiles.fileName[0].equalsIgnoreCase( \"\" ) ) ) {\n        for ( int i = 0; i < inputFiles.fileName.length; i++ ) {\n          rep.saveStepAttribute( id_transformation, id_step, i, \"environment\", inputFiles.environment[i] );\n          rep.saveStepAttribute( id_transformation, id_step, i, \"file_name\", inputFiles.fileName[i] );\n          rep.saveStepAttribute( id_transformation, 
id_step, i, \"file_mask\", inputFiles.fileMask[i] );\n          rep.saveStepAttribute( id_transformation, id_step, i, \"exclude_file_mask\", inputFiles.excludeFileMask[i] );\n          rep.saveStepAttribute( id_transformation, id_step, i, \"file_required\", inputFiles.fileRequired[i] );\n          rep.saveStepAttribute( id_transformation, id_step, i, \"include_subfolders\", inputFiles.includeSubFolders[i] );\n        }\n      }\n\n      for ( int i = 0; i < inputFields.length; i++ ) {\n        ParquetInputField field = inputFields[ i ];\n\n        rep.saveStepAttribute( id_transformation, id_step, i, \"path\", field.getFormatFieldName() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"field_name\", field.getPentahoFieldName() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"field_type\", field.getTypeDesc() );\n        ParquetSpec.DataType parquetType = field.getParquetType();\n        if ( parquetType != null  && !parquetType.equals( ParquetSpec.DataType.NULL ) ) {\n          rep.saveStepAttribute( id_transformation, id_step, i, \"parquet_type\", parquetType.getName() );\n        } else {\n          rep.saveStepAttribute( id_transformation, id_step, i, \"parquet_type\", ParquetTypeConverter.convertToParquetType( field.getTypeDesc() ) );\n        }\n        if ( field.getStringFormat() != null ) {\n          rep.saveStepAttribute( id_transformation, id_step, i, \"format\", field.getStringFormat() );\n        }\n      }\n    } catch ( Exception e ) {\n      throw new KettleException( \"Unable to save step information to the repository for id_step=\" + id_step, e );\n    }\n  }\n\n  @Override\n  public void loadXML( Node stepnode, List<DatabaseMeta> databases, IMetaStore metaStore ) throws KettleXMLException {\n    Node filenode = XMLHandler.getSubNode( stepnode, \"file\" );\n    Node fields = XMLHandler.getSubNode( stepnode, \"fields\" );\n    int nrfiles = XMLHandler.countNodes( filenode, \"name\" );\n    int nrfields = 
XMLHandler.countNodes( fields, \"field\" );\n\n    String passThroughFields = XMLHandler.getTagValue( stepnode, \"passing_through_fields\" ) == null ? \"false\"\n      : XMLHandler.getTagValue( stepnode, \"passing_through_fields\" );\n    String skipIfNoFile = XMLHandler.getTagValue( stepnode, \"ignore_empty_folder\" ) == null ? \"false\"\n      : XMLHandler.getTagValue( stepnode, \"ignore_empty_folder\" );\n    allocateFiles( nrfiles );\n    inputFiles.passingThruFields = ValueMetaBase.convertStringToBoolean( passThroughFields );\n    ignoreEmptyFolder = ValueMetaBase.convertStringToBoolean( skipIfNoFile );\n    for ( int i = 0; i < nrfiles; i++ ) {\n      Node envnode = XMLHandler.getSubNodeByNr( filenode, \"environment\", i );\n      Node filenamenode = XMLHandler.getSubNodeByNr( filenode, \"name\", i );\n      Node filemasknode = XMLHandler.getSubNodeByNr( filenode, \"filemask\", i );\n      Node excludefilemasknode = XMLHandler.getSubNodeByNr( filenode, \"exclude_filemask\", i );\n      Node fileRequirednode = XMLHandler.getSubNodeByNr( filenode, \"file_required\", i );\n      Node includeSubFoldersnode = XMLHandler.getSubNodeByNr( filenode, \"include_subfolders\", i );\n      inputFiles.environment[ i ] = XMLHandler.getNodeValue( envnode );\n      inputFiles.fileName[ i ] = XMLHandler.getNodeValue( filenamenode );\n      inputFiles.fileMask[ i ] = XMLHandler.getNodeValue( filemasknode );\n      inputFiles.excludeFileMask[ i ] = XMLHandler.getNodeValue( excludefilemasknode );\n      inputFiles.fileRequired[ i ] = XMLHandler.getNodeValue( fileRequirednode );\n      inputFiles.includeSubFolders[ i ] = XMLHandler.getNodeValue( includeSubFoldersnode );\n    }\n\n    this.inputFields = new ParquetInputField[ nrfields ];\n    for ( int i = 0; i < nrfields; i++ ) {\n      Node fnode = XMLHandler.getSubNodeByNr( fields, \"field\", i );\n\n      ParquetInputField field = new ParquetInputField();\n      field.setFormatFieldName( XMLHandler.getTagValue( fnode, \"path\" ) 
);\n      field.setPentahoFieldName( XMLHandler.getTagValue( fnode, \"name\" ) );\n      field.setPentahoType( ValueMetaFactory.getIdForValueMeta( XMLHandler.getTagValue( fnode, \"type\" ) ) );\n      String parquetType = XMLHandler.getTagValue( fnode, \"parquet_type\" );\n      if ( parquetType != null && !parquetType.equalsIgnoreCase( \"null\" ) ) {\n        field.setParquetType( parquetType );\n      } else {\n        field.setParquetType( ParquetTypeConverter.convertToParquetType( field.getPentahoType() ) );\n      }\n\n      String stringFormat = XMLHandler.getTagValue( fnode, \"format\" );\n      field.setStringFormat( stringFormat == null ? \"\" : stringFormat );\n      this.inputFields[ i ] = field;\n    }\n  }\n\n  @Override\n  public void readRep( Repository rep, IMetaStore metaStore, ObjectId id_step, List<DatabaseMeta> databases )\n    throws KettleException {\n    try {\n      int nrfiles = rep.countNrStepAttributes( id_step, \"file_name\" );\n\n      allocateFiles( nrfiles );\n\n      inputFiles.passingThruFields = rep.getStepAttributeBoolean( id_step, \"passing_through_fields\" );\n      ignoreEmptyFolder = rep.getStepAttributeBoolean( id_step, \"ignore_empty_folder\" );\n      for ( int i = 0; i < nrfiles; i++ ) {\n        inputFiles.environment[ i ] = rep.getStepAttributeString( id_step, i, \"environment\" );\n        inputFiles.fileName[ i ] = rep.getStepAttributeString( id_step, i, \"file_name\" );\n        inputFiles.fileMask[ i ] = rep.getStepAttributeString( id_step, i, \"file_mask\" );\n        inputFiles.excludeFileMask[ i ] = rep.getStepAttributeString( id_step, i, \"exclude_file_mask\" );\n        inputFiles.fileRequired[ i ] = rep.getStepAttributeString( id_step, i, \"file_required\" );\n        if ( !YES.equalsIgnoreCase( inputFiles.fileRequired[ i ] ) ) {\n          inputFiles.fileRequired[ i ] = NO;\n        }\n        inputFiles.includeSubFolders[ i ] = rep.getStepAttributeString( id_step, i, \"include_subfolders\" );\n        if ( 
!YES.equalsIgnoreCase( inputFiles.includeSubFolders[ i ] ) ) {\n          inputFiles.includeSubFolders[ i ] = NO;\n        }\n      }\n\n      int nrfields = rep.countNrStepAttributes( id_step, \"field_name\" );\n      this.inputFields = new ParquetInputField[ nrfields ];\n      for ( int i = 0; i < nrfields; i++ ) {\n        ParquetInputField field = new ParquetInputField();\n        field.setFormatFieldName( rep.getStepAttributeString( id_step, i, \"path\" ) );\n        field.setPentahoFieldName( rep.getStepAttributeString( id_step, i, \"field_name\" ) );\n        field.setPentahoType( rep.getStepAttributeString( id_step, i, \"field_type\" ) );\n        String parquetType = rep.getStepAttributeString( id_step, i, \"parquet_type\" );\n        if ( parquetType != null && !parquetType.equalsIgnoreCase( \"null\" ) ) {\n          field.setParquetType( parquetType );\n        } else {\n          field.setParquetType( ParquetTypeConverter.convertToParquetType( field.getPentahoType() ) );\n        }\n        String stringFormat = rep.getStepAttributeString( id_step, i, \"format\" );\n        field.setStringFormat( stringFormat == null ? 
\"\" : stringFormat );\n        this.inputFields[ i ] = field;\n      }\n\n    } catch ( Exception e ) {\n      throw new KettleException( \"Unexpected error reading step information from the repository\", e );\n    }\n  }\n\n  public void allocateFiles( int nrFiles ) {\n    inputFiles.environment = new String[ nrFiles ];\n    inputFiles.fileName = new String[ nrFiles ];\n    inputFiles.fileMask = new String[ nrFiles ];\n    inputFiles.excludeFileMask = new String[ nrFiles ];\n    inputFiles.fileRequired = new String[ nrFiles ];\n    inputFiles.includeSubFolders = new String[ nrFiles ];\n  }\n  /**\n   * TODO: remove from base\n   */\n  @Override\n  public String getEncoding() {\n    return null;\n  }\n\n  @Override\n  public void setDefault() {\n    allocateFiles( 0 );\n    inputFields = new ParquetInputField[ 0 ];\n  }\n\n  @Override\n  public void resolve( Bowl bowl ) {\n    if ( inputFiles != null && inputFiles.fileName != null ) {\n      for ( int i = 0; i < inputFiles.fileName.length; i++ ) {\n        try {\n          String realFileName =\n            getParentStepMeta().getParentTransMeta().environmentSubstitute( inputFiles.fileName[ i ] );\n          FileObject fileObject = KettleVFS.getInstance( bowl ).getFileObject( realFileName );\n          if ( AliasedFileObject.isAliasedFile( fileObject ) ) {\n            inputFiles.fileName[ i ] = ( (AliasedFileObject) fileObject ).getAELSafeURIString();\n          }\n        } catch ( KettleFileException e ) {\n          throw new RuntimeException( e );\n        }\n      }\n    }\n  }\n\n  @Override\n  public void getFields( Bowl bowl, RowMetaInterface rowMeta, String origin, RowMetaInterface[] info, StepMeta nextStep,\n                         VariableSpace space, Repository repository, IMetaStore metaStore ) throws\n    KettleStepException {\n    try {\n      for ( int i = 0; i < inputFields.length; i++ ) {\n        IParquetInputField field = inputFields[ i ];\n        String value = space.environmentSubstitute( 
field.getPentahoFieldName() );\n        ValueMetaInterface v = ValueMetaFactory.createValueMeta( value,\n          field.getPentahoType() );\n        v.setOrigin( origin );\n        rowMeta.addValueMeta( v );\n      }\n    } catch ( KettlePluginException e ) {\n      throw new KettleStepException( \"Unable to create value type\", e );\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/parquet/output/ParquetOutputField.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.parquet.output;\n\nimport org.pentaho.big.data.kettle.plugins.formats.BaseFormatOutputField;\nimport org.pentaho.big.data.kettle.plugins.formats.parquet.ParquetTypeConverter;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.format.IParquetOutputField;\nimport org.pentaho.hadoop.shim.api.format.ParquetSpec;\n\npublic class ParquetOutputField extends BaseFormatOutputField implements IParquetOutputField {\n\n  @Override\n  public ParquetSpec.DataType getParquetType() {\n    for ( ParquetSpec.DataType type : ParquetSpec.DataType.values() ) {\n      if ( type.getId() == formatType ) {\n        return type;\n      }\n    }\n    return null;\n  }\n\n  public void setFormatType( ParquetSpec.DataType formatType ) {\n    this.formatType = formatType.getId();\n  }\n\n  @Injection( name = \"FIELD_PARQUET_TYPE\", group = \"FIELDS\" )\n  public void setFormatType( String typeName ) {\n    try  {\n      setFormatType( Integer.parseInt( typeName ) );\n    } catch ( NumberFormatException nfe ) {\n      for ( ParquetSpec.DataType parquetType : ParquetSpec.DataType.values() ) {\n        if ( parquetType.getName().equals( typeName ) ) {\n          this.formatType = parquetType.getId();\n          break;\n        }\n      }\n    }\n  }\n\n  @Injection( name = \"FIELD_TYPE\", group = \"FIELDS\" )\n  @Deprecated\n  public void setPentahoType( String typeName ) {\n    for ( int i = 0; i < ValueMetaInterface.typeCodes.length; i++ ) {\n      
if ( typeName.equals( ValueMetaInterface.typeCodes[ i ] ) ) {\n        setFormatType( ParquetTypeConverter.convertToParquetType( i ) );\n        break;\n      }\n    }\n  }\n\n  public boolean isDecimalType() {\n    return getParquetType().equals( ParquetSpec.DataType.DECIMAL );\n  }\n\n\n  @Override\n  public void setPrecision( String precision ) {\n    if ( ( precision == null ) || ( precision.trim().length() == 0 ) ) {\n      this.precision = isDecimalType() ? ParquetSpec.DEFAULT_DECIMAL_PRECISION : 0;\n    } else {\n      this.precision = Integer.valueOf( precision );\n      if ( ( this.precision <= 0 ) && isDecimalType() ) {\n        this.precision = ParquetSpec.DEFAULT_DECIMAL_PRECISION;\n      }\n    }\n  }\n\n  @Override\n  public void setScale( String scale ) {\n    if ( ( scale == null ) || ( scale.trim().length() == 0 ) ) {\n      this.scale = isDecimalType() ? ParquetSpec.DEFAULT_DECIMAL_SCALE : 0;\n    } else {\n      this.scale = Integer.valueOf( scale );\n      if ( ( this.scale < 0 ) ) {\n        this.scale = isDecimalType() ? ParquetSpec.DEFAULT_DECIMAL_SCALE : 0;\n      }\n    }\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/java/org/pentaho/big/data/kettle/plugins/formats/parquet/output/ParquetOutputMetaBase.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.formats.parquet.output;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.parquet.hadoop.metadata.CompressionCodecName;\nimport org.pentaho.big.data.kettle.plugins.formats.parquet.ParquetTypeConverter;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.injection.InjectionDeep;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.vfs.AliasedFileObject;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\nimport org.pentaho.di.workarounds.ResolvableResource;\nimport org.pentaho.hadoop.shim.api.format.ParquetSpec;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.w3c.dom.Node;\n\nimport java.text.SimpleDateFormat;\nimport java.util.ArrayList;\nimport java.util.Date;\nimport java.util.List;\nimport java.util.function.Function;\n\n/**\n * Parquet output meta step without Hadoop-dependent classes. 
Required for read meta in the spark native code.\n *\n * @author <alexander_buloichik@epam.com>\n */\npublic abstract class ParquetOutputMetaBase extends BaseStepMeta implements StepMetaInterface, ResolvableResource {\n\n  private static final Class<?> PKG = ParquetOutputMetaBase.class;\n\n  @Injection( name = \"COMPRESSION\" )\n  public String compressionType;\n  @Injection( name = \"PARQUET_VERSION\" )\n  public String parquetVersion;\n  @Injection( name = \"ROW_GROUP_SIZE\" )\n  public String rowGroupSize;\n  @Injection( name = \"DATA_PAGE_SIZE\" )\n  public String dataPageSize;\n  @Injection( name = \"ENABLE_DICTIONARY\" )\n  public boolean enableDictionary;\n  @Injection( name = \"DICT_PAGE_SIZE\" )\n  public String dictPageSize;\n  @Injection( name = \"OVERRIDE_OUTPUT\" )\n  public boolean overrideOutput;\n\n  /** Flag: add the date in the filename */\n  @Injection( name = \"INC_DATE_IN_FILENAME\" )\n  private boolean dateInFilename;\n\n  /** Flag: add the time in the filename */\n  @Injection( name = \"INC_TIME_IN_FILENAME\" )\n  private boolean timeInFilename;\n\n  @Injection( name = \"DATE_FORMAT\" )\n  private String dateTimeFormat;\n\n  /** The file extention in case of a generated filename */\n  @Injection( name = \"EXTENSION\" )\n  private String extension;\n\n  @Injection( name = \"FILENAME\", group = \"FILENAME_LINES\" )\n  public String filename;\n\n  @InjectionDeep\n  private List<ParquetOutputField> outputFields = new ArrayList<ParquetOutputField>();\n\n  @Override\n  public void setDefault() {\n    outputFields = new ArrayList<ParquetOutputField>();\n    dictPageSize = String.valueOf( 1024 );\n    extension = \"parquet\";\n  }\n\n  public String getFilename() {\n    return filename;\n  }\n\n  public void setFilename( String filename ) {\n    this.filename = filename;\n  }\n\n  public boolean isEnableDictionary() {\n    return enableDictionary;\n  }\n\n  public void setEnableDictionary( boolean enableDictionary ) {\n    this.enableDictionary = 
enableDictionary;\n  }\n\n  public boolean isOverrideOutput() {\n    return overrideOutput;\n  }\n\n  public void setOverrideOutput( boolean overrideOutput ) {\n    this.overrideOutput = overrideOutput;\n  }\n\n  public boolean isDateInFilename() {\n    return dateInFilename;\n  }\n\n  public void setDateInFilename( boolean dateInFilename ) {\n    this.dateInFilename = dateInFilename;\n  }\n\n  public boolean isTimeInFilename() {\n    return timeInFilename;\n  }\n\n  public void setTimeInFilename( boolean timeInFilename ) {\n    this.timeInFilename = timeInFilename;\n  }\n\n  public String getDateTimeFormat() {\n    return dateTimeFormat;\n  }\n\n  public void setDateTimeFormat( String dateTimeFormat ) {\n    this.dateTimeFormat = dateTimeFormat;\n  }\n\n  public String getExtension() {\n    return extension;\n  }\n\n  public void setExtension( String extension ) {\n    this.extension = extension;\n  }\n\n  public List<ParquetOutputField> getOutputFields() {\n    return outputFields;\n  }\n\n  public void setOutputFields( List<ParquetOutputField> outputFields ) {\n    this.outputFields = outputFields;\n  }\n\n  @Override\n  public void loadXML( Node stepnode, List<DatabaseMeta> databases, IMetaStore metaStore ) throws KettleXMLException {\n    readData( stepnode, metaStore );\n  }\n\n  private void readData( Node stepnode, IMetaStore metastore ) throws KettleXMLException {\n    try {\n      filename = XMLHandler.getTagValue( stepnode, \"filename\" );\n      overrideOutput = \"Y\".equalsIgnoreCase( ( XMLHandler.getTagValue( stepnode, \"overrideOutput\" ) ) );\n      enableDictionary = \"Y\".equalsIgnoreCase( ( XMLHandler.getTagValue( stepnode, \"enableDictionary\" ) ) );\n      compressionType = XMLHandler.getTagValue( stepnode, \"compression\" );\n      parquetVersion = XMLHandler.getTagValue( stepnode, \"parquetVersion\" );\n      rowGroupSize = XMLHandler.getTagValue( stepnode, \"rowGroupSize\" );\n      dataPageSize = XMLHandler.getTagValue( stepnode, 
\"dataPageSize\" );\n      dictPageSize = XMLHandler.getTagValue( stepnode, \"dictPageSize\" );\n      extension = XMLHandler.getTagValue( stepnode, \"extension\" );\n      dateInFilename = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( stepnode, \"dateInFilename\" ) );\n      timeInFilename = \"Y\".equalsIgnoreCase( ( XMLHandler.getTagValue( stepnode, \"timeInFilename\" ) ) );\n      dateTimeFormat = XMLHandler.getTagValue( stepnode, \"dateTimeFormat\" );\n\n      Node fields = XMLHandler.getSubNode( stepnode, \"fields\" );\n      int nrfields = XMLHandler.countNodes( fields, \"field\" );\n      List<ParquetOutputField> parquetOutputFields = new ArrayList<>();\n      for ( int i = 0; i < nrfields; i++ ) {\n        Node fnode = XMLHandler.getSubNodeByNr( fields, \"field\", i );\n        ParquetOutputField outputField = new ParquetOutputField();\n        outputField.setFormatFieldName( XMLHandler.getTagValue( fnode, \"path\" ) );\n        outputField.setPentahoFieldName( XMLHandler.getTagValue( fnode, \"name\" ) );\n        int parquetTypeId = getParquetTypeId( XMLHandler.getTagValue( fnode, \"type\" ) );\n        outputField.setFormatType( parquetTypeId );\n        outputField.setPrecision( XMLHandler.getTagValue( fnode, \"precision\" ) );\n        outputField.setScale( XMLHandler.getTagValue( fnode, \"scale\" ) );\n        outputField.setAllowNull( \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( fnode, \"nullable\" ) ) );\n        outputField.setDefaultValue( XMLHandler.getTagValue( fnode, \"default\" ) );\n        parquetOutputFields.add( outputField );\n      }\n      this.outputFields = parquetOutputFields;\n    } catch ( Exception e ) {\n      throw new KettleXMLException( \"Unable to load step info from XML\", e );\n    }\n  }\n\n  @Override\n  public String getXML() {\n    StringBuffer retval = new StringBuffer( 800 );\n\n    if ( parentStepMeta != null && parentStepMeta.getParentTransMeta() != null ) {\n      
parentStepMeta.getParentTransMeta().getNamedClusterEmbedManager().registerUrl( filename );\n    }\n\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"filename\", filename ) );\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"overrideOutput\", overrideOutput ) );\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"compression\", compressionType ) );\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"parquetVersion\", parquetVersion ) );\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"enableDictionary\", enableDictionary ) );\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"dictPageSize\", dictPageSize ) );\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"rowGroupSize\", rowGroupSize ) );\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"dataPageSize\", dataPageSize ) );\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"extension\", extension ) );\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"dateInFilename\", dateInFilename ) );\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"timeInFilename\", timeInFilename ) );\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"dateTimeFormat\", dateTimeFormat ) );\n\n    retval.append( \"    <fields>\" ).append( Const.CR );\n    for ( int i = 0; i < outputFields.size(); i++ ) {\n      ParquetOutputField field = outputFields.get( i );\n\n      if ( field.getPentahoFieldName() != null && field.getPentahoFieldName().length() != 0 ) {\n        retval.append( \"      <field>\" ).append( Const.CR );\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"path\", field.getFormatFieldName() ) );\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"name\", field.getPentahoFieldName() ) );\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"type\", field.getFormatType() ) );\n        retval.append( 
\"        \" ).append( XMLHandler.addTagValue( \"precision\", field.getPrecision() ) );\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"scale\", field.getScale() ) );\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"nullable\", field.getAllowNull() ) );\n        retval.append( \"        \" ).append( XMLHandler.addTagValue( \"default\", field.getDefaultValue() ) );\n        retval.append( \"      </field>\" ).append( Const.CR );\n      }\n    }\n    retval.append( \"    </fields>\" ).append( Const.CR );\n\n    return retval.toString();\n  }\n\n  private int getParquetTypeId( String savedType ) {\n    int parquetTypeId = 0;\n    try {\n      parquetTypeId = Integer.parseInt( savedType );\n    } catch ( NumberFormatException e ) {\n      String parquetTypeName = ParquetTypeConverter.convertToParquetType( savedType );\n      for ( ParquetSpec.DataType parquetType : ParquetSpec.DataType.values() ) {\n        if ( parquetType.getName().equals( parquetTypeName ) ) {\n          parquetTypeId = parquetType.getId();\n          break;\n        }\n      }\n    }\n    return parquetTypeId;\n  }\n\n  @Override\n  public void readRep( Repository rep, IMetaStore metaStore, ObjectId id_step, List<DatabaseMeta> databases )\n    throws KettleException {\n    try {\n      filename = rep.getStepAttributeString( id_step, \"filename\" );\n      overrideOutput = rep.getStepAttributeBoolean( id_step, \"overrideOutput\" );\n      compressionType = rep.getStepAttributeString( id_step, \"compression\" );\n      parquetVersion = rep.getStepAttributeString( id_step, \"parquetVersion\" );\n      enableDictionary = rep.getStepAttributeBoolean( id_step, \"enableDictionary\" );\n      dictPageSize = rep.getStepAttributeString( id_step, \"dictPageSize\" );\n      rowGroupSize = rep.getStepAttributeString( id_step, \"rowGroupSize\" );\n      dataPageSize = rep.getStepAttributeString( id_step, \"dataPageSize\" );\n      extension = 
rep.getStepAttributeString( id_step, \"extension\" );\n      dateInFilename = rep.getStepAttributeBoolean( id_step, \"dateInFilename\" );\n      timeInFilename = rep.getStepAttributeBoolean( id_step, \"timeInFilename\" );\n      dateTimeFormat = rep.getStepAttributeString( id_step, \"dateTimeFormat\" );\n\n      // using the \"type\" column to get the number of field rows because \"type\" is guaranteed not to be null.\n      int nrfields = rep.countNrStepAttributes( id_step, \"type\" );\n\n      List<ParquetOutputField> parquetOutputFields = new ArrayList<>();\n      for ( int i = 0; i < nrfields; i++ ) {\n        ParquetOutputField outputField = new ParquetOutputField();\n        outputField.setFormatFieldName( rep.getStepAttributeString( id_step, i, \"path\" ) );\n        outputField.setPentahoFieldName( rep.getStepAttributeString( id_step, i, \"name\" ) );\n        int parquetTypeId = getParquetTypeId( rep.getStepAttributeString( id_step, i, \"type\" ) );\n        outputField.setFormatType( parquetTypeId );\n        outputField.setPrecision( rep.getStepAttributeString( id_step, i, \"precision\" ) );\n        outputField.setScale( rep.getStepAttributeString( id_step, i, \"scale\" ) );\n        outputField.setAllowNull( rep.getStepAttributeBoolean( id_step, i, \"nullable\" ) );\n        outputField.setDefaultValue( rep.getStepAttributeString( id_step, i, \"default\" ) );\n        parquetOutputFields.add( outputField );\n      }\n      this.outputFields = parquetOutputFields;\n    } catch ( Exception e ) {\n      throw new KettleException( \"Unexpected error reading step information from the repository\", e );\n    }\n  }\n\n  @Override\n  public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_transformation, ObjectId id_step )\n    throws KettleException {\n    try {\n      rep.saveStepAttribute( id_transformation, id_step, \"filename\", filename );\n      rep.saveStepAttribute( id_transformation, id_step, \"overrideOutput\", overrideOutput );\n    
  rep.saveStepAttribute( id_transformation, id_step, \"compression\", compressionType );\n      rep.saveStepAttribute( id_transformation, id_step, \"parquetVersion\", parquetVersion );\n      rep.saveStepAttribute( id_transformation, id_step, \"enableDictionary\", enableDictionary );\n      rep.saveStepAttribute( id_transformation, id_step, \"dictPageSize\", dictPageSize );\n      rep.saveStepAttribute( id_transformation, id_step, \"rowGroupSize\", rowGroupSize );\n      rep.saveStepAttribute( id_transformation, id_step, \"dataPageSize\", dataPageSize );\n      rep.saveStepAttribute( id_transformation, id_step, \"extension\", extension );\n      rep.saveStepAttribute( id_transformation, id_step, \"dateInFilename\", dateInFilename );\n      rep.saveStepAttribute( id_transformation, id_step, \"timeInFilename\", timeInFilename );\n      rep.saveStepAttribute( id_transformation, id_step, \"dateTimeFormat\", dateTimeFormat );\n      for ( int i = 0; i < outputFields.size(); i++ ) {\n        ParquetOutputField field = outputFields.get( i );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"path\", field.getFormatFieldName() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"name\", field.getPentahoFieldName() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"type\", field.getFormatType() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"precision\", field.getPrecision() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"scale\", field.getScale() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"nullable\", field.getAllowNull() );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"default\", field.getDefaultValue() );\n      }\n    } catch ( Exception e ) {\n      throw new KettleException( \"Unable to save step information to the repository for id_step=\" + id_step, e );\n    }\n  }\n\n  @Override\n  public void resolve( Bowl bowl ) {\n    if ( filename != 
null && !filename.isEmpty() ) {\n      try {\n        String realFileName = getParentStepMeta().getParentTransMeta().environmentSubstitute( filename );\n        FileObject fileObject = KettleVFS.getInstance( bowl ).getFileObject( realFileName );\n        if ( AliasedFileObject.isAliasedFile( fileObject ) ) {\n          filename = ( (AliasedFileObject) fileObject ).getAELSafeURIString();\n        }\n      } catch ( KettleFileException e ) {\n        throw new RuntimeException( e );\n      }\n    }\n  }\n\n  public String constructOutputFilename() {\n    String outputFileName = filename;\n    if ( dateTimeFormat != null && !dateTimeFormat.isEmpty() ) {\n      String dateTimeFormatPattern = getParentStepMeta().getParentTransMeta().environmentSubstitute( dateTimeFormat );\n      outputFileName += new SimpleDateFormat( dateTimeFormatPattern ).format( new Date() );\n    } else {\n      if ( dateInFilename ) {\n        outputFileName += '_' + new SimpleDateFormat( \"yyyyMMdd\" ).format( new Date() );\n      }\n      if ( timeInFilename ) {\n        outputFileName += '_' + new SimpleDateFormat( \"HHmmss\" ).format( new Date() );\n      }\n    }\n    if ( extension != null && !extension.isEmpty() ) {\n      outputFileName += '.' + extension;\n    }\n    return outputFileName;\n  }\n\n  public int getRowGroupSize( VariableSpace vspace ) {\n    return parseReplace( rowGroupSize, vspace, str -> Integer.parseInt( str ), 0 );\n  }\n\n  protected <T> T parseReplace( String value, VariableSpace vspace, Function<String, T> parser, T defaultValue ) {\n    String replaced = vspace != null ? 
vspace.environmentSubstitute( value ) : value;\n    if ( !Utils.isEmpty( replaced ) ) {\n      try {\n        return parser.apply( replaced );\n      } catch ( Exception e ) {\n        // ignored\n      }\n    }\n    return defaultValue;\n  }\n\n  public String getRowGroupSize() {\n    return rowGroupSize;\n  }\n\n  public void setRowGroupSize( String value ) {\n    rowGroupSize = value;\n  }\n\n  public String getCompressionType() {\n    return StringUtil.isVariable( compressionType ) ? compressionType : getCompressionType( null ).toString();\n  }\n\n  public void setCompressionType( String value ) {\n    compressionType =\n      StringUtil.isVariable( value ) ? value : parseFromToString( value, CompressionCodecName.values(), CompressionCodecName.UNCOMPRESSED ).name();\n  }\n\n  public CompressionCodecName getCompressionType( VariableSpace vspace ) {\n    return parseReplace( compressionType, vspace, str -> findCompressionType( str ), CompressionCodecName.UNCOMPRESSED );\n  }\n\n  public String getParquetVersion() {\n    return StringUtil.isVariable( parquetVersion ) ? parquetVersion : getParquetVersion( null ).toString();\n  }\n\n  public void setParquetVersion( String value ) {\n    parquetVersion =\n      StringUtil.isVariable( value ) ? 
value : parseFromToString( value, ParquetVersion.values(), ParquetVersion.PARQUET_1 ).name();\n  }\n\n  public ParquetVersion getParquetVersion( VariableSpace vspace ) {\n    return parseReplace( parquetVersion, vspace, str -> findParquetVersion( str ), ParquetVersion.PARQUET_1 );\n  }\n\n  public int getDataPageSize( VariableSpace vspace ) {\n    return parseReplace( dataPageSize, vspace, s -> Integer.parseInt( s ), 0 );\n  }\n\n  public String getDataPageSize() {\n    return dataPageSize;\n  }\n\n  public void setDataPageSize( String dataPageSize ) {\n    this.dataPageSize = dataPageSize;\n  }\n\n  public int getDictPageSize( VariableSpace vspace ) {\n    return parseReplace( dictPageSize, vspace, s -> Integer.parseInt( s ), 0 );\n  }\n\n  public String getDictPageSize() {\n    return dictPageSize;\n  }\n\n  public void setDictPageSize( String dictPageSize ) {\n    this.dictPageSize = dictPageSize;\n  }\n\n  public String[] getCompressionTypes() {\n    return getStrings( CompressionCodecName.values() );\n  }\n\n  public String[] getVersionTypes() {\n    return getStrings( ParquetVersion.values() );\n  }\n\n  private CompressionCodecName findCompressionType( String str ) {\n    try {\n      return CompressionCodecName.valueOf( str );\n    } catch ( Throwable th ) {\n      return parseFromToString( str, CompressionCodecName.values(), CompressionCodecName.UNCOMPRESSED );\n    }\n  }\n\n  private ParquetVersion findParquetVersion( String str ) {\n    try {\n      return ParquetVersion.valueOf( str );\n    } catch ( Throwable th ) {\n      return parseFromToString( str, ParquetVersion.values(), ParquetVersion.PARQUET_1 );\n    }\n  }\n\n  public static enum ParquetVersion {\n    PARQUET_1( \"Parquet 1.0\" ), PARQUET_2( \"Parquet 2.0\" );\n\n    private final String uiName;\n\n    private ParquetVersion( String name ) {\n      this.uiName = name;\n    }\n\n    @Override\n    public String toString() {\n      return uiName;\n    }\n  }\n\n  protected static 
<T> String[] getStrings( T[] objects ) {\n    String[] names = new String[ objects.length ];\n    int i = 0;\n    for ( T obj : objects ) {\n      names[ i++ ] = obj.toString();\n    }\n    return names;\n  }\n\n  protected static <T> T parseFromToString( String str, T[] values, T defaultValue ) {\n    if ( !Utils.isEmpty( str ) ) {\n      for ( T type : values ) {\n        if ( str.equalsIgnoreCase( type.toString() ) ) {\n          return type;\n        }\n      }\n    }\n    return defaultValue;\n  }\n\n  private static String getMsg( String key ) {\n    return BaseMessages.getString( PKG, key );\n  }\n}\n\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/main/resources/org/pentaho/big/data/kettle/plugins/formats/parquet/output/messages/messages_en_US.properties",
    "content": "ParquetOutput.EncodingType.PLAIN=Plain\nParquetOutput.EncodingType.DICTIONARY=Dictionary\nParquetOutput.EncodingType.BIT_PACKED=Bit packed\nParquetOutput.EncodingType.RLE=RLE\n\nParquetOutput.CompressionType.NONE=None\n\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/test/java/org/pentaho/big/data/kettle/plugins/formats/orc/OrcInputFieldTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.orc;\n\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.format.OrcSpec;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.fail;\n\npublic class OrcInputFieldTest {\n\n  @Test\n  public void testSetOrcTypeByEnumToString() {\n    // Test setting ORC type using enum.toString() format\n    OrcInputField field = new OrcInputField();\n    field.setOrcType( \"BIGINT\" );\n    assertEquals( \"BIGINT type should be set correctly\", OrcSpec.DataType.BIGINT, field.getOrcType() );\n    assertEquals( \"Format type ID should match BIGINT\", OrcSpec.DataType.BIGINT.getId(), field.getFormatType() );\n  }\n\n  @Test\n  public void testSetOrcTypeByName() {\n    // Test setting ORC type using getName() format\n    OrcInputField field = new OrcInputField();\n    field.setOrcType( \"BigInt\" );\n    assertEquals( \"BigInt name should be set correctly\", OrcSpec.DataType.BIGINT, field.getOrcType() );\n    assertEquals( \"Format type ID should match BIGINT\", OrcSpec.DataType.BIGINT.getId(), field.getFormatType() );\n  }\n\n  @Test\n  public void testSetOrcTypeCaseInsensitive() {\n    // Test that the method is case-insensitive\n    OrcInputField field1 = new OrcInputField();\n    field1.setOrcType( \"string\" );\n    assertEquals( \"Lowercase 'string' should work\", OrcSpec.DataType.STRING, field1.getOrcType() );\n\n    OrcInputField field2 = new OrcInputField();\n    field2.setOrcType( \"STRING\" );\n    assertEquals( \"Uppercase 'STRING' should work\", OrcSpec.DataType.STRING, 
field2.getOrcType() );\n\n    OrcInputField field3 = new OrcInputField();\n    field3.setOrcType( \"StRiNg\" );\n    assertEquals( \"Mixed case 'StRiNg' should work\", OrcSpec.DataType.STRING, field3.getOrcType() );\n  }\n\n  @Test\n  public void testSetOrcTypeAllDataTypes() {\n    // Test all ORC data types can be set correctly\n    for ( OrcSpec.DataType dataType : OrcSpec.DataType.values() ) {\n      // Test by enum toString()\n      OrcInputField field1 = new OrcInputField();\n      field1.setOrcType( dataType.toString() );\n      assertEquals( \"Setting by toString: \" + dataType, dataType, field1.getOrcType() );\n\n      // Test by name\n      OrcInputField field2 = new OrcInputField();\n      field2.setOrcType( dataType.getName() );\n      assertEquals( \"Setting by name: \" + dataType.getName(), dataType, field2.getOrcType() );\n    }\n  }\n\n  @Test\n  public void testSetOrcTypeInvalidValue() {\n    // Test that invalid type doesn't change the field\n    OrcInputField field = new OrcInputField();\n    int initialFormatType = field.getFormatType();\n\n    field.setOrcType( \"INVALID_TYPE\" );\n    assertEquals( \"Invalid type should not change formatType\", initialFormatType, field.getFormatType() );\n  }\n\n  @Test\n  public void testSetOrcTypeEmptyString() {\n    // Test that empty string doesn't change the field\n    OrcInputField field = new OrcInputField();\n    int initialFormatType = field.getFormatType();\n\n    field.setOrcType( \"\" );\n    assertEquals( \"Empty string should not change formatType\", initialFormatType, field.getFormatType() );\n  }\n\n  @Test\n  public void testSetOrcTypeNull() {\n    // Test that null doesn't throw exception and doesn't change the field\n    OrcInputField field = new OrcInputField();\n    int initialFormatType = field.getFormatType();\n\n    try {\n      field.setOrcType( (String) null );\n      assertEquals( \"Null should not change formatType\", initialFormatType, field.getFormatType() );\n    } catch ( 
NullPointerException e ) {\n      fail( \"Null should not throw NullPointerException\" );\n    }\n  }\n\n  @Test\n  public void testSetOrcTypeWithWhitespace() {\n    // Test that types with leading/trailing whitespace don't match\n    OrcInputField field = new OrcInputField();\n    int initialFormatType = field.getFormatType();\n\n    field.setOrcType( \" STRING \" );\n    assertEquals( \"Type with whitespace should not match\", initialFormatType, field.getFormatType() );\n  }\n\n  @Test\n  public void testSetOrcTypeTimestamp() {\n    // Test specific type - TIMESTAMP\n    OrcInputField field = new OrcInputField();\n    field.setOrcType( \"Timestamp\" );\n    assertEquals( \"Timestamp type should be set correctly\", OrcSpec.DataType.TIMESTAMP, field.getOrcType() );\n  }\n\n  @Test\n  public void testSetOrcTypeDate() {\n    // Test specific type - DATE\n    OrcInputField field = new OrcInputField();\n    field.setOrcType( \"DATE\" );\n    assertEquals( \"Date type should be set correctly\", OrcSpec.DataType.DATE, field.getOrcType() );\n  }\n\n  @Test\n  public void testSetOrcTypeDecimal() {\n    // Test specific type - DECIMAL\n    OrcInputField field = new OrcInputField();\n    field.setOrcType( \"decimal\" );\n    assertEquals( \"Decimal type should be set correctly\", OrcSpec.DataType.DECIMAL, field.getOrcType() );\n  }\n\n  @Test\n  public void testSetOrcTypeBoolean() {\n    // Test specific type - BOOLEAN\n    OrcInputField field = new OrcInputField();\n    field.setOrcType( \"Boolean\" );\n    assertEquals( \"Boolean type should be set correctly\", OrcSpec.DataType.BOOLEAN, field.getOrcType() );\n  }\n\n  @Test\n  public void testSetOrcTypeOverwrite() {\n    // Test that setting type multiple times overwrites previous value\n    OrcInputField field = new OrcInputField();\n\n    field.setOrcType( \"STRING\" );\n    assertEquals( \"First type should be STRING\", OrcSpec.DataType.STRING, field.getOrcType() );\n\n    field.setOrcType( \"INTEGER\" );\n    
assertEquals( \"Second type should overwrite to INTEGER\", OrcSpec.DataType.INTEGER, field.getOrcType() );\n\n    field.setOrcType( \"DOUBLE\" );\n    assertEquals( \"Third type should overwrite to DOUBLE\", OrcSpec.DataType.DOUBLE, field.getOrcType() );\n  }\n\n  @Test\n  public void testSetOrcTypeIntegerBothFormats() {\n    // Test that INTEGER type can be set using both \"Int\" (display name) and \"INTEGER\" (enum name)\n\n    // Test with \"Int\" (display name from getName())\n    OrcInputField field1 = new OrcInputField();\n    field1.setOrcType( \"Int\" );\n    assertEquals( \"Setting with 'Int' should result in INTEGER type\", OrcSpec.DataType.INTEGER, field1.getOrcType() );\n    assertEquals( \"Format type ID should match INTEGER\", OrcSpec.DataType.INTEGER.getId(), field1.getFormatType() );\n\n    // Test with \"INTEGER\" (enum toString())\n    OrcInputField field2 = new OrcInputField();\n    field2.setOrcType( \"INTEGER\" );\n    assertEquals( \"Setting with 'INTEGER' should result in INTEGER type\", OrcSpec.DataType.INTEGER, field2.getOrcType() );\n    assertEquals( \"Format type ID should match INTEGER\", OrcSpec.DataType.INTEGER.getId(), field2.getFormatType() );\n\n    // Test with lowercase \"int\"\n    OrcInputField field3 = new OrcInputField();\n    field3.setOrcType( \"int\" );\n    assertEquals( \"Setting with lowercase 'int' should result in INTEGER type\", OrcSpec.DataType.INTEGER, field3.getOrcType() );\n    assertEquals( \"Format type ID should match INTEGER\", OrcSpec.DataType.INTEGER.getId(), field3.getFormatType() );\n\n    // Test with lowercase \"integer\"\n    OrcInputField field4 = new OrcInputField();\n    field4.setOrcType( \"integer\" );\n    assertEquals( \"Setting with lowercase 'integer' should result in INTEGER type\", OrcSpec.DataType.INTEGER, field4.getOrcType() );\n    assertEquals( \"Format type ID should match INTEGER\", OrcSpec.DataType.INTEGER.getId(), field4.getFormatType() );\n\n    // Test with mixed case \"InTeGeR\"\n 
   OrcInputField field5 = new OrcInputField();\n    field5.setOrcType( \"InTeGeR\" );\n    assertEquals( \"Setting with mixed case 'InTeGeR' should result in INTEGER type\", OrcSpec.DataType.INTEGER, field5.getOrcType() );\n    assertEquals( \"Format type ID should match INTEGER\", OrcSpec.DataType.INTEGER.getId(), field5.getFormatType() );\n\n    // Verify both result in the same type\n    assertEquals( \"Both 'Int' and 'INTEGER' should result in the same type\", field1.getOrcType(), field2.getOrcType() );\n    assertEquals( \"Both should have the same format type ID\", field1.getFormatType(), field2.getFormatType() );\n  }\n}\n\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/test/java/org/pentaho/big/data/kettle/plugins/formats/orc/input/OrcInputMetaBaseTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.orc.input;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Matchers.anyString;\nimport static org.mockito.Mockito.doReturn;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.spy;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.big.data.kettle.plugins.formats.FormatInputFile;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.steps.named.cluster.NamedClusterEmbedManager;\n\npublic class OrcInputMetaBaseTest {\n\n  private static final String FILE_NAME_VALID_PATH = \"path/to/file\";\n\n  private OrcInputMetaBase inputMeta;\n  private VariableSpace variableSpace;\n\n  @Before\n  public void setUp() throws Exception {\n    NamedClusterEmbedManager  manager = mock( NamedClusterEmbedManager.class );\n\n    TransMeta parentTransMeta = mock( TransMeta.class );\n    doReturn( manager ).when( parentTransMeta ).getNamedClusterEmbedManager();\n\n    StepMeta parentStepMeta = mock( StepMeta.class );\n    doReturn( parentTransMeta ).when( parentStepMeta ).getParentTransMeta();\n\n    inputMeta = new OrcInputMetaBase() {\n      @Override\n      public StepDataInterface getStepData() {\n        return null;\n      }     \n      @Override\n      public StepInterface getStep( StepMeta stepMeta, 
StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta, Trans trans ) {\n        return null;\n      }\n    };\n\n    inputMeta.setParentStepMeta( parentStepMeta );\n    inputMeta = spy( inputMeta );\n    variableSpace = mock( VariableSpace.class );\n\n    doReturn( \"<def>\" ).when( variableSpace ).environmentSubstitute( anyString() );\n    doReturn( FILE_NAME_VALID_PATH ).when( variableSpace ).environmentSubstitute( FILE_NAME_VALID_PATH );\n  }\n\n  @Test\n  public void testGetXmlWorksIfWeUpdateOnlyPartOfInputFilesInformation() throws Exception {\n    inputMeta.inputFiles = new FormatInputFile();\n    inputMeta.inputFiles.fileName = new String[] { FILE_NAME_VALID_PATH };\n\n    inputMeta.getXML();\n\n    assertEquals( inputMeta.inputFiles.fileName.length, inputMeta.inputFiles.fileMask.length );\n    assertEquals( inputMeta.inputFiles.fileName.length, inputMeta.inputFiles.excludeFileMask.length );\n    assertEquals( inputMeta.inputFiles.fileName.length, inputMeta.inputFiles.fileRequired.length );\n    assertEquals( inputMeta.inputFiles.fileName.length, inputMeta.inputFiles.includeSubFolders.length );\n    //specific for bigdata format\n    assertEquals( inputMeta.inputFiles.fileName.length, inputMeta.inputFiles.environment.length );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/test/java/org/pentaho/big/data/kettle/plugins/formats/orc/output/OrcOutputFieldTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.orc.output;\n\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.format.OrcSpec;\n\nimport static org.junit.Assert.*;\n\npublic class OrcOutputFieldTest {\n\n  @Test\n  public void setFormatTypeTest() {\n    //Names must be unique to each data type and should be addressable like the id\n    OrcOutputField f;\n    for ( OrcSpec.DataType dataType : OrcSpec.DataType.values() ) {\n\n      //Set by Name\n      f = new OrcOutputField();\n      f.setFormatType( dataType.getName() );\n      assertEquals( \"Checking setting of \\\"\" + dataType.getName() + \"\\\"\", dataType, f.getOrcType() );\n\n      //Set by Id\n      f = new OrcOutputField();\n      f.setFormatType( String.valueOf( dataType.getId() ) );\n      assertEquals( \"Checking setting of \\\"\" + dataType.getId() + \"\\\"\", dataType, f.getOrcType() );\n\n      //Set by Enum\n      f = new OrcOutputField();\n      f.setFormatType( String.valueOf( dataType.toString() ) );\n      assertEquals( \"Checking setting of \\\"\" + dataType.toString() + \"\\\"\", dataType, f.getOrcType() );\n    }\n  }\n}"
  },
  {
    "path": "kettle-plugins/formats-meta/src/test/java/org/pentaho/big/data/kettle/plugins/formats/orc/output/OrcOutputMetabaseTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.orc.output;\n\nimport org.apache.orc.CompressionKind;\nimport org.junit.Assert;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.runners.MockitoJUnitRunner;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\n\nimport static org.mockito.Mockito.spy;\n\n/**\n * Created by rmansoor on 4/8/2018.\n */\n@RunWith( MockitoJUnitRunner.class )\npublic class OrcOutputMetabaseTest {\n  private OrcOutputMetaBase metaBase;\n\n  @Before\n  public void setUp() throws Exception {\n    metaBase = spy( new OrcOutputMetaBase() {\n      @Override\n      public StepInterface getStep( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr,\n          TransMeta transMeta,\n          Trans trans ) {\n        return null;\n      }\n\n      @Override public StepDataInterface getStepData() {\n        return null;\n      }\n    } );\n  }\n\n  @Test\n  public void setCompressionType() {\n    assertCompression( \"snappy\", CompressionKind.SNAPPY );\n    assertCompression( \"Snappy\", CompressionKind.SNAPPY );\n    assertCompression( \"SNAPPY\", CompressionKind.SNAPPY );\n    assertCompression( \"zlib\", CompressionKind.ZLIB );\n    assertCompression( \"Zlib\", CompressionKind.ZLIB );\n    assertCompression( \"ZLIB\", CompressionKind.ZLIB );\n    assertCompression( \"lzo\", CompressionKind.LZO );\n    assertCompression( \"Lzo\", CompressionKind.LZO );\n    assertCompression( \"LZO\", CompressionKind.LZO );\n    assertCompression( \"None\", CompressionKind.NONE );\n    assertCompression( \"none\", CompressionKind.NONE );\n    assertCompression( \"NONE\", CompressionKind.NONE );\n    assertCompression( \"lz4\", CompressionKind.LZ4 );\n    assertCompression( \"Lz4\", CompressionKind.LZ4 );\n    assertCompression( \"LZ4\", CompressionKind.LZ4 );\n    assertCompression( \"zstd\", CompressionKind.ZSTD );\n    assertCompression( \"Zstd\", CompressionKind.ZSTD );\n    assertCompression( \"ZSTD\", CompressionKind.ZSTD );\n  }\n\n  // Sets the compression type and verifies the stored value matches the expected ORC CompressionKind.\n  private void assertCompression( String type, CompressionKind expected ) {\n    metaBase.setCompressionType( type );\n    Assert.assertEquals( expected.toString(), metaBase.getCompressionType() );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/test/java/org/pentaho/big/data/kettle/plugins/formats/parquet/input/ParquetInputMetaBaseTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.parquet.input;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Matchers.anyString;\nimport static org.mockito.Mockito.doReturn;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.spy;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.big.data.kettle.plugins.formats.FormatInputFile;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.steps.named.cluster.NamedClusterEmbedManager;\n\npublic class ParquetInputMetaBaseTest {\n\n  private static final String FILE_NAME_VALID_PATH = \"path/to/file\";\n\n  private ParquetInputMetaBase inputMeta;\n  private VariableSpace variableSpace;\n\n  @Before\n  public void setUp() throws Exception {\n    NamedClusterEmbedManager  manager = mock( NamedClusterEmbedManager.class );\n\n    TransMeta parentTransMeta = mock( TransMeta.class );\n    doReturn( manager ).when( parentTransMeta ).getNamedClusterEmbedManager();\n\n    StepMeta parentStepMeta = mock( StepMeta.class );\n    doReturn( parentTransMeta ).when( parentStepMeta ).getParentTransMeta();\n\n    inputMeta = new ParquetInputMetaBase() {\n      @Override\n      public StepDataInterface getStepData() {\n        return null;\n      }     \n      @Override\n      public StepInterface getStep( 
StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta, Trans trans ) {\n        return null;\n      }\n    };\n\n    inputMeta.setParentStepMeta( parentStepMeta );\n    inputMeta = spy( inputMeta );\n    variableSpace = mock( VariableSpace.class );\n\n    doReturn( \"<def>\" ).when( variableSpace ).environmentSubstitute( anyString() );\n    doReturn( FILE_NAME_VALID_PATH ).when( variableSpace ).environmentSubstitute( FILE_NAME_VALID_PATH );\n  }\n\n  @Test\n  public void testGetXmlWorksIfWeUpdateOnlyPartOfInputFilesInformation() throws Exception {\n    inputMeta.inputFiles = new FormatInputFile();\n    inputMeta.inputFiles.fileName = new String[] { FILE_NAME_VALID_PATH };\n\n    inputMeta.getXML();\n\n    assertEquals( inputMeta.inputFiles.fileName.length, inputMeta.inputFiles.fileMask.length );\n    assertEquals( inputMeta.inputFiles.fileName.length, inputMeta.inputFiles.excludeFileMask.length );\n    assertEquals( inputMeta.inputFiles.fileName.length, inputMeta.inputFiles.fileRequired.length );\n    assertEquals( inputMeta.inputFiles.fileName.length, inputMeta.inputFiles.includeSubFolders.length );\n    //specific for bigdata format\n    assertEquals( inputMeta.inputFiles.fileName.length, inputMeta.inputFiles.environment.length );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/test/java/org/pentaho/big/data/kettle/plugins/formats/parquet/output/ParquetOutputMetaBaseTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.formats.parquet.output;\n\nimport org.apache.parquet.hadoop.metadata.CompressionCodecName;\nimport org.junit.Assert;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mock;\nimport org.mockito.runners.MockitoJUnitRunner;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.steps.named.cluster.NamedClusterEmbedManager;\n\nimport static org.mockito.Matchers.eq;\nimport static org.mockito.Mockito.spy;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n@RunWith( MockitoJUnitRunner.class )\npublic class ParquetOutputMetaBaseTest {\n\n  @Mock StepMeta parentStepMeta;\n  @Mock TransMeta parentTransMeta;\n  @Mock NamedClusterEmbedManager namedClusterEmbedManager;\n  private ParquetOutputMetaBase metaBase;\n\n  @Before\n  public void setUp() throws Exception {\n    metaBase = spy( new ParquetOutputMetaBase() {\n      @Override\n      public StepInterface getStep( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr,\n                                    TransMeta transMeta,\n                                    Trans trans ) {\n        return null;\n      }\n\n      @Override public StepDataInterface getStepData() {\n        return null;\n      }\n    } );\n  }\n\n  @Test\n  public void getXMLShouldCallRegisterUrl() {\n    
metaBase.setFilename( \"hc://HC/fileName\" );\n    when( parentStepMeta.getParentTransMeta() ).thenReturn( parentTransMeta );\n    when( parentTransMeta.getNamedClusterEmbedManager() ).thenReturn( namedClusterEmbedManager );\n    metaBase.setParentStepMeta( parentStepMeta );\n\n    metaBase.getXML();\n    verify( namedClusterEmbedManager ).registerUrl( eq( \"hc://HC/fileName\" ) );\n  }\n\n  @Test\n  public void setCompressionType() {\n    assertCompression( \"snappy\", CompressionCodecName.SNAPPY );\n    assertCompression( \"Snappy\", CompressionCodecName.SNAPPY );\n    assertCompression( \"SNAPPY\", CompressionCodecName.SNAPPY );\n    assertCompression( \"gzip\", CompressionCodecName.GZIP );\n    assertCompression( \"Gzip\", CompressionCodecName.GZIP );\n    assertCompression( \"GZIP\", CompressionCodecName.GZIP );\n    assertCompression( \"lzo\", CompressionCodecName.LZO );\n    assertCompression( \"Lzo\", CompressionCodecName.LZO );\n    assertCompression( \"LZO\", CompressionCodecName.LZO );\n    assertCompression( \"brotli\", CompressionCodecName.BROTLI );\n    assertCompression( \"Brotli\", CompressionCodecName.BROTLI );\n    assertCompression( \"BROTLI\", CompressionCodecName.BROTLI );\n    assertCompression( \"lz4\", CompressionCodecName.LZ4 );\n    assertCompression( \"Lz4\", CompressionCodecName.LZ4 );\n    assertCompression( \"LZ4\", CompressionCodecName.LZ4 );\n    assertCompression( \"zstd\", CompressionCodecName.ZSTD );\n    assertCompression( \"Zstd\", CompressionCodecName.ZSTD );\n    assertCompression( \"ZSTD\", CompressionCodecName.ZSTD );\n    assertCompression( \"lz4_raw\", CompressionCodecName.LZ4_RAW );\n    assertCompression( \"Lz4_raw\", CompressionCodecName.LZ4_RAW );\n    assertCompression( \"LZ4_RAW\", CompressionCodecName.LZ4_RAW );\n    assertCompression( \"uncompressed\", CompressionCodecName.UNCOMPRESSED );\n    assertCompression( \"Uncompressed\", CompressionCodecName.UNCOMPRESSED );\n    assertCompression( \"UNCOMPRESSED\", CompressionCodecName.UNCOMPRESSED );\n  }\n\n  @Test\n  public void setParquetVersion() {\n    assertParquetVersion( \"Parquet 1.0\", ParquetOutputMetaBase.ParquetVersion.PARQUET_1 );\n    assertParquetVersion( \"PARQUET_1\", ParquetOutputMetaBase.ParquetVersion.PARQUET_1 );\n    assertParquetVersion( \"Parquet 2.0\", ParquetOutputMetaBase.ParquetVersion.PARQUET_2 );\n    assertParquetVersion( \"PARQUET_2\", ParquetOutputMetaBase.ParquetVersion.PARQUET_2 );\n    // Unrecognized values fall back to PARQUET_1\n    assertParquetVersion( \"1235\", ParquetOutputMetaBase.ParquetVersion.PARQUET_1 );\n    assertParquetVersion( \"ABC\", ParquetOutputMetaBase.ParquetVersion.PARQUET_1 );\n  }\n\n  // Sets the compression type and verifies the stored value matches the expected Parquet codec name.\n  private void assertCompression( String type, CompressionCodecName expected ) {\n    metaBase.setCompressionType( type );\n    Assert.assertEquals( expected.toString(), metaBase.getCompressionType() );\n  }\n\n  // Sets the Parquet version and verifies the stored value matches the expected ParquetVersion.\n  private void assertParquetVersion( String version, ParquetOutputMetaBase.ParquetVersion expected ) {\n    metaBase.setParquetVersion( version );\n    Assert.assertEquals( expected.toString(), metaBase.getParquetVersion() );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/formats-meta/src/test/resources/org/pentaho/big/data/kettle/plugins/formats/orc/input/OrcInput.xml",
    "content": "<step>\n  <name>Orc Input</name>\n  <type>OrcInputNew</type>\n  <description/>\n  <distribute>Y</distribute>\n  <custom_distribution/>\n  <copies>1</copies>\n  <partitioning>\n    <method>none</method>\n    <schema_name/>\n  </partitioning>\n  <passing_through_fields>N</passing_through_fields>\n  <filename>SampleFileName</filename>\n  <fields>\n    <field>\n      <path>SamplePath</path>\n      <name>SampleName</name>\n      <type>String</type>\n      <nullable>false</nullable>\n      <default>SampleDefault</default>\n      <sourcetype>String</sourcetype>\n    </field>\n  </fields>\n  <cluster_schema/>\n  <remotesteps>\n    <input>\n    </input>\n    <output>\n    </output>\n  </remotesteps>\n  <GUI>\n    <xloc>416</xloc>\n    <yloc>112</yloc>\n    <draw>Y</draw>\n  </GUI>\n</step>\n"
  },
  {
    "path": "kettle-plugins/guiTestActionHandlers/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-kettle-plugins-guiTestActionHandlers</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>jar</packaging>\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-api-runtimeTest</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-ui-swt</artifactId>\n      <version>${pdi.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/guiTestActionHandlers/src/main/java/org/pentaho/big/data/plugins/gui/test/actionHandlers/ShowHelpDialogActionHandler.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.plugins.gui.test.actionHandlers;\n\nimport org.eclipse.swt.widgets.Display;\nimport org.pentaho.di.ui.core.dialog.ShowHelpDialog;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.runtime.test.action.RuntimeTestAction;\nimport org.pentaho.runtime.test.action.RuntimeTestActionHandler;\nimport org.pentaho.runtime.test.action.impl.HelpUrlPayload;\n\n/**\n * Created by bryan on 9/9/15.\n */\npublic class ShowHelpDialogActionHandler implements RuntimeTestActionHandler {\n\n  @Override public boolean canHandle( RuntimeTestAction runtimeTestAction ) {\n    return runtimeTestAction.getPayload() instanceof HelpUrlPayload;\n  }\n\n  @Override public void handle( RuntimeTestAction runtimeTestAction ) {\n    // Cast checked in canHandle()\n    final HelpUrlPayload helpUrlPayload = (HelpUrlPayload) runtimeTestAction.getPayload();\n    final Spoon spoon = Spoon.getInstance();\n    Display display = spoon.getDisplay();\n    Runnable showRunnable = new Runnable() {\n      @Override public void run() {\n        new ShowHelpDialog( spoon.getShell(),\n          helpUrlPayload.getTitle(),\n          helpUrlPayload.getUrl().toString(),\n          helpUrlPayload.getHeader() ).open();\n      }\n    };\n    if ( Thread.currentThread() == display.getThread() ) {\n      showRunnable.run();\n    } else {\n      display.asyncExec( showRunnable );\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/guiTestActionHandlers/src/main/resources/OSGI-INF/blueprint/blueprint.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\n           xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n           xsi:schemaLocation=\"http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\">\n  <bean id=\"showHelpDialogActionHandler\"\n        class=\"org.pentaho.big.data.plugins.gui.test.actionHandlers.ShowHelpDialogActionHandler\" scope=\"singleton\"/>\n\n  <service ref=\"showHelpDialogActionHandler\" interface=\"org.pentaho.runtime.test.action.RuntimeTestActionHandler\"/>\n</blueprint>"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pentaho-big-data-kettle-plugins-hadoop-cluster</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <name>Hadoop Cluster Plugin</name>\n\n  <modules>\n    <module>ui</module>\n  </modules>\n\n  <properties>\n    <pdi.version>11.1.0.0-SNAPSHOT</pdi.version>\n    <dependency.karaf.version>3.0.3</dependency.karaf.version>\n    <dependency.javax.servlet-api.version>3.0.1</dependency.javax.servlet-api.version>\n    <dependency.javax.ws.rs-api.version>2.0</dependency.javax.ws.rs-api.version>\n    <version.for.license>${project.version}</version.for.license>\n    <platform.version>11.1.0.0-SNAPSHOT</platform.version>\n    <swt.version>4.6</swt.version>\n    <jface.version>3.34.0</jface.version>\n  </properties>\n  <dependencyManagement>\n    <dependencies>\n      <dependency>\n        <groupId>pentaho-kettle</groupId>\n        <artifactId>kettle-core</artifactId>\n        <version>${pdi.version}</version>\n        <scope>provided</scope><!--FOR UI EXECUTION AS A STANDALONE (comment scope)-->\n      </dependency>\n      <dependency>\n        <groupId>pentaho-kettle</groupId>\n        <artifactId>kettle-ui-swt</artifactId>\n        <version>${pdi.version}</version>\n        <scope>provided</scope><!--FOR UI EXECUTION AS A STANDALONE (comment scope)-->\n      </dependency>\n      <dependency>\n        <groupId>pentaho-kettle</groupId>\n        <artifactId>kettle-engine</artifactId>\n        <version>${pdi.version}</version>\n        
<scope>provided</scope><!--FOR UI EXECUTION AS A STANDALONE (comment scope)-->\n      </dependency>\n      <dependency>\n        <groupId>pentaho</groupId>\n        <artifactId>pentaho-big-data-impl-clusterTests</artifactId>\n        <version>${project.version}</version>\n        <scope>provided</scope>\n      </dependency>\n      <dependency>\n        <groupId>pentaho</groupId>\n        <artifactId>pentaho-platform-extensions</artifactId>\n        <version>${platform.version}</version>\n        <scope>provided</scope>\n        <exclusions>\n          <exclusion>\n            <artifactId>*</artifactId>\n            <groupId>*</groupId>\n          </exclusion>\n        </exclusions>\n      </dependency>\n      <dependency>\n        <groupId>javax.ws.rs</groupId>\n        <artifactId>javax.ws.rs-api</artifactId>\n        <version>${dependency.javax.ws.rs-api.version}</version>\n        <scope>provided</scope>\n      </dependency>\n      <dependency>\n        <groupId>javax.servlet</groupId>\n        <artifactId>javax.servlet-api</artifactId>\n        <version>${dependency.javax.servlet-api.version}</version>\n        <scope>provided</scope>\n      </dependency>\n      <dependency>\n        <groupId>org.eclipse.swt</groupId>\n        <artifactId>org.eclipse.swt.gtk.linux.x86_64</artifactId>\n        <version>${swt.version}</version>\n        <!--scope>provided</scope--><!--FOR UI EXECUTION AS A STANDALONE (comment scope)-->\n      </dependency>\n      <dependency>\n        <groupId>org.eclipse.platform</groupId>\n        <artifactId>org.eclipse.jface</artifactId>\n        <version>${jface.version}</version>\n        <scope>provided</scope>\n      </dependency>\n      <dependency>\n        <groupId>org.pentaho.di.plugins</groupId>\n        <artifactId>core-ui</artifactId>\n        <version>${pdi.version}</version>\n      </dependency>\n    </dependencies>\n  </dependencyManagement>\n</project>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-hadoop-cluster</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>hadoop-cluster-ui</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>jar</packaging>\n  <name>Hadoop Cluster Plugin UI</name>\n\n  <properties>\n    <maven-replacer-plugin.version>1.5.2</maven-replacer-plugin.version>\n    <platform.version>11.1.0.0-SNAPSHOT</platform.version>\n    <mockito4.version>4.0.0</mockito4.version>\n    <!--FOR UI EXECUTION AS A STANDALONE (uncomment \"exec.mainClass\" property)-->\n    <exec.mainClass>org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.NamedClusterDialog</exec.mainClass>\n    </properties>\n\n  <dependencies>\n    <!--FOR UI EXECUTION AS A STANDALONE (uncomment \"batik-bridge\" dependency)-->\n    <!--dependency>\n      <groupId>org.apache.xmlgraphics</groupId>\n      <artifactId>batik-bridge</artifactId>\n      <version>1.17</version>\n    </dependency-->\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-clusterTests</artifactId>\n      <version>${pdi.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-api-runtimeTest</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope><!--FOR UI EXECUTION AS A STANDALONE (comment scope)-->\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-cluster</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope><!--FOR 
UI EXECUTION AS A STANDALONE (comment scope)-->\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-ui-swt</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.di.plugins</groupId>\n      <artifactId>pentaho-metastore-locator-api</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope><!--FOR UI EXECUTION AS A STANDALONE (comment scope)-->\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-platform-extensions</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-platform-core</artifactId>\n      <version>${platform.version}</version>\n      <scope>provided</scope><!--FOR UI EXECUTION AS A STANDALONE (comment scope)-->\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api-core</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>provided</scope><!--FOR UI EXECUTION AS A STANDALONE (comment scope)-->\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>provided</scope><!--FOR UI EXECUTION AS A STANDALONE (comment scope)-->\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-common-ui</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope><!--FOR UI EXECUTION AS A STANDALONE (comment scope)-->\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      
<artifactId>pentaho-big-data-impl-clusterTests</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.swt</groupId>\n      <artifactId>org.eclipse.swt.gtk.linux.x86_64</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.platform</groupId>\n      <artifactId>org.eclipse.jface</artifactId>\n    </dependency>\n    <dependency>\n      <!--this is required by OSGi bean declared in src/main/resources-filtered/OSGI-INF/blueprint/blueprint.xml-->\n      <groupId>com.fasterxml.jackson.jaxrs</groupId>\n      <artifactId>jackson-jaxrs-json-provider</artifactId>\n      <version>${fasterxml-jackson.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.ops4j.pax.web</groupId>\n      <artifactId>pax-web-spi</artifactId>\n      <version>${pax-web.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.ops4j.pax.swissbox</groupId>\n      <artifactId>pax-swissbox-core</artifactId>\n      <version>1.7.1</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>commons-configuration</groupId>\n      <artifactId>commons-configuration</artifactId>\n      <version>1.6</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.commons</groupId>\n      <artifactId>commons-collections4</artifactId>\n      <version>4.1</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.httpcomponents</groupId>\n      <artifactId>httpmime</artifactId>\n      <version>4.5.14</version>\n    </dependency>\n  
  <dependency>\n      <groupId>javax.ws.rs</groupId>\n      <artifactId>javax.ws.rs-api</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>javax.servlet</groupId>\n      <artifactId>javax.servlet-api</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy</artifactId>\n      <version>${project.version}</version>\n      <scope>compile</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy-core</artifactId>\n      <version>${project.version}</version>\n      <scope>compile</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.di.plugins</groupId>\n      <artifactId>core-ui</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.commons</groupId>\n      <artifactId>commons-fileupload2-core</artifactId>\n      <version>${commons-fileupload.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito4.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      
<groupId>org.mockito</groupId>\n      <artifactId>mockito-inline</artifactId>\n      <version>${mockito4.version}</version>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n  <build>\n    <resources>\n      <resource>\n        <filtering>false</filtering>\n        <directory>src/main/resources</directory>\n        <includes>\n          <include>**/*</include>\n        </includes>\n        <excludes>\n          <exclude>META-INF/**/*</exclude>\n          <exclude>OSGI-INF/**/*</exclude>\n        </excludes>\n      </resource>\n      <resource>\n        <filtering>true</filtering>\n        <directory>src/main/resources</directory>\n        <includes>\n          <include>META-INF/**/*</include>\n          <include>OSGI-INF/**/*</include>\n        </includes>\n      </resource>\n    </resources>\n    <plugins>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/HadoopClusterDelegate.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog;\n\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.NamedClusterDialog;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.CustomWizardDialog;\nimport org.pentaho.di.base.AbstractMeta;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\nimport org.pentaho.runtime.test.RuntimeTester;\n\nimport java.util.Collection;\nimport java.util.Map;\nimport java.util.function.Supplier;\n\npublic class HadoopClusterDelegate {\n\n  private static final Class<?> PKG = HadoopClusterDelegate.class;\n  private final Supplier<Spoon> spoonSupplier = Spoon::getInstance;\n\n  private final RuntimeTester runtimeTester;\n  private final NamedClusterService namedClusterService;\n\n  private static final LogChannelInterface log =\n    KettleLogStore.getLogChannelInterfaceFactory().create( \"HadoopClusterDelegate\" );\n\n  public HadoopClusterDelegate( NamedClusterService clusterService, RuntimeTester tester ) {\n    namedClusterService = clusterService;\n    runtimeTester = tester;\n  }\n\n  public void openDialog( String dialogState, Map<String, String> urlParams ) {\n    try {\n      Collection<MetastoreLocator> metastoreLocators = 
PluginServiceLoader.loadServices( MetastoreLocator.class );\n      IMetaStore metastore = metastoreLocators.stream().findFirst().get().getMetastore();\n        CustomWizardDialog wizardDialog = new CustomWizardDialog( spoonSupplier.get().getShell(),\n          new NamedClusterDialog( namedClusterService, metastore,\n            spoonSupplier.get().getActiveMeta() == null  ? spoonSupplier.get().getManagementBowl().getADefaultVariableSpace() :\n            (AbstractMeta)spoonSupplier.get().getActiveMeta(),\n            runtimeTester, urlParams, dialogState ) );\n        wizardDialog.open();\n    } catch ( Exception e ) {\n      log.logError( e.getMessage() );\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/HadoopClusterDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog;\n\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.browser.BrowserFunction;\nimport org.eclipse.swt.graphics.Image;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.ui.core.dialog.ThinDialog;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.util.HelpUtils;\nimport org.pentaho.platform.settings.ServerPort;\nimport org.pentaho.platform.settings.ServerPortRegistry;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.function.Supplier;\n\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints.HadoopClusterManager.STRING_NAMED_CLUSTERS;\n\npublic class HadoopClusterDialog extends ThinDialog {\n\n  private static final Image LOGO = GUIResource.getInstance().getImageLogoSmall();\n  private static final String OSGI_SERVICE_PORT = \"OSGI_SERVICE_PORT\";\n  private static final int OPTIONS = SWT.DIALOG_TRIM | SWT.RESIZE | SWT.MAX;\n  private static final String THIN_CLIENT_HOST = \"THIN_CLIENT_HOST\";\n  private static final String THIN_CLIENT_PORT = \"THIN_CLIENT_PORT\";\n  private static final String LOCALHOST = \"127.0.0.1\";\n\n  private Supplier<Spoon> spoonSupplier = Spoon::getInstance;\n\n  private final LogChannelInterface log = 
spoonSupplier.get().getLog();\n\n  HadoopClusterDialog( Shell shell, int width, int height ) {\n    super( shell, width, height, true );\n  }\n\n  void open( String title, String thinAppState, Map<String, String> urlParams ) {\n\n    StringBuilder clientPath = new StringBuilder();\n    clientPath.append( getClientPath() );\n    clientPath.append( \"#!/\" );\n    if ( thinAppState != null ) {\n      clientPath.append( thinAppState );\n    }\n\n    //Convert map into url params string\n    HashMap<String, String> params = new HashMap<>( urlParams );\n    params.put( \"connectedToRepo\", Boolean.toString( connectedToRepo() ) );\n    final String paramString = params.entrySet().stream()\n      .map( p -> p.getKey() + \"=\" + p.getValue() )\n      .reduce( ( p1, p2 ) -> p1 + \"&\" + p2 )\n      .map( s -> \"?\" + s )\n      .orElse( \"\" );\n\n    clientPath.append( paramString );\n    String endpointURL = getEndpointURL( clientPath.toString() );\n    log.logDebug( \"Thin endpoint URL:  \" + endpointURL );\n    super.createDialog( title, endpointURL, OPTIONS, LOGO );\n    super.dialog.setMinimumSize( 640, 630 );\n\n    new BrowserFunction( browser, \"open\" ) {\n      @Override public Object function( Object[] arguments ) {\n        HelpUtils.openHelpDialog( spoonSupplier.get().getDisplay().getActiveShell(), \"\", (String) arguments[ 0 ], \"\" );\n        return true;\n      }\n    };\n\n    new BrowserFunction( browser, \"close\" ) {\n      @Override public Object function( Object[] arguments ) {\n        Runnable execute = () -> {\n          browser.dispose();\n          dialog.close();\n          dialog.dispose();\n        };\n        display.asyncExec( execute );\n        return true;\n      }\n    };\n\n    new BrowserFunction( browser, \"setTitle\" ) {\n      @Override public Object function( Object[] arguments ) {\n        Runnable execute = () -> {\n          dialog.setText( (String) arguments[ 0 ] );\n        };\n        display.asyncExec( execute );\n        
return true;\n      }\n    };\n\n    while ( !dialog.isDisposed() ) {\n      if ( !display.readAndDispatch() ) {\n        display.sleep();\n      }\n    }\n    Spoon spoon = spoonSupplier.get();\n    if ( spoon != null && spoon.getShell() != null ) {\n      spoon.getShell().getDisplay().asyncExec( () -> spoon.refreshTree( STRING_NAMED_CLUSTERS ) );\n    }\n  }\n\n  private String getClientPath() {\n    Properties properties = new Properties();\n    try ( InputStream inputStream = HadoopClusterDialog.class.getClassLoader().getResourceAsStream( \"project.properties\" ) ) {\n      properties.load( inputStream );\n    } catch ( IOException e ) {\n      log.logError( e.getMessage(), e );\n    }\n    return properties.getProperty( \"CLIENT_PATH\" );\n  }\n\n  private int getOsgiServicePort() {\n    // if no service port is specified, try getting it from the OSGi service port registry\n    ServerPort osgiServicePort = ServerPortRegistry.getPort( OSGI_SERVICE_PORT );\n    if ( osgiServicePort != null ) {\n      return osgiServicePort.getAssignedPort();\n    }\n    throw new IllegalStateException( \"No osgi service port defined\" );\n  }\n\n  private String getEndpointURL( String path ) {\n    if ( connectedToRepo() ) {\n      return getRepo().getUri()\n        .orElseThrow( () -> new IllegalStateException( \"Repo URI not defined\" ) )\n        .toString() + \"/osgi\" + path;\n    }\n    if ( Const.isRunningOnWebspoonMode() ) {\n      return System.getProperty( \"KETTLE_CONTEXT_PATH\", \"\" ) + \"/osgi\" + path;\n    }\n    String host;\n    int port;\n    try {\n      host = getKettleProperty( THIN_CLIENT_HOST );\n      port = Integer.parseInt( getKettleProperty( THIN_CLIENT_PORT ) );\n    } catch ( Exception e ) {\n      host = LOCALHOST;\n      port = getOsgiServicePort();\n    }\n    return \"http://\" + host + \":\" + port + path;\n  }\n\n  private boolean connectedToRepo() {\n    Repository repo = getRepo();\n    return repo != null && repo.getUri().isPresent();\n  }\n\n  private Repository getRepo() {\n    return spoonSupplier.get().getRepository();\n  }\n\n  private String getKettleProperty( String propertyName ) {\n    // loaded in system properties at startup\n    return System.getProperty( propertyName );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/wizard/NamedClusterDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard;\n\nimport org.eclipse.jface.wizard.Wizard;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.pages.ClusterSettingsPage;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.pages.KerberosSettingsPage;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.pages.KnoxSettingsPage;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.pages.ReportPage;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.pages.SecuritySettingsPage;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.pages.TestResultsPage;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.BadSiteFilesException;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.CustomWizardDialog;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints.HadoopClusterManager;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.model.ThinNameClusterModel;\nimport org.pentaho.big.data.plugins.common.ui.ClusterTestDialog;\nimport org.pentaho.di.core.Props;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.encryption.TwoWayPasswordEncoderPluginType;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport 
org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.metastore.MetaStoreConst;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.runtime.test.RuntimeTestStatus;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\n\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.function.Supplier;\n\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.pages.SecuritySettingsPage.NamedClusterSecurityType.NONE;\n\n/*\n * To run this dialog as stand alone for development purposes under UBUNTU do the following:\n * 1.Look for the following comment in the module:\n *   FOR UI EXECUTION AS A STANDALONE\n *   And either comment or uncomment the referred section as requested\n * 2.Execute running the following command at the root of the \"ui\" submodule:\n *   mvn clean compile exec:java\n *\n * TO DEBUG\n * mvn clean compile exec:exec -Dexec.executable=\"java\" -Dexec.args=\"-classpath %classpath -Xdebug\n * -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005 org.pentaho.big.data.kettle.plugins.hadoopcluster.ui\n * .dialog.wizard.NamedClusterDialog\"\n * */\n\npublic class NamedClusterDialog extends Wizard {\n\n  private String dialogState;\n  private boolean isEditMode;\n  private boolean isDuplicating;\n  private ClusterSettingsPage clusterSettingsPage;\n  private SecuritySettingsPage securitySettingsPage;\n  private KerberosSettingsPage 
kerberosSettingsPage;\n  private KnoxSettingsPage knoxSettingsPage;\n  private ReportPage reportPage;\n  private TestResultsPage testResultsPage;\n  private final HadoopClusterManager hadoopClusterManager;\n  private ThinNameClusterModel thinNameClusterModel;\n  private boolean isDevMode = false;\n  private final RuntimeTester runtimeTester;\n  private final VariableSpace variableSpace;\n  private final Supplier<Spoon> spoonSupplier = Spoon::getInstance;\n  private static final Class<?> PKG = ClusterSettingsPage.class;\n  private static final LogChannelInterface log =\n    KettleLogStore.getLogChannelInterfaceFactory().create( \"NamedClusterDialog\" );\n\n  public NamedClusterDialog( NamedClusterService namedClusterService, IMetaStore metastore, VariableSpace variables,\n                             RuntimeTester tester, Map<String, String> params, String dialogState ) {\n    setWindowTitle( BaseMessages.getString( PKG, \"NamedClusterDialog.newCluster\" ) );\n    variableSpace = variables;\n    runtimeTester = tester;\n    hadoopClusterManager = new HadoopClusterManager( spoonSupplier.get(), namedClusterService, metastore, \"\" );\n    String namedClusterNameParam = params.get( \"name\" );\n    isEditMode = namedClusterNameParam != null;\n    thinNameClusterModel = createModel( hadoopClusterManager.getNamedCluster( namedClusterNameParam ) );\n    String duplicateNamedClusterParam = params.get( \"duplicateName\" );\n    if ( duplicateNamedClusterParam != null ) {\n      thinNameClusterModel.setOldName( thinNameClusterModel.getName() );\n      thinNameClusterModel.setName( \"copy-of-\" + thinNameClusterModel.getName() );\n      isEditMode = false;\n      isDuplicating = true;\n    }\n    this.dialogState = dialogState;\n  }\n\n  public boolean isConnectedToRepo() {\n    boolean isConnectedToRepo = false;\n    if ( spoonSupplier.get() != null ) {\n      Repository repo = spoonSupplier.get().getRepository();\n      isConnectedToRepo = repo != null && 
repo.getUri().isPresent();\n    }\n    return isConnectedToRepo;\n  }\n\n  public String getShimIdentifier() {\n    return hadoopClusterManager.getShimIdentifier();\n  }\n\n  private ThinNameClusterModel createModel( ThinNameClusterModel model ) {\n    boolean isCreatingCluster = false;\n    if ( model == null ) {\n      model = new ThinNameClusterModel();\n      isCreatingCluster = true;\n    }\n    model.setName( model.getName() == null ? \"\" : model.getName() );\n    model.setShimIdentifier( model.getShimIdentifier() == null ? \"\" : model.getShimIdentifier() );\n    model.setHdfsHost( model.getHdfsHost() == null ? \"\" : model.getHdfsHost() );\n    model.setHdfsPort( model.getHdfsPort() != null ? model.getHdfsPort() : isCreatingCluster ? \"8020\" : \"\" );\n    model.setHdfsUsername( model.getHdfsUsername() == null ? \"\" : model.getHdfsUsername() );\n    model.setHdfsPassword( model.getHdfsPassword() == null ? \"\" : model.getHdfsPassword() );\n    model.setJobTrackerHost( model.getJobTrackerHost() == null ? \"\" : model.getJobTrackerHost() );\n    model.setJobTrackerPort( model.getJobTrackerPort() != null ? model.getJobTrackerPort() : isCreatingCluster ? \"8032\" : \"\" );\n    model.setZooKeeperHost( model.getZooKeeperHost() == null ? \"\" : model.getZooKeeperHost() );\n    model.setZooKeeperPort( model.getZooKeeperPort() != null ? model.getZooKeeperPort() : isCreatingCluster ? \"2181\" : \"\" );\n    model.setOozieUrl( model.getOozieUrl() == null ? \"\" : model.getOozieUrl() );\n    model.setKafkaBootstrapServers( model.getKafkaBootstrapServers() == null ? \"\" : model.getKafkaBootstrapServers() );\n    model.setOldName( model.getName() );\n    model.setSecurityType( model.getSecurityType() == null ? \"None\" : model.getSecurityType() );\n    model.setKerberosSubType( model.getKerberosSubType() == null ? 
\"Password\" : model.getKerberosSubType() );\n    model.setKerberosAuthenticationUsername(\n      model.getKerberosAuthenticationUsername() == null ? \"\" : model.getKerberosAuthenticationUsername() );\n    model.setKerberosAuthenticationPassword(\n      model.getKerberosAuthenticationPassword() == null ? \"\" : model.getKerberosAuthenticationPassword() );\n    model.setKerberosImpersonationUsername(\n      model.getKerberosImpersonationUsername() == null ? \"\" : model.getKerberosImpersonationUsername() );\n    model.setKerberosImpersonationPassword(\n      model.getKerberosImpersonationPassword() == null ? \"\" : model.getKerberosImpersonationPassword() );\n    model.setGatewayUrl( model.getGatewayUrl() == null ? \"\" : model.getGatewayUrl() );\n    model.setGatewayUsername( model.getGatewayUsername() == null ? \"\" : model.getGatewayUsername() );\n    model.setGatewayPassword( model.getGatewayPassword() == null ? \"\" : model.getGatewayPassword() );\n    model.setKeytabImpFile( model.getKeytabImpFile() == null ? \"\" : model.getKeytabImpFile() );\n    model.setKeytabAuthFile(\n      model.getKeytabAuthFile() == null ? BaseMessages.getString( PKG, \"NamedClusterDialog.noFileSelected\" ) :\n        model.getKeytabAuthFile() );\n    model.setSiteFiles( model.getSiteFiles() == null ? new ArrayList<>() : model.getSiteFiles() );\n    return model;\n  }\n\n  public void initialize( ThinNameClusterModel model ) {\n    if ( !dialogState.equals( \"testing\" ) ) {\n      thinNameClusterModel = model == null ? 
createModel( null ) : createModel( model );\n      clusterSettingsPage.initialize( thinNameClusterModel );\n      securitySettingsPage.initialize( thinNameClusterModel );\n      knoxSettingsPage.initialize( thinNameClusterModel );\n      kerberosSettingsPage.initialize( thinNameClusterModel );\n      reportPage.initialize( thinNameClusterModel );\n      testResultsPage.initialize( thinNameClusterModel );\n    } else {\n      try {\n        testResultsPage.initialize( model );\n        testResultsPage.setTestResults( getTestResults() );\n      } catch ( KettleException e ) {\n        log.logError( e.getMessage() );\n      }\n    }\n  }\n\n  public void addPages() {\n    if ( !dialogState.equals( \"testing\" ) ) {\n      clusterSettingsPage =\n        new ClusterSettingsPage( variableSpace, thinNameClusterModel );\n      addPage( clusterSettingsPage );\n      securitySettingsPage = new SecuritySettingsPage( thinNameClusterModel );\n      addPage( securitySettingsPage );\n      knoxSettingsPage = new KnoxSettingsPage( variableSpace, thinNameClusterModel );\n      addPage( knoxSettingsPage );\n      kerberosSettingsPage = new KerberosSettingsPage( variableSpace, thinNameClusterModel );\n      addPage( kerberosSettingsPage );\n      reportPage = new ReportPage( thinNameClusterModel );\n      addPage( reportPage );\n      testResultsPage = new TestResultsPage( variableSpace, thinNameClusterModel );\n      addPage( testResultsPage );\n    } else {\n      testResultsPage = new TestResultsPage( variableSpace, thinNameClusterModel );\n      addPage( testResultsPage );\n    }\n  }\n\n  public void editCluster() {\n    dialogState = \"new-edit\";\n    ThinNameClusterModel model = hadoopClusterManager.getNamedCluster( thinNameClusterModel.getName() );\n    if ( model != null ) {\n      isEditMode = true;\n      isDuplicating = false;\n      initialize( model );\n    } else {\n      isEditMode = false;\n      isDuplicating = false;\n    }\n    getContainer().showPage( getPage( 
ClusterSettingsPage.class.getSimpleName() ) );\n  }\n\n  public void createNewCluster() {\n    isEditMode = false;\n    isDuplicating = false;\n    initialize( null );\n    getContainer().showPage( getPage( ClusterSettingsPage.class.getSimpleName() ) );\n  }\n\n  public boolean performFinish() {\n    boolean finish = false;\n    // We are about to either create or edit hadoop cluster. Send shim identifier to the currently loaded driver\n    String shimIdentifier = getShimIdentifier();\n    if ( shimIdentifier != null ) {\n      thinNameClusterModel.setShimIdentifier( shimIdentifier );\n    }\n\n    String currentPage = super.getContainer().getCurrentPage().getName();\n    if ( reportPage != null && !currentPage.equals( reportPage.getClass().getSimpleName() ) &&\n      !currentPage.equals( testResultsPage.getClass().getSimpleName() ) ) {\n      if ( isEditMode || isDuplicating ) {\n        saveEditedNamedCluster();\n      } else {\n        saveNewNamedCluster();\n      }\n    } else {\n      finish = true;\n    }\n    if ( spoonSupplier.get() != null ) {\n      spoonSupplier.get().refreshTree( BaseMessages.getString( PKG, \"HadoopClusterTree.Title\" ) );\n    }\n    return finish;\n  }\n\n  private void saveNewNamedCluster() {\n    try {\n      hadoopClusterManager.saveNewNamedCluster( thinNameClusterModel, dialogState );\n      reportPage.setTestResults( getTestResults() );\n    } catch ( BadSiteFilesException e ) {\n      reportPage.setTestResult( BaseMessages.getString( PKG, \"NamedClusterDialog.test.importFailed\" ) );\n    } catch ( IOException | KettleException e ) {\n      log.logError( e.getMessage() );\n    }\n    getContainer().showPage( reportPage );\n  }\n\n  private void saveEditedNamedCluster() {\n    try {\n      hadoopClusterManager.saveEditedNamedCluster( thinNameClusterModel, isEditMode );\n      reportPage.setTestResults( getTestResults() );\n    } catch ( BadSiteFilesException e ) {\n      reportPage.setTestResult( BaseMessages.getString( PKG, 
\"NamedClusterDialog.test.importFailed\" ) );\n    } catch ( IOException | KettleException e ) {\n      log.logError( e.getMessage() );\n    }\n    getContainer().showPage( reportPage );\n  }\n\n  private Object[] getTestResults() throws KettleException {\n    NamedCluster namedCluster = hadoopClusterManager.getNamedClusterByName( thinNameClusterModel.getName() );\n    if ( isDevMode() ) {\n      if ( !dialogState.equals( \"testing\" ) ) {\n        return (Object[]) hadoopClusterManager.runTests( runtimeTester, thinNameClusterModel.getName() );\n      } else {\n        return new Object[] {};\n      }\n    } else {\n      RuntimeTestStatus runtimeTestStatus =\n        ClusterTestDialog.create( spoonSupplier.get().getShell(), namedCluster, runtimeTester ).open();\n      return hadoopClusterManager.produceTestCategories( runtimeTestStatus, namedCluster );\n    }\n  }\n\n  public boolean canFinish() {\n    // Hack to style the CustomWizardDialog.\n    ( (CustomWizardDialog) getContainer() ).style();\n    // Couldn't be done elsewhere because the \"TestResultsPage\" was not initialized by the wizard.\n\n    String currentPage = super.getContainer().getCurrentPage().getName();\n    if ( !dialogState.equals( \"testing\" ) ) {\n      if ( currentPage.equals( clusterSettingsPage.getClass().getSimpleName() ) ) {\n        ( (CustomWizardDialog) getContainer() ).enableCancelButton( true );\n      }\n      return\n        ( currentPage.equals( securitySettingsPage.getClass().getSimpleName() )\n          && securitySettingsPage.getSecurityType()\n          .equals( NONE ) ) || ( currentPage.equals( kerberosSettingsPage.getClass().getSimpleName() )\n          && kerberosSettingsPage.isPageComplete() || (\n          currentPage.equals( knoxSettingsPage.getClass().getSimpleName() )\n            && knoxSettingsPage.isPageComplete() )\n            || currentPage.equals( reportPage.getClass().getSimpleName() )\n            || currentPage.equals( 
testResultsPage.getClass().getSimpleName() ) );\n    } else {\n      // Set to Initialize \"TestResultsPage\" when \"dialogState\" is \"testing\" and disable its \"Finish\" button.\n      // Couldn't be done elsewhere because the \"TestResultsPage\" was not initialized by the wizard.\n      initialize( thinNameClusterModel );\n      return true;\n    }\n  }\n\n  public String getDialogState() {\n    return dialogState;\n  }\n\n  public boolean clusterNameExists( String clusterName ) {\n    return hadoopClusterManager.getNamedCluster( clusterName ) != null;\n  }\n\n  public void setDevMode( boolean devMode ) {\n    this.isDevMode = devMode;\n  }\n\n  public boolean isDevMode() {\n    return isDevMode;\n  }\n\n  public boolean isEditMode() {\n    return isEditMode;\n  }\n\n  public static void main( String[] args ) {\n    try {\n      PluginRegistry.addPluginType( TwoWayPasswordEncoderPluginType.getInstance() );\n      PluginRegistry.init( false );\n      Encr.init( \"Kettle\" );\n      KettleLogStore.init();\n      Display display = new Display();\n      Shell shell = new Shell( display );\n      PropsUI.init( display, Props.TYPE_PROPERTIES_SPOON );\n      NamedClusterDialog namedClusterDialog =\n        new NamedClusterDialog( NamedClusterManager.getInstance(), MetaStoreConst.openLocalPentahoMetaStore(),\n          new Variables(), RuntimeTesterImpl.getInstance(), new HashMap<String, String>(), \"new-edit\" );\n      namedClusterDialog.setDevMode( true );\n      CustomWizardDialog namedClusterWizardDialog = new CustomWizardDialog( shell, namedClusterDialog );\n      namedClusterWizardDialog.open();\n    } catch ( Exception e ) {\n      log.logError( e.getMessage() );\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/wizard/pages/ClusterSettingsPage.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.pages;\n\nimport org.pentaho.di.core.util.StringUtil;\nimport org.eclipse.jface.wizard.IWizardPage;\nimport org.eclipse.jface.wizard.WizardPage;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.ScrolledComposite;\nimport org.eclipse.swt.layout.GridData;\nimport org.eclipse.swt.layout.GridLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.FileDialog;\nimport org.eclipse.swt.widgets.Group;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.eclipse.swt.widgets.Table;\nimport org.eclipse.swt.widgets.TableColumn;\nimport org.eclipse.swt.widgets.TableItem;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.NamedClusterDialog;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.model.ThinNameClusterModel;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.util.HelpUtils;\n\nimport java.io.File;\nimport java.util.AbstractMap.SimpleImmutableEntry;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\nimport static 
org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.ONE_COLUMN;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.TWO_COLUMNS;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.createLabel;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.createText;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.decodePassword;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.getVersionForDriver;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.getVendorForDriver;\nimport static org.pentaho.di.ui.core.PropsUI.getDisplay;\n\npublic class ClusterSettingsPage extends WizardPage {\n  private PropsUI props;\n  private Composite parent;\n  private Composite mainPanel;\n  private ScrolledComposite clusterScrollPanel;\n  private TextVar hostNameTextFieldHdfsGroup;\n  private TextVar portTextFieldHdfsGroup;\n  private TextVar userNameTextFieldHdfsGroup;\n  private TextVar passwordTextFieldHdfsGroup;\n  private TextVar hostNameTextFieldJobTrackerGroup;\n  private TextVar portTextFieldJobTrackerGroup;\n  private TextVar hostNameTextFieldZooKeeperGroup;\n  private TextVar portTextFieldZooKeeperGroup;\n  private TextVar hostNameTextFieldOozieGroup;\n  private TextVar hostNameTextFieldKafkaGroup;\n  private Button deleteSiteFilesButton;\n  private Text nameOfNamedCluster;\n  private Table siteFilesTable;\n  private Group hdfsGroup;\n  private Group jobTrackerGroup;\n  private Group zooKeeperGroup;\n  private Group oozieGroup;\n  private Group kafkaGroup;\n  private Composite fillerComposite;\n  private Map<String, String> siteFilesPath;\n  private ThinNameClusterModel thinNameClusterModel;\n  private final Listener 
clusterListener = e -> validate();\n  private final VariableSpace variableSpace;\n  private static final Class<?> PKG = ClusterSettingsPage.class;\n  private String loadedShimVendor = BaseMessages.getString( PKG, \"NamedClusterDialog.noDriver\" );\n  private String loadedShimVersion = \"\";\n  private String shimIdentifier;\n\n  public ClusterSettingsPage( VariableSpace variables, ThinNameClusterModel model ) {\n    super( ClusterSettingsPage.class.getSimpleName() );\n    variableSpace = variables;\n    thinNameClusterModel = model;\n    setPageComplete( false );\n  }\n\n  public void createControl( Composite composite ) {\n    parent = new Composite( composite, SWT.NONE );\n    props = PropsUI.getInstance();\n    props.setLook( parent );\n    GridLayout gridLayout = new GridLayout( ONE_COLUMN, false );\n    parent.setLayout( gridLayout );\n    Composite basePanel = new Composite( parent, SWT.NONE );\n\n    //START OF MAIN LAYOUT\n    GridLayout basePanelGridLayout = new GridLayout( ONE_COLUMN, false );\n    basePanelGridLayout.marginWidth = 60; //TO CENTER CONTENTS\n    basePanelGridLayout.marginTop = 10; //TO CENTER CONTENTS\n    basePanelGridLayout.marginBottom = 30;\n    basePanelGridLayout.marginLeft = 20;\n    basePanel.setLayout( basePanelGridLayout );\n    GridData basePanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    basePanel.setLayoutData( basePanelGridData );\n    props.setLook( basePanel );\n    //END OF MAIN LAYOUT\n\n    //START OF HEADER\n    Composite headerPanel = new Composite( basePanel, SWT.NONE );\n    headerPanel.setLayout( new GridLayout( ONE_COLUMN, false ) );\n    GridData headerPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    headerPanel.setLayoutData( headerPanelGridData );\n    props.setLook( headerPanel );\n\n    GridData clusterNameLabelGridData = new GridData();\n    clusterNameLabelGridData.widthHint = 400; // Label width\n    createLabel( headerPanel, BaseMessages.getString( PKG, 
\"NamedClusterDialog.clusterName\" ),\n      clusterNameLabelGridData, props );\n\n    GridData clusterNameTextFieldGridData = new GridData();\n    clusterNameTextFieldGridData.widthHint = Const.isLinux() ? 395 : 409; // TextField width\n    nameOfNamedCluster = new Text( headerPanel, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    nameOfNamedCluster.setText( \"\" );\n    nameOfNamedCluster.setLayoutData( clusterNameTextFieldGridData );\n    nameOfNamedCluster.addListener( SWT.CHANGED, clusterListener );\n    nameOfNamedCluster.addListener( SWT.MouseExit, clusterListener );\n    props.setLook( nameOfNamedCluster );\n    //END OF HEADER\n\n\n    //START OF CLUSTER SCROLLABLE PANEL\n    clusterScrollPanel = new ScrolledComposite( basePanel, SWT.V_SCROLL | SWT.NONE );\n    clusterScrollPanel.setExpandHorizontal( true );\n    clusterScrollPanel.setExpandVertical( true );\n    clusterScrollPanel.setLayout( new GridLayout( ONE_COLUMN, false ) );\n    GridData clusterScrollPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    clusterScrollPanelGridData.heightHint = 490; //Height of the scrollable panel (WILL NEED TO ADJUST)\n    clusterScrollPanel.setLayoutData( clusterScrollPanelGridData );\n    props.setLook( clusterScrollPanel );\n\n    //START MAIN PANEL\n    mainPanel = new Composite( clusterScrollPanel, SWT.NONE );\n    mainPanel.setLayout( new GridLayout( ONE_COLUMN, false ) );\n    GridData mainPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    mainPanel.setLayoutData( mainPanelGridData );\n    props.setLook( mainPanel );\n    //END MAIN PANEL\n\n    createDriverGroup();\n    createSiteXMLFilesGroup();\n    //END OF CLUSTER SCROLLABLE PANEL\n\n    clusterScrollPanel.setContent( mainPanel );\n    initialize( thinNameClusterModel );\n    setControl( parent );\n  }\n\n  private void createDriverGroup() {\n    try {\n      loadedShimVendor = getLoadedDriverVendor();\n      loadedShimVersion = getLoadedDriverVersion();\n    } catch ( 
Exception e ) {\n      // Do nothing; fall back to the default loaded shim vendor and version\n    }\n    String loadedDriverText = loadedShimVendor + \" \" + loadedShimVersion;\n    String originalDriverText;\n    String originalShimIdentifier = thinNameClusterModel.getShimIdentifier();\n    if ( StringUtil.isEmpty( originalShimIdentifier ) ) {\n      originalDriverText = BaseMessages.getString( PKG, \"NamedClusterDialog.noDriver\" );\n    } else {\n      String vendor = getVendorForDriver( originalShimIdentifier );\n      String version = getVersionForDriver( originalShimIdentifier );\n      if ( StringUtil.isEmpty( vendor ) || StringUtil.isEmpty( version ) ) {\n        originalDriverText = BaseMessages.getString( PKG, \"NamedClusterDialog.noDriver\" );\n      } else {\n        originalDriverText = vendor + \" \" + version;\n      }\n    }\n\n    Composite driverGroupPanel = new Composite( mainPanel, SWT.NONE );\n    GridLayout driverGroupGridLayout = new GridLayout( ONE_COLUMN, true );\n    driverGroupGridLayout.marginWidth = 0;\n    driverGroupPanel.setLayout( driverGroupGridLayout );\n    GridData driverGroupPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    driverGroupPanel.setLayoutData( driverGroupPanelGridData );\n    props.setLook( driverGroupPanel );\n\n    GridData driverInfoGroupGridData = new GridData();\n    Label loadedDriverLabel = new Label( driverGroupPanel, SWT.NONE );\n    loadedDriverLabel.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.activeDriver\" ) + \" \" + loadedDriverText );\n    loadedDriverLabel.setLayoutData( driverInfoGroupGridData );\n\n    if ( ( (NamedClusterDialog) getWizard() ).isEditMode() ) {\n      Label originalDriverLabel = new Label( driverGroupPanel, SWT.NONE );\n      originalDriverLabel.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.originalDriver\" ) + \" \" + originalDriverText );\n      originalDriverLabel.setLayoutData( driverInfoGroupGridData );\n      if ( !originalDriverText.equals( loadedDriverText ) ) {\n   
     Label driverMismatchLabel = new Label( driverGroupPanel, SWT.NONE );\n        driverMismatchLabel.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.mismatchedDriver\" ) );\n        driverMismatchLabel.setForeground( getDisplay().getSystemColor( SWT.COLOR_RED ) );\n        driverMismatchLabel.setLayoutData( driverInfoGroupGridData );\n      }\n    }\n  }\n\n  private String getLoadedDriverVersion() {\n    if ( shimIdentifier == null ) {\n      NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n      shimIdentifier = namedClusterDialog.getShimIdentifier();\n    }\n    String version = \"\";\n    if ( shimIdentifier != null ) {\n      version = getVersionForDriver( shimIdentifier );\n    }\n    return version;\n  }\n\n  private String getLoadedDriverVendor() {\n    if ( shimIdentifier == null ) {\n      NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n      shimIdentifier = namedClusterDialog.getShimIdentifier();\n    }\n    String vendor = \"\";\n    if ( shimIdentifier != null ) {\n      vendor = getVendorForDriver( shimIdentifier );\n    }\n    return vendor;\n  }\n\n  private void createSiteXMLFilesGroup() {\n    Group siteXmlFilesGroup = new Group( mainPanel, SWT.NONE );\n    siteXmlFilesGroup.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.siteXmlFiles\" ) );\n    siteXmlFilesGroup.setLayout( new GridLayout( ONE_COLUMN, false ) );\n    GridData siteXmlFilesGroupGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    siteXmlFilesGroup.setLayoutData( siteXmlFilesGroupGridData );\n    props.setLook( siteXmlFilesGroup );\n\n    Composite buttonsPanel = new Composite( siteXmlFilesGroup, SWT.NONE );\n    GridLayout buttonsPanelGridLayout = new GridLayout( TWO_COLUMNS, false );\n    buttonsPanel.setLayout( buttonsPanelGridLayout );\n    GridData buttonsPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    buttonsPanel.setLayoutData( buttonsPanelGridData );\n    props.setLook( 
buttonsPanel );\n\n    Button browseButton = new Button( buttonsPanel, SWT.PUSH );\n    GridData browserButtonGridData = new GridData( SWT.BEGINNING, SWT.FILL, true, false );\n    browseButton.setLayoutData( browserButtonGridData );\n    browseButton.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.browseButton\" ) );\n    Listener browseListener = e -> browse();\n    browseButton.addListener( SWT.Selection, browseListener );\n    props.setLook( browseButton );\n    deleteSiteFilesButton = new Button( buttonsPanel, SWT.PUSH );\n    deleteSiteFilesButton.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.remove\" ) );\n    deleteSiteFilesButton.setToolTipText( BaseMessages.getString( PKG, \"NamedClusterDialog.removeSiteFile\" ) );\n    deleteSiteFilesButton.setEnabled( false );\n    GridData deleteButtonGridData = new GridData( SWT.END, SWT.FILL, true, false );\n    deleteSiteFilesButton.setLayoutData( deleteButtonGridData );\n\n    Listener removeSiteFileListener = e -> removeSelectedSiteFiles();\n    deleteSiteFilesButton.addListener( SWT.Selection, removeSiteFileListener );\n    props.setLook( deleteSiteFilesButton );\n\n    siteFilesTable = new Table( siteXmlFilesGroup, SWT.BORDER | SWT.CHECK );\n    siteFilesTable.setHeaderVisible( true );\n    siteFilesTable.setLinesVisible( true );\n    GridData data = new GridData( SWT.FILL, SWT.FILL, true, true );\n    data.heightHint = 100;\n    siteFilesTable.setLayoutData( data );\n    Listener tableListener = e -> processTableSelection();\n    siteFilesTable.addListener( SWT.Selection, tableListener );\n    props.setLook( siteFilesTable );\n\n    TableColumn fileNameColumn = new TableColumn( siteFilesTable, SWT.NONE );\n    fileNameColumn.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.file\" ) );\n    fileNameColumn.setWidth( 330 );\n    fileNameColumn.setResizable( false );\n  }\n\n  private void processTableSelection() {\n    List<TableItem> selectedSiteFiles = getSelectedSiteFiles();\n   
 deleteSiteFilesButton.setEnabled( !selectedSiteFiles.isEmpty() );\n  }\n\n  private void removeSelectedSiteFiles() {\n    MessageBox warning = new MessageBox( mainPanel.getShell(), SWT.YES | SWT.NO );\n    warning.setMessage( BaseMessages.getString( PKG, \"NamedClusterDialog.siteFileAlert\" ) );\n    int buttonClicked = warning.open();\n    if ( buttonClicked == SWT.YES ) {\n      List<TableItem> selectedSiteFiles = getSelectedSiteFiles();\n      for ( TableItem selectedSiteFile : selectedSiteFiles ) {\n        siteFilesPath.remove( selectedSiteFile.getText() );\n        siteFilesTable.remove( siteFilesTable.indexOf( selectedSiteFile ) );\n      }\n      deleteSiteFilesButton.setEnabled( false );\n      validate();\n    }\n  }\n\n  private List<TableItem> getSelectedSiteFiles() {\n    List<TableItem> selectedSiteFiles = new ArrayList<>();\n    for ( TableItem siteFile : siteFilesTable.getItems() ) {\n      if ( siteFile.getChecked() ) {\n        selectedSiteFiles.add( siteFile );\n      }\n    }\n    return selectedSiteFiles;\n  }\n\n  private void createHdfsGroup() {\n    hdfsGroup = new Group( mainPanel, SWT.NONE );\n    hdfsGroup.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.hdfs\" ) );\n    hdfsGroup.setLayout( new GridLayout( ONE_COLUMN, false ) );\n    GridData hdfsGroupGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    hdfsGroup.setLayoutData( hdfsGroupGridData );\n    props.setLook( hdfsGroup );\n\n    if ( ( (NamedClusterDialog) getWizard() ).getDialogState().equals( \"new-edit\" ) ) {\n      GridData hostNameLabelHdfsGroupGridData = new GridData();\n      hostNameLabelHdfsGroupGridData.widthHint = 400; // Label width\n      createLabel( hdfsGroup, BaseMessages.getString( PKG, \"NamedClusterDialog.hostname\" ),\n        hostNameLabelHdfsGroupGridData, props );\n\n      GridData hostNameTextFieldHdfsGroupdGridData = new GridData();\n      hostNameTextFieldHdfsGroupdGridData.widthHint = 400; // TextField width\n      
hostNameTextFieldHdfsGroup =\n        createText( hdfsGroup, \"\", hostNameTextFieldHdfsGroupdGridData, props, variableSpace, clusterListener );\n\n      GridData portLabelHdfsGroupGridData = new GridData();\n      portLabelHdfsGroupGridData.widthHint = 400; // Label width\n      createLabel( hdfsGroup, BaseMessages.getString( PKG, \"NamedClusterDialog.port\" ), portLabelHdfsGroupGridData,\n        props );\n\n      GridData portTextFieldHdfsGroupGridData = new GridData();\n      portTextFieldHdfsGroupGridData.widthHint = 400; // TextField width\n      portTextFieldHdfsGroup =\n        createText( hdfsGroup, \"\", portTextFieldHdfsGroupGridData, props, variableSpace, clusterListener );\n    }\n\n    Composite userPasswordHdfsGroupPanel = new Composite( hdfsGroup, SWT.NONE );\n    GridLayout userPasswordHdfsGroupGridLayout = new GridLayout( TWO_COLUMNS, true );\n    userPasswordHdfsGroupGridLayout.marginWidth = 0;\n    userPasswordHdfsGroupPanel.setLayout( userPasswordHdfsGroupGridLayout );\n    GridData userPasswordHdfsGroupPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    userPasswordHdfsGroupPanel.setLayoutData( userPasswordHdfsGroupPanelGridData );\n    props.setLook( userPasswordHdfsGroupPanel );\n\n    GridData userNameLabelHdfsGroupGridData = new GridData( SWT.BEGINNING, SWT.FILL, true, false );\n    createLabel( userPasswordHdfsGroupPanel, BaseMessages.getString( PKG, \"NamedClusterDialog.username\" ),\n      userNameLabelHdfsGroupGridData, props );\n\n    GridData passwordLabelHdfsGroupGridData = new GridData();\n    createLabel( userPasswordHdfsGroupPanel, BaseMessages.getString( PKG, \"NamedClusterDialog.password\" ),\n      passwordLabelHdfsGroupGridData, props );\n\n    GridData userNameTextFieldHdfsGroupGridData = new GridData();\n    userNameTextFieldHdfsGroupGridData.widthHint = 197; // TextField width\n    userNameTextFieldHdfsGroup =\n      createText( userPasswordHdfsGroupPanel, \"\", userNameTextFieldHdfsGroupGridData, props, 
variableSpace,\n        clusterListener );\n\n    GridData passwordTextFieldHdfsGroupGridData = new GridData();\n    passwordTextFieldHdfsGroupGridData.widthHint = 197; // TextField width\n    passwordTextFieldHdfsGroup =\n      createText( userPasswordHdfsGroupPanel, \"\", passwordTextFieldHdfsGroupGridData, props, variableSpace,\n        clusterListener );\n    passwordTextFieldHdfsGroup.setEchoChar( '*' );\n    mainPanel.pack();\n  }\n\n  private void createJobTrackerGroup() {\n    jobTrackerGroup = new Group( mainPanel, SWT.NONE );\n    jobTrackerGroup.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.jobTracker\" ) );\n    jobTrackerGroup.setLayout( new GridLayout( ONE_COLUMN, false ) );\n    GridData jobTrackerGroupGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    jobTrackerGroup.setLayoutData( jobTrackerGroupGridData );\n    props.setLook( jobTrackerGroup );\n\n    GridData hostNameLabelJobTrackerGroupGridData = new GridData();\n    hostNameLabelJobTrackerGroupGridData.widthHint = 400; // Label width\n    createLabel( jobTrackerGroup, BaseMessages.getString( PKG, \"NamedClusterDialog.hostname\" ),\n      hostNameLabelJobTrackerGroupGridData, props );\n\n    GridData hostNameTextFieldJobTrackerGroupdGridData = new GridData();\n    hostNameTextFieldJobTrackerGroupdGridData.widthHint = 400; // TextField width\n    hostNameTextFieldJobTrackerGroup =\n      createText( jobTrackerGroup, \"\", hostNameTextFieldJobTrackerGroupdGridData, props, variableSpace,\n        clusterListener );\n\n    GridData portLabelJobTrackerGroupGridData = new GridData();\n    portLabelJobTrackerGroupGridData.widthHint = 400; // Label width\n    createLabel( jobTrackerGroup, BaseMessages.getString( PKG, \"NamedClusterDialog.port\" ),\n      portLabelJobTrackerGroupGridData, props );\n\n    GridData portTextFieldJobTrackerGroupGridData = new GridData();\n    portTextFieldJobTrackerGroupGridData.widthHint = 400; // TextField width\n    portTextFieldJobTrackerGroup 
=\n      createText( jobTrackerGroup, \"\", portTextFieldJobTrackerGroupGridData, props, variableSpace, clusterListener );\n    mainPanel.pack();\n  }\n\n  private void createZooKeeperGroup() {\n    zooKeeperGroup = new Group( mainPanel, SWT.NONE );\n    zooKeeperGroup.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.zooKeeper\" ) );\n    zooKeeperGroup.setLayout( new GridLayout( ONE_COLUMN, false ) );\n    GridData zooKeeperGroupGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    zooKeeperGroup.setLayoutData( zooKeeperGroupGridData );\n    props.setLook( zooKeeperGroup );\n\n    GridData hostNameLabelZooKeeperGroupGridData = new GridData();\n    hostNameLabelZooKeeperGroupGridData.widthHint = 400; // Label width\n    createLabel( zooKeeperGroup, BaseMessages.getString( PKG, \"NamedClusterDialog.hostname\" ),\n      hostNameLabelZooKeeperGroupGridData, props );\n\n    GridData hostNameTextFieldZooKeeperGroupdGridData = new GridData();\n    hostNameTextFieldZooKeeperGroupdGridData.widthHint = 400; // TextField width\n    hostNameTextFieldZooKeeperGroup =\n      createText( zooKeeperGroup, \"\", hostNameTextFieldZooKeeperGroupdGridData, props, variableSpace, clusterListener );\n\n    GridData portLabelZooKeeperGroupGridData = new GridData();\n    portLabelZooKeeperGroupGridData.widthHint = 400; // Label width\n    createLabel( zooKeeperGroup, BaseMessages.getString( PKG, \"NamedClusterDialog.port\" ),\n      portLabelZooKeeperGroupGridData, props );\n\n    GridData portTextFieldZooKeeperGroupGridData = new GridData();\n    portTextFieldZooKeeperGroupGridData.widthHint = 400; // TextField width\n    portTextFieldZooKeeperGroup =\n      createText( zooKeeperGroup, \"\", portTextFieldZooKeeperGroupGridData, props, variableSpace, clusterListener );\n    mainPanel.pack();\n  }\n\n  private void createOozieGroup() {\n    oozieGroup = new Group( mainPanel, SWT.NONE );\n    oozieGroup.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.oozie\" 
) );\n    oozieGroup.setLayout( new GridLayout( ONE_COLUMN, false ) );\n    GridData oozieGroupGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    oozieGroup.setLayoutData( oozieGroupGridData );\n    props.setLook( oozieGroup );\n\n    GridData hostNameLabelOozieGroupGridData = new GridData();\n    hostNameLabelOozieGroupGridData.widthHint = 400; // Label width\n    createLabel( oozieGroup, BaseMessages.getString( PKG, \"NamedClusterDialog.hostname\" ),\n      hostNameLabelOozieGroupGridData, props );\n\n    GridData hostNameTextFieldOozieGroupdGridData = new GridData();\n    hostNameTextFieldOozieGroupdGridData.widthHint = 400; // TextField width\n    hostNameTextFieldOozieGroup =\n      createText( oozieGroup, \"\", hostNameTextFieldOozieGroupdGridData, props, variableSpace, clusterListener );\n    mainPanel.pack();\n  }\n\n  private void createKafkaGroup() {\n    kafkaGroup = new Group( mainPanel, SWT.NONE );\n    kafkaGroup.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.kafka\" ) );\n    kafkaGroup.setLayout( new GridLayout( ONE_COLUMN, false ) );\n    GridData kafkaGroupGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    kafkaGroup.setLayoutData( kafkaGroupGridData );\n    props.setLook( kafkaGroup );\n\n    GridData hostNameLabelKafkaGroupGridData = new GridData();\n    hostNameLabelKafkaGroupGridData.widthHint = 400; // Label width\n    createLabel( kafkaGroup, BaseMessages.getString( PKG, \"NamedClusterDialog.bootstrapServers\" ),\n      hostNameLabelKafkaGroupGridData, props );\n\n    GridData hostNameTextFieldKafkaGroupdGridData = new GridData();\n    hostNameTextFieldKafkaGroupdGridData.widthHint = 400; // TextField width\n    hostNameTextFieldKafkaGroup =\n      createText( kafkaGroup, \"\", hostNameTextFieldKafkaGroupdGridData, props, variableSpace, clusterListener );\n    mainPanel.pack();\n  }\n\n  private void createFiller() {\n    fillerComposite = new Composite( mainPanel, SWT.NONE );\n    props.setLook( 
fillerComposite );\n    mainPanel.pack();\n  }\n\n  private void browse() {\n    FileDialog dialog = new FileDialog( mainPanel.getShell(), SWT.MULTI );\n    dialog.open();\n    for ( String fileName : dialog.getFileNames() ) {\n      addSiteFileToTable( fileName );\n      siteFilesPath.put( fileName, dialog.getFilterPath() + File.separator );\n    }\n    if ( dialog.getFileNames().length > 0 ) {\n      validate();\n    }\n  }\n\n\n  private void validate() {\n    thinNameClusterModel.setName( nameOfNamedCluster.getText() );\n    thinNameClusterModel.setHdfsUsername( userNameTextFieldHdfsGroup.getText() );\n    thinNameClusterModel.setHdfsPassword( passwordTextFieldHdfsGroup.getText() );\n    thinNameClusterModel.setSiteFiles( getTableItems( siteFilesTable.getItems() ) );\n\n    if ( ( (NamedClusterDialog) getWizard() ).getDialogState().equals( \"new-edit\" ) ) {\n      thinNameClusterModel.setHdfsHost( hostNameTextFieldHdfsGroup.getText() );\n      thinNameClusterModel.setHdfsPort( portTextFieldHdfsGroup.getText() );\n      thinNameClusterModel.setJobTrackerPort( portTextFieldJobTrackerGroup.getText() );\n      thinNameClusterModel.setZooKeeperPort( portTextFieldZooKeeperGroup.getText() );\n      thinNameClusterModel.setJobTrackerHost( hostNameTextFieldJobTrackerGroup.getText() );\n      thinNameClusterModel.setZooKeeperHost( hostNameTextFieldZooKeeperGroup.getText() );\n      thinNameClusterModel.setOozieUrl( hostNameTextFieldOozieGroup.getText() );\n      thinNameClusterModel.setKafkaBootstrapServers( hostNameTextFieldKafkaGroup.getText() );\n      setPageComplete( !thinNameClusterModel.getName().isBlank() && !thinNameClusterModel.getHdfsHost().isBlank()\n              && thinNameClusterModel.getName().matches( \"^[a-zA-Z0-9-]+$\" ) );\n    }\n    if ( ( (NamedClusterDialog) getWizard() ).getDialogState().equals( \"import\" ) ) {\n      setPageComplete( !thinNameClusterModel.getName().isBlank()\n        && !thinNameClusterModel.getSiteFiles().isEmpty()\n        
&& thinNameClusterModel.getName().matches( \"^[a-zA-Z0-9-]+$\" ) );\n    }\n  }\n\n  public IWizardPage getNextPage() {\n    boolean nextButtonPressed =\n      \"nextPressed\".equalsIgnoreCase( Thread.currentThread().getStackTrace()[ 2 ].getMethodName() );\n    boolean clusterNameExists =\n      ( (NamedClusterDialog) getWizard() ).clusterNameExists( thinNameClusterModel.getName() );\n    boolean notEditingUsingSameName =\n      !( ( (NamedClusterDialog) getWizard() ).isEditMode() && thinNameClusterModel.getName()\n        .equals( thinNameClusterModel.getOldName() ) );\n    if ( nextButtonPressed && clusterNameExists && notEditingUsingSameName ) {\n      MessageBox box = new MessageBox( mainPanel.getShell(), SWT.YES | SWT.NO | SWT.ICON_QUESTION );\n      box.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.clusterOverwriteTitle\" ) );\n      box.setMessage( BaseMessages.getString( PKG, \"NamedClusterDialog.clusterOverwrite\", thinNameClusterModel.getName() ) );\n      int result = box.open();\n      if ( result != SWT.YES ) {\n        return null;\n      }\n    }\n\n    SecuritySettingsPage securitySettingsPage =\n      (SecuritySettingsPage) getWizard().getPage( SecuritySettingsPage.class.getSimpleName() );\n    securitySettingsPage.initialize( thinNameClusterModel );\n    return securitySettingsPage;\n  }\n\n  private boolean isConnectedToRepo() {\n    NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n    boolean isConnectedToRepo = namedClusterDialog.isConnectedToRepo();\n    if ( isDevMode() ) {\n      isConnectedToRepo = true;\n    }\n    return isConnectedToRepo;\n  }\n\n  public void initialize( ThinNameClusterModel model ) {\n    setTitle( ( (NamedClusterDialog) getWizard() ).isEditMode() ?\n      BaseMessages.getString( PKG, \"NamedClusterDialog.editCluster.title\" ) :\n      ( (NamedClusterDialog) getWizard() ).getDialogState().equals( \"import\" ) ?\n        BaseMessages.getString( PKG, 
\"NamedClusterDialog.importCluster.title\" ) :\n        BaseMessages.getString( PKG, \"NamedClusterDialog.newCluster.title\" ) );\n\n    if ( isConnectedToRepo() ) {\n      setDescription( BaseMessages.getString( PKG, \"NamedClusterDialog.repositoryNotification\" ) );\n    }\n\n    thinNameClusterModel = model;\n    siteFilesPath = new HashMap<>();\n    nameOfNamedCluster.setText( model.getName() );\n\n    setTableItems( model.getSiteFiles() );\n    disposeComponents();\n    createHdfsGroup();\n    userNameTextFieldHdfsGroup.setText( model.getHdfsUsername() );\n    passwordTextFieldHdfsGroup.setText( decodePassword( model.getHdfsPassword() ) );\n    if ( ( (NamedClusterDialog) getWizard() ).getDialogState().equals( \"new-edit\" ) ) {\n      createJobTrackerGroup();\n      createZooKeeperGroup();\n      createOozieGroup();\n      createKafkaGroup();\n      createFiller();\n      hostNameTextFieldHdfsGroup.setText( model.getHdfsHost() );\n      portTextFieldHdfsGroup.setText( model.getHdfsPort() );\n      portTextFieldJobTrackerGroup.setText( model.getJobTrackerPort() );\n      portTextFieldZooKeeperGroup.setText( model.getZooKeeperPort() );\n      hostNameTextFieldJobTrackerGroup.setText( model.getJobTrackerHost() );\n      hostNameTextFieldZooKeeperGroup.setText( model.getZooKeeperHost() );\n      hostNameTextFieldOozieGroup.setText( model.getOozieUrl() );\n      hostNameTextFieldKafkaGroup.setText( model.getKafkaBootstrapServers() );\n    }\n    clusterScrollPanel.setMinSize( mainPanel.computeSize( SWT.DEFAULT, SWT.DEFAULT ) );\n    mainPanel.pack();\n    validate();\n  }\n\n  private void disposeComponents() {\n    if ( hdfsGroup != null ) {\n      hdfsGroup.dispose();\n      hdfsGroup = null;\n    }\n    if ( jobTrackerGroup != null ) {\n      jobTrackerGroup.dispose();\n      jobTrackerGroup = null;\n    }\n    if ( zooKeeperGroup != null ) {\n      zooKeeperGroup.dispose();\n      zooKeeperGroup = null;\n    }\n    if ( oozieGroup != null ) {\n      
oozieGroup.dispose();\n      oozieGroup = null;\n    }\n    if ( kafkaGroup != null ) {\n      kafkaGroup.dispose();\n      kafkaGroup = null;\n    }\n    if ( fillerComposite != null ) {\n      fillerComposite.dispose();\n      fillerComposite = null;\n    }\n    mainPanel.pack();\n  }\n\n  private List<SimpleImmutableEntry<String, String>> getTableItems( TableItem[] tableItems ) {\n    List<SimpleImmutableEntry<String, String>> siteFiles = new ArrayList<>();\n    for ( TableItem tableItem : tableItems ) {\n      String path = siteFilesPath.get( tableItem.getText() );\n      path = path == null ? \"\" : path;\n      siteFiles.add( new SimpleImmutableEntry<>( path, tableItem.getText() ) );\n    }\n    return siteFiles;\n  }\n\n  public IWizardPage getPreviousPage() {\n    return null;\n  }\n\n  public void performHelp() {\n    HelpUtils.openHelpDialog( parent.getShell(), \"\", BaseMessages.getString( PKG, \"NamedClusterDialog.help\" ), \"\" );\n  }\n\n  private void setTableItems( List<SimpleImmutableEntry<String, String>> siteFiles ) {\n    siteFilesTable.removeAll();\n    for ( SimpleImmutableEntry<String, String> siteFile : siteFiles ) {\n      addSiteFileToTable( siteFile.getValue() );\n    }\n  }\n\n  private void addSiteFileToTable( String fileName ) {\n    TableItem item = new TableItem( siteFilesTable, SWT.NONE );\n    item.setText( 0, fileName );\n  }\n\n  private boolean isDevMode() {\n    NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n    return namedClusterDialog.isDevMode();\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/wizard/pages/KerberosSettingsPage.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.pages;\n\nimport org.eclipse.jface.wizard.IWizardPage;\nimport org.eclipse.jface.wizard.WizardPage;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.CCombo;\nimport org.eclipse.swt.layout.GridData;\nimport org.eclipse.swt.layout.GridLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.FileDialog;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.NamedClusterDialog;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.model.ThinNameClusterModel;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.util.HelpUtils;\n\nimport java.io.File;\nimport java.util.AbstractMap.SimpleImmutableEntry;\nimport java.util.List;\nimport java.util.stream.Collectors;\n\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.ONE_COLUMN;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.TWO_COLUMNS;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.createLabel;\nimport static 
org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.createText;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.decodePassword;\n\npublic class KerberosSettingsPage extends WizardPage {\n\n  private PropsUI props;\n  private Composite parent;\n  private Composite mainPanel;\n  private Composite passwordAuthenticationPanel;\n  private Composite keytabAuthenticationPanel;\n  private CCombo securityMethodCombo;\n  private TextVar authenticationUserNameTextField;\n  private TextVar authenticationPasswordTextField;\n  private Text authenticationKeytabText;\n  private TextVar impersonationUserNameTextField;\n  private TextVar impersonationPasswordTextField;\n  private Text impersonationKeytabText;\n  private ThinNameClusterModel thinNameClusterModel;\n  private final Listener clusterListener = e -> validate();\n  private final VariableSpace variableSpace;\n  private final String password = \"Password\";\n  private final String keytab = \"Keytab\";\n  private final String NO_FILE_SELECTED = BaseMessages.getString( PKG, \"NamedClusterDialog.noFileSelected\" );\n  private final String fileSeparator = System.getProperty( \"file.separator\" );\n\n  private static final Class<?> PKG = KerberosSettingsPage.class;\n\n  public KerberosSettingsPage( VariableSpace variables, ThinNameClusterModel model ) {\n    super( KerberosSettingsPage.class.getSimpleName() );\n    variableSpace = variables;\n    thinNameClusterModel = model;\n    setPageComplete( false );\n  }\n\n  public void createControl( Composite composite ) {\n    parent = new Composite( composite, SWT.NONE );\n    props = PropsUI.getInstance();\n    props.setLook( parent );\n    GridLayout gridLayout = new GridLayout( ONE_COLUMN, false );\n    parent.setLayout( gridLayout );\n\n    Composite basePanel = new Composite( parent, SWT.NONE );\n\n    //START OF MAIN LAYOUT\n    GridLayout baseGridLayout = new GridLayout( ONE_COLUMN, 
false );\n    baseGridLayout.marginWidth = 60; //TO CENTER CONTENTS\n    baseGridLayout.marginTop = 10; //TO CENTER CONTENTS\n    baseGridLayout.marginBottom = 30;\n    baseGridLayout.marginLeft = 20;\n    basePanel.setLayout( baseGridLayout );\n    GridData basePanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    basePanel.setLayoutData( basePanelGridData );\n    props.setLook( basePanel );\n    //END OF MAIN LAYOUT\n\n    mainPanel = new Composite( basePanel, SWT.NONE );\n    mainPanel.setLayout( new GridLayout( ONE_COLUMN, false ) );\n    GridData mainPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    mainPanelGridData.heightHint = 510; //Height of the panel (WILL NEED TO ADJUST)\n    mainPanel.setLayoutData( mainPanelGridData );\n    props.setLook( mainPanel );\n\n    GridData securityMethodLableGridData = new GridData();\n    securityMethodLableGridData.widthHint = 400; // Label width\n    createLabel( mainPanel, BaseMessages.getString( PKG, \"NamedClusterDialog.securityMethod\" ),\n      securityMethodLableGridData,\n      props );\n\n    GridData securityMethodComboGridData = new GridData();\n    securityMethodComboGridData.widthHint = 400; // TextField width\n    securityMethodCombo = new CCombo( mainPanel, SWT.SINGLE | SWT.READ_ONLY | SWT.BORDER );\n    securityMethodCombo.setLayoutData( securityMethodComboGridData );\n    securityMethodCombo.add( password );\n    securityMethodCombo.add( keytab );\n\n    Listener securityMethodComboListener = e -> displaySecurityMethodFields();\n    securityMethodCombo.addListener( SWT.Selection, securityMethodComboListener );\n    props.setLook( securityMethodCombo );\n    setControl( parent );\n\n    initialize( thinNameClusterModel );\n  }\n\n  private void displaySecurityMethodFields() {\n    if ( securityMethodCombo.getText().equals( password ) ) {\n      createPasswordAuthenticationFields();\n      updatePasswordFields( thinNameClusterModel );\n    }\n    if ( 
securityMethodCombo.getText().equals( keytab ) ) {\n      createKeytabAuthenticationFields();\n      updateKeytabFields( thinNameClusterModel );\n    }\n    validate();\n  }\n\n  private void disposeComponents() {\n    if ( passwordAuthenticationPanel != null ) {\n      passwordAuthenticationPanel.dispose();\n      passwordAuthenticationPanel = null;\n    }\n    if ( keytabAuthenticationPanel != null ) {\n      keytabAuthenticationPanel.dispose();\n      keytabAuthenticationPanel = null;\n    }\n    mainPanel.pack();\n  }\n\n  private void createPasswordAuthenticationFields() {\n    disposeComponents();\n    passwordAuthenticationPanel = new Composite( mainPanel, SWT.NONE );\n    GridLayout authenticationPanelGridLayout = new GridLayout( TWO_COLUMNS, true );\n    authenticationPanelGridLayout.marginWidth = 0;\n    passwordAuthenticationPanel.setLayout( authenticationPanelGridLayout );\n    GridData authenticationPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    passwordAuthenticationPanel.setLayoutData( authenticationPanelGridData );\n    props.setLook( passwordAuthenticationPanel );\n\n    GridData authenticationUsernameGridData = new GridData( SWT.BEGINNING, SWT.FILL, true, false );\n    createLabel( passwordAuthenticationPanel,\n      BaseMessages.getString( PKG, \"NamedClusterDialog.authenticationUsername\" ),\n      authenticationUsernameGridData, props );\n\n    GridData authenticationPasswordGridData = new GridData();\n    createLabel( passwordAuthenticationPanel, BaseMessages.getString( PKG, \"NamedClusterDialog.password\" ),\n      authenticationPasswordGridData, props );\n\n    GridData authenticationUserNameTextFieldGridData = new GridData();\n    authenticationUserNameTextFieldGridData.widthHint = Const.isLinux() ? 
197 : 200; // TextField width\n    authenticationUserNameTextField =\n      createText( passwordAuthenticationPanel, \"\", authenticationUserNameTextFieldGridData, props, variableSpace,\n        clusterListener );\n\n    GridData authenticationPasswordTextFieldGroupGridData = new GridData();\n    authenticationPasswordTextFieldGroupGridData.widthHint = Const.isLinux() ? 197 : 200; // TextField width\n    authenticationPasswordTextField =\n      createText( passwordAuthenticationPanel, \"\", authenticationPasswordTextFieldGroupGridData, props, variableSpace,\n        clusterListener );\n    authenticationPasswordTextField.setEchoChar( '*' );\n\n    if ( isConnectedToRepo() ) {\n      GridData impersonationUsernameGridData = new GridData( SWT.BEGINNING, SWT.FILL, true, false );\n      createLabel( passwordAuthenticationPanel,\n        BaseMessages.getString( PKG, \"NamedClusterDialog.impersonationUsername\" ),\n        impersonationUsernameGridData, props );\n\n      GridData impersonationPasswordGridData = new GridData();\n      createLabel( passwordAuthenticationPanel, BaseMessages.getString( PKG, \"NamedClusterDialog.password\" ),\n        impersonationPasswordGridData, props );\n\n      GridData impersonationUserNameTextFieldGridData = new GridData();\n      impersonationUserNameTextFieldGridData.widthHint = Const.isLinux() ? 197 : 200; // TextField width\n      impersonationUserNameTextField =\n        createText( passwordAuthenticationPanel, \"\", impersonationUserNameTextFieldGridData, props, variableSpace,\n          clusterListener );\n\n      GridData impersonationPasswordTextFieldGroupGridData = new GridData();\n      impersonationPasswordTextFieldGroupGridData.widthHint = Const.isLinux() ? 
197 : 200; // TextField width\n      impersonationPasswordTextField =\n        createText( passwordAuthenticationPanel, \"\", impersonationPasswordTextFieldGroupGridData, props, variableSpace,\n          clusterListener );\n      impersonationPasswordTextField.setEchoChar( '*' );\n    }\n\n    mainPanel.pack();\n  }\n\n  private void createKeytabAuthenticationFields() {\n    disposeComponents();\n    keytabAuthenticationPanel = new Composite( mainPanel, SWT.NONE );\n    GridLayout authenticationPanelGridLayout = new GridLayout( ONE_COLUMN, true );\n    authenticationPanelGridLayout.marginWidth = 0;\n    keytabAuthenticationPanel.setLayout( authenticationPanelGridLayout );\n    GridData authenticationPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    keytabAuthenticationPanel.setLayoutData( authenticationPanelGridData );\n    props.setLook( keytabAuthenticationPanel );\n\n    GridData authenticationUsernameGridData = new GridData( SWT.BEGINNING, SWT.FILL, true, false );\n    createLabel( keytabAuthenticationPanel,\n      BaseMessages.getString( PKG, \"NamedClusterDialog.authenticationUsername\" ),\n      authenticationUsernameGridData, props );\n\n    GridData authenticationUserNameTextFieldGridData = new GridData();\n    authenticationUserNameTextFieldGridData.widthHint = Const.isLinux() ? 
400 : 405; // TextField width\n    authenticationUserNameTextField =\n      createText( keytabAuthenticationPanel, \"\", authenticationUserNameTextFieldGridData, props, variableSpace,\n        clusterListener );\n\n    GridData authenticationPasswordGridData = new GridData();\n    createLabel( keytabAuthenticationPanel, BaseMessages.getString( PKG, \"NamedClusterDialog.authenticationKeytab\" ),\n      authenticationPasswordGridData, props );\n\n    Composite authenticationKeytabPanel = new Composite( keytabAuthenticationPanel, SWT.NONE );\n    GridLayout authenticationKeytabPanelGridLayout = new GridLayout( TWO_COLUMNS, false );\n    authenticationKeytabPanelGridLayout.marginWidth = 0;\n    authenticationKeytabPanel.setLayout( authenticationKeytabPanelGridLayout );\n    GridData authenticationKeytabPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    authenticationKeytabPanel.setLayoutData( authenticationKeytabPanelGridData );\n    props.setLook( authenticationKeytabPanel );\n\n    authenticationKeytabText = new Text( authenticationKeytabPanel, SWT.BORDER );\n    authenticationKeytabText.setEditable( false );\n    GridData authenticationKeytabTextGridData = new GridData();\n    authenticationKeytabTextGridData.widthHint = Const.isLinux() ? 
310 : 341;\n    authenticationKeytabText.setLayoutData( authenticationKeytabTextGridData );\n    props.setLook( authenticationKeytabText );\n\n    Button authenticationBrowseButton = new Button( authenticationKeytabPanel, SWT.PUSH );\n    authenticationBrowseButton.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.browse\" ) );\n    props.setLook( authenticationBrowseButton );\n    Listener authenticationBrowseListener = e -> authenticationBrowse();\n    authenticationBrowseButton.addListener( SWT.Selection, authenticationBrowseListener );\n\n    if ( isConnectedToRepo() ) {\n      GridData impersonationUsernameGridData = new GridData( SWT.BEGINNING, SWT.FILL, true, false );\n      createLabel( keytabAuthenticationPanel,\n        BaseMessages.getString( PKG, \"NamedClusterDialog.impersonationUsername\" ),\n        impersonationUsernameGridData, props );\n\n      GridData impersonationUserNameTextFieldGridData = new GridData();\n      impersonationUserNameTextFieldGridData.widthHint = Const.isLinux() ? 
400 : 405; // TextField width\n      impersonationUserNameTextField =\n        createText( keytabAuthenticationPanel, \"\", impersonationUserNameTextFieldGridData, props, variableSpace,\n          clusterListener );\n\n      GridData impersonationPasswordGridData = new GridData();\n      createLabel( keytabAuthenticationPanel, BaseMessages.getString( PKG, \"NamedClusterDialog.impersonationKeytab\" ),\n        impersonationPasswordGridData, props );\n\n      Composite impersonationKeytabPanel = new Composite( keytabAuthenticationPanel, SWT.NONE );\n      GridLayout impersonationKeytabPanelGridLayout = new GridLayout( TWO_COLUMNS, false );\n      impersonationKeytabPanelGridLayout.marginWidth = 0;\n      impersonationKeytabPanel.setLayout( impersonationKeytabPanelGridLayout );\n      GridData impersonationKeytabPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n      impersonationKeytabPanel.setLayoutData( impersonationKeytabPanelGridData );\n      props.setLook( impersonationKeytabPanel );\n\n      impersonationKeytabText = new Text( impersonationKeytabPanel, SWT.BORDER );\n      impersonationKeytabText.setEditable( false );\n      GridData impersonationKeytabTextGridData = new GridData();\n      impersonationKeytabTextGridData.widthHint = Const.isLinux() ? 
310 : 341;\n      impersonationKeytabText.setLayoutData( impersonationKeytabTextGridData );\n      props.setLook( impersonationKeytabText );\n\n      Button impersonationBrowseButton = new Button( impersonationKeytabPanel, SWT.PUSH );\n      impersonationBrowseButton.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.browse\" ) );\n      props.setLook( impersonationBrowseButton );\n      Listener impersonationBrowseListener = e -> impersonationBrowse();\n      impersonationBrowseButton.addListener( SWT.Selection, impersonationBrowseListener );\n\n      Button clearImpersonationButton = new Button( impersonationKeytabPanel, SWT.PUSH );\n      clearImpersonationButton.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.clear\" ) );\n      props.setLook( clearImpersonationButton );\n      Listener clearImpersonationListener = e -> clearImpersonation();\n      clearImpersonationButton.addListener( SWT.Selection, clearImpersonationListener );\n    }\n    mainPanel.pack();\n  }\n\n  private void validate() {\n    if ( securityMethodCombo.getText().equals( password ) ) {\n      thinNameClusterModel.setKerberosSubType( password );\n      thinNameClusterModel.setKerberosAuthenticationUsername( authenticationUserNameTextField.getText() );\n      thinNameClusterModel.setKerberosAuthenticationPassword( authenticationPasswordTextField.getText() );\n      if ( isConnectedToRepo() ) {\n        thinNameClusterModel.setKerberosImpersonationUsername( impersonationUserNameTextField.getText() );\n        thinNameClusterModel.setKerberosImpersonationPassword( impersonationPasswordTextField.getText() );\n        setPageComplete( ( !thinNameClusterModel.getKerberosAuthenticationUsername().isBlank()\n          && !thinNameClusterModel.getKerberosAuthenticationPassword().isBlank() )\n          ||\n          ( !thinNameClusterModel.getKerberosImpersonationUsername().isBlank()\n            && !thinNameClusterModel.getKerberosImpersonationPassword().isBlank() ) );\n      } else 
{\n        setPageComplete( !thinNameClusterModel.getKerberosAuthenticationUsername().isBlank()\n          && !thinNameClusterModel.getKerberosAuthenticationPassword().isBlank() );\n      }\n    }\n    if ( securityMethodCombo.getText().equals( keytab ) ) {\n      thinNameClusterModel.setKerberosSubType( keytab );\n      thinNameClusterModel.setKerberosAuthenticationUsername( authenticationUserNameTextField.getText() );\n      thinNameClusterModel.setKeytabAuthFile( authenticationKeytabText.getData()\n        .equals( NO_FILE_SELECTED ) ? \"\" :\n        (String) authenticationKeytabText.getData() );\n      if ( !thinNameClusterModel.getKeytabAuthFile().isBlank() ) {\n        List<SimpleImmutableEntry<String, String>> siteFiles = thinNameClusterModel.getSiteFiles();\n        List<SimpleImmutableEntry<String, String>> result =\n          siteFiles.stream().filter( siteFile -> siteFile.getValue().equals( \"keytabAuthFile\" ) ).collect(\n            Collectors.toList() );\n        if ( !result.isEmpty() ) {\n          siteFiles.remove( result.get( 0 ) );\n        }\n        siteFiles.add( new SimpleImmutableEntry<>( thinNameClusterModel.getKeytabAuthFile(), \"keytabAuthFile\" ) );\n      }\n      if ( isConnectedToRepo() ) {\n        thinNameClusterModel.setKerberosImpersonationUsername( impersonationUserNameTextField.getText() );\n        thinNameClusterModel.setKeytabImpFile( impersonationKeytabText.getData()\n          .equals( NO_FILE_SELECTED ) ? 
\"\" :\n          (String) impersonationKeytabText.getData() );\n        List<SimpleImmutableEntry<String, String>> siteFiles = thinNameClusterModel.getSiteFiles();\n        List<SimpleImmutableEntry<String, String>> result =\n          siteFiles.stream().filter( siteFile -> siteFile.getValue().equals( \"keytabImpFile\" ) ).collect(\n            Collectors.toList() );\n        if ( !result.isEmpty() ) {\n          siteFiles.remove( result.get( 0 ) );\n        }\n        if ( !thinNameClusterModel.getKeytabImpFile().isBlank() ) {\n          siteFiles.add( new SimpleImmutableEntry<>( thinNameClusterModel.getKeytabImpFile(), \"keytabImpFile\" ) );\n        }\n        setPageComplete( !thinNameClusterModel.getKeytabAuthFile().isBlank() );\n      } else {\n        setPageComplete( !thinNameClusterModel.getKeytabAuthFile().isBlank() );\n      }\n    }\n  }\n\n  // FOR DEV MODE ONLY\n  private boolean isDevMode() {\n    NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n    return namedClusterDialog.isDevMode();\n  }\n  // FOR DEV MODE ONLY\n\n  private boolean isConnectedToRepo() {\n    NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n    boolean isConnectedToRepo = namedClusterDialog.isConnectedToRepo();\n    if ( isDevMode() ) {\n      isConnectedToRepo = true;\n    }\n    return isConnectedToRepo;\n  }\n\n  public void initialize( ThinNameClusterModel model ) {\n    setTitle( ( (NamedClusterDialog) getWizard() ).isEditMode() ?\n      BaseMessages.getString( PKG, \"NamedClusterDialog.editCluster.title\" ) :\n      ( (NamedClusterDialog) getWizard() ).getDialogState().equals( \"import\" ) ?\n        BaseMessages.getString( PKG, \"NamedClusterDialog.importCluster.title\" ) :\n        BaseMessages.getString( PKG, \"NamedClusterDialog.newCluster.title\" ) );\n\n    if ( isConnectedToRepo() ) {\n      setDescription( BaseMessages.getString( PKG, \"NamedClusterDialog.repositoryNotification\" ) );\n    }\n\n    
thinNameClusterModel = model;\n    securityMethodCombo.setText( model.getKerberosSubType() );\n    if ( securityMethodCombo.getText().equals( password ) ) {\n      createPasswordAuthenticationFields();\n      updatePasswordFields( model );\n    }\n    if ( securityMethodCombo.getText().equals( keytab ) ) {\n      createKeytabAuthenticationFields();\n      updateKeytabFields( model );\n    }\n    validate();\n  }\n\n  private void updatePasswordFields( ThinNameClusterModel model ) {\n    authenticationUserNameTextField.setText( model.getKerberosAuthenticationUsername() );\n    authenticationPasswordTextField.setText( decodePassword( model.getKerberosAuthenticationPassword() ) );\n    if ( isConnectedToRepo() ) {\n      impersonationUserNameTextField.setText( model.getKerberosImpersonationUsername() );\n      impersonationPasswordTextField.setText( decodePassword( model.getKerberosImpersonationPassword() ) );\n    }\n  }\n\n  private void updateKeytabFields( ThinNameClusterModel model ) {\n    authenticationKeytabText.setText(\n      model.getKeytabAuthFile().isBlank() ? NO_FILE_SELECTED :\n        model.getKeytabAuthFile().substring( model.getKeytabAuthFile().lastIndexOf( fileSeparator ) + 1 ) );\n    authenticationKeytabText.setData(\n      model.getKeytabAuthFile().isBlank() ? NO_FILE_SELECTED : model.getKeytabAuthFile() );\n    authenticationUserNameTextField.setText( model.getKerberosAuthenticationUsername() );\n    if ( isConnectedToRepo() ) {\n      impersonationKeytabText.setText(\n        model.getKeytabImpFile().isBlank() ? NO_FILE_SELECTED :\n          model.getKeytabImpFile().substring( model.getKeytabImpFile().lastIndexOf( fileSeparator ) + 1 ) );\n      impersonationKeytabText.setData(\n        model.getKeytabImpFile().isBlank() ? 
NO_FILE_SELECTED : model.getKeytabImpFile() );\n      impersonationUserNameTextField.setText( model.getKerberosImpersonationUsername() );\n    }\n  }\n\n  private void authenticationBrowse() {\n    FileDialog dialog = new FileDialog( mainPanel.getShell(), SWT.OPEN );\n    String path = dialog.open();\n    if ( path != null ) {\n      File file = new File( path );\n      if ( file.isFile() ) {\n        authenticationKeytabText.setText( file.toString() );\n        authenticationKeytabText.setData( file.toString() );\n        validate();\n      }\n    }\n  }\n\n  private void impersonationBrowse() {\n    FileDialog dialog = new FileDialog( mainPanel.getShell(), SWT.OPEN );\n    String path = dialog.open();\n    if ( path != null ) {\n      File file = new File( path );\n      if ( file.isFile() ) {\n        impersonationKeytabText.setText( file.toString() );\n        impersonationKeytabText.setData( file.toString() );\n        validate();\n      }\n    }\n  }\n\n  private void clearImpersonation() {\n    impersonationKeytabText.setText( \"\" );\n    impersonationKeytabText.setData( NO_FILE_SELECTED );\n    validate();\n  }\n\n  public IWizardPage getNextPage() {\n    return null;\n  }\n\n  public void performHelp() {\n    HelpUtils.openHelpDialog( parent.getShell(), \"\", BaseMessages.getString( PKG, \"NamedClusterDialog.help\" ), \"\" );\n  }\n}"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/wizard/pages/KnoxSettingsPage.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.pages;\n\nimport org.eclipse.jface.wizard.IWizardPage;\nimport org.eclipse.jface.wizard.WizardPage;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.layout.GridData;\nimport org.eclipse.swt.layout.GridLayout;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.NamedClusterDialog;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.model.ThinNameClusterModel;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.util.HelpUtils;\n\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.ONE_COLUMN;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.TWO_COLUMNS;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.createLabel;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.createText;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.decodePassword;\n\npublic class KnoxSettingsPage extends WizardPage {\n\n  private PropsUI props;\n  private Composite basePanel;\n  private 
Composite parent;\n  private Composite mainPanel;\n  private Text gatewayURLTextField;\n  private TextVar gatewayUsernameTextfield;\n  private TextVar gatewayPasswordTextField;\n  private final VariableSpace variableSpace;\n  private final ThinNameClusterModel thinNameClusterModel;\n  private final Listener clusterListener = e -> validate();\n\n  private static final Class<?> PKG = KnoxSettingsPage.class;\n\n  public KnoxSettingsPage( VariableSpace variables, ThinNameClusterModel model ) {\n    super( KnoxSettingsPage.class.getSimpleName() );\n    thinNameClusterModel = model;\n    variableSpace = variables;\n  }\n\n  public void createControl( Composite composite ) {\n    parent = new Composite( composite, SWT.NONE );\n    props = PropsUI.getInstance();\n    props.setLook( parent );\n    GridLayout gridLayout = new GridLayout( ONE_COLUMN, false );\n    parent.setLayout( gridLayout );\n    basePanel = new Composite( parent, SWT.NONE );\n\n    //START OF MAIN LAYOUT\n    GridLayout baseGridLayout = new GridLayout( ONE_COLUMN, false );\n    baseGridLayout.marginWidth = 60; //TO CENTER CONTENTS\n    baseGridLayout.marginTop = 10; //TO CENTER CONTENTS\n    baseGridLayout.marginBottom = 30;\n    baseGridLayout.marginLeft = 20;\n    basePanel.setLayout( baseGridLayout );\n    GridData basePanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    basePanel.setLayoutData( basePanelGridData );\n    props.setLook( basePanel );\n    //END OF MAIN LAYOUT\n\n    mainPanel = new Composite( basePanel, SWT.NONE );\n    mainPanel.setLayout( new GridLayout( ONE_COLUMN, false ) );\n    GridData mainPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    mainPanelGridData.heightHint = 510; //Height of the panel (WILL NEED TO ADJUST)\n    mainPanel.setLayoutData( mainPanelGridData );\n    props.setLook( mainPanel );\n\n    GridData gatewayUrlLabelGridData = new GridData();\n    gatewayUrlLabelGridData.widthHint = 400; // Label width\n    createLabel( 
mainPanel, BaseMessages.getString( PKG, \"NamedClusterDialog.gatewayURL\" ),\n      gatewayUrlLabelGridData,\n      props );\n\n    GridData gatewayUrlTextfieldGridData = new GridData();\n    gatewayUrlTextfieldGridData.widthHint = Const.isLinux() ? 380 : 390; // TextField width\n    gatewayURLTextField =\n      new Text( mainPanel, SWT.SINGLE | SWT.LEFT | SWT.BORDER | SWT.PASSWORD );\n    gatewayURLTextField.setText( \"\" );\n    gatewayURLTextField.setLayoutData( gatewayUrlTextfieldGridData );\n    gatewayURLTextField.addListener( SWT.CHANGED, clusterListener );\n    gatewayURLTextField.addListener( SWT.MouseExit, clusterListener );\n    props.setLook( gatewayURLTextField );\n\n    Composite gatewayAuthenticationPanel = new Composite( mainPanel, SWT.NONE );\n    GridLayout authenticationPanelGridLayout = new GridLayout( TWO_COLUMNS, true );\n    authenticationPanelGridLayout.marginWidth = 0;\n    gatewayAuthenticationPanel.setLayout( authenticationPanelGridLayout );\n    GridData authenticationPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    gatewayAuthenticationPanel.setLayoutData( authenticationPanelGridData );\n    props.setLook( gatewayAuthenticationPanel );\n\n    GridData gatewayUsernameLabel = new GridData( SWT.BEGINNING, SWT.FILL, true, false );\n    createLabel( gatewayAuthenticationPanel,\n      BaseMessages.getString( PKG, \"NamedClusterDialog.gatewayUsername\" ),\n      gatewayUsernameLabel, props );\n\n    GridData gatewayPasswordLabel = new GridData();\n    createLabel( gatewayAuthenticationPanel, BaseMessages.getString( PKG, \"NamedClusterDialog.gatewayPassword\" ),\n      gatewayPasswordLabel, props );\n\n    GridData gatewayUsernameTextFieldGridData = new GridData();\n    gatewayUsernameTextFieldGridData.widthHint = Const.isLinux() ? 
197 : 200; // TextField width\n    gatewayUsernameTextfield =\n      createText( gatewayAuthenticationPanel, \"\", gatewayUsernameTextFieldGridData, props, variableSpace,\n        clusterListener );\n\n    GridData gatewayPasswordTextFieldGridData = new GridData();\n    gatewayPasswordTextFieldGridData.widthHint = Const.isLinux() ? 197 : 200; // TextField width\n    gatewayPasswordTextField =\n      createText( gatewayAuthenticationPanel, \"\", gatewayPasswordTextFieldGridData, props, variableSpace,\n        clusterListener );\n    gatewayPasswordTextField.setEchoChar( '*' );\n\n    setControl( parent );\n    initialize( thinNameClusterModel );\n  }\n\n  // FOR DEV MODE ONLY\n  private boolean isDevMode() {\n    NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n    return namedClusterDialog.isDevMode();\n  }\n  // FOR DEV MODE ONLY\n\n  private boolean isConnectedToRepo() {\n    NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n    boolean isConnectedToRepo = namedClusterDialog.isConnectedToRepo();\n    if ( isDevMode() ) {\n      isConnectedToRepo = true;\n    }\n    return isConnectedToRepo;\n  }\n\n  public void initialize( ThinNameClusterModel model ) {\n    setTitle( ( (NamedClusterDialog) getWizard() ).isEditMode() ?\n      BaseMessages.getString( PKG, \"NamedClusterDialog.editCluster.title\" ) :\n      ( (NamedClusterDialog) getWizard() ).getDialogState().equals( \"import\" ) ?\n        BaseMessages.getString( PKG, \"NamedClusterDialog.importCluster.title\" ) :\n        BaseMessages.getString( PKG, \"NamedClusterDialog.newCluster.title\" ) );\n\n    if ( isConnectedToRepo() ) {\n      setDescription( BaseMessages.getString( PKG, \"NamedClusterDialog.repositoryNotification\" ) );\n    }\n\n    gatewayURLTextField.setText( decodePassword( model.getGatewayUrl() ) );\n    gatewayUsernameTextfield.setText( model.getGatewayUsername() );\n    gatewayPasswordTextField.setText( decodePassword( 
model.getGatewayPassword() ) );\n\n    validate();\n  }\n\n  private void validate() {\n    thinNameClusterModel.setGatewayUrl( gatewayURLTextField.getText() );\n    thinNameClusterModel.setGatewayUsername( gatewayUsernameTextfield.getText() );\n    thinNameClusterModel.setGatewayPassword( gatewayPasswordTextField.getText() );\n    setPageComplete(\n      !thinNameClusterModel.getGatewayUrl().isBlank() && !thinNameClusterModel.getGatewayUsername().isBlank()\n        && !thinNameClusterModel.getGatewayPassword().isBlank() );\n  }\n\n  public IWizardPage getNextPage() {\n    return null;\n  }\n\n  public void performHelp() {\n    HelpUtils.openHelpDialog( parent.getShell(), \"\", BaseMessages.getString( PKG, \"NamedClusterDialog.help\" ), \"\" );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/wizard/pages/ReportPage.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.pages;\n\nimport org.eclipse.jface.wizard.IWizardPage;\nimport org.eclipse.jface.wizard.WizardPage;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.graphics.Font;\nimport org.eclipse.swt.graphics.FontData;\nimport org.eclipse.swt.layout.GridData;\nimport org.eclipse.swt.layout.GridLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.NamedClusterDialog;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints.TestCategory;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.model.ThinNameClusterModel;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.util.HelpUtils;\n\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.ONE_COLUMN;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.createLabel;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.createLabelWithStyle;\n\npublic class ReportPage extends WizardPage {\n\n  private PropsUI props;\n  private Composite basePanel;\n  private Composite parent;\n  private Composite mainPanel;\n  private Label statusLabel;\n  private Label 
statusDescriptionLabel;\n  private Label iconLabel;\n  private Button viewTestResultsButton;\n  private Object[] testResults;\n  private ThinNameClusterModel thinNameClusterModel;\n  private static final Class<?> PKG = ReportPage.class;\n  private static final String SUCCESS_IMG = \"images/success.svg\";\n  private static final String FAIL_IMG = \"images/fail.svg\";\n\n  public ReportPage( ThinNameClusterModel model ) {\n    super( ReportPage.class.getSimpleName() );\n    thinNameClusterModel = model;\n  }\n\n  public void createControl( Composite composite ) {\n    parent = new Composite( composite, SWT.NONE );\n    props = PropsUI.getInstance();\n    props.setLook( parent );\n    GridLayout gridLayout = new GridLayout( ONE_COLUMN, false );\n    parent.setLayout( gridLayout );\n    basePanel = new Composite( parent, SWT.NONE );\n\n    //START OF MAIN LAYOUT\n    GridLayout baseGridLayout = new GridLayout( ONE_COLUMN, false );\n    baseGridLayout.marginWidth = 60; //TO CENTER CONTENTS\n    baseGridLayout.marginTop = 10; //TO CENTER CONTENTS\n    baseGridLayout.marginBottom = 30;\n    baseGridLayout.marginLeft = 20;\n    basePanel.setLayout( baseGridLayout );\n    GridData basePanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    basePanel.setLayoutData( basePanelGridData );\n    props.setLook( basePanel );\n    //END OF MAIN LAYOUT\n\n    mainPanel = new Composite( basePanel, SWT.NONE );\n    mainPanel.setLayout( new GridLayout( ONE_COLUMN, false ) );\n    GridData mainPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    mainPanelGridData.heightHint = 510; //Height of the panel (WILL NEED TO ADJUST)\n    mainPanel.setLayoutData( mainPanelGridData );\n    props.setLook( mainPanel );\n\n    GridData iconGridData = new GridData();\n    iconGridData.widthHint = 400; // Label width\n    iconGridData.heightHint = 100; // Label height\n    iconLabel = createLabelWithStyle( mainPanel, \"\", iconGridData, props, SWT.NONE );\n    
iconLabel.setAlignment( SWT.CENTER );\n\n    GridData statusGridData = new GridData();\n    statusGridData.widthHint = 400; // Label width\n    statusGridData.heightHint = 50; // Label height\n    statusLabel = createLabelWithStyle( mainPanel, \"\", statusGridData, props, SWT.NONE );\n    statusLabel.setFont( new Font( statusLabel.getDisplay(), new FontData( \"Arial\", 20, SWT.NONE ) ) );\n    statusLabel.setAlignment( SWT.CENTER );\n\n    GridData statusDescriptionGridData = new GridData();\n    statusDescriptionGridData.widthHint = 400; // Label width\n    statusDescriptionGridData.heightHint = 100; // Label height\n    statusDescriptionLabel = createLabelWithStyle( mainPanel, \"\", statusDescriptionGridData, props, SWT.WRAP );\n    statusDescriptionLabel.setAlignment( SWT.CENTER );\n\n    GridData questionLabelGridData = new GridData();\n    questionLabelGridData.widthHint = 400; // Label width\n    questionLabelGridData.heightHint = 50; // Label height\n    createLabel( mainPanel, BaseMessages.getString( PKG, \"NamedClusterDialog.question\" ), questionLabelGridData,\n      props ).setAlignment( SWT.CENTER );\n\n    Button editClusterButton = new Button( mainPanel, SWT.PUSH );\n    GridData editButtonGridData = new GridData( SWT.BEGINNING, SWT.FILL, true, false );\n    editButtonGridData.widthHint = 155;\n    editButtonGridData.horizontalAlignment = SWT.CENTER;\n    editClusterButton.setLayoutData( editButtonGridData );\n    editClusterButton.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.editCluster\" ) );\n    Listener editClusterListener = e -> editCluster();\n    editClusterButton.addListener( SWT.Selection, editClusterListener );\n    props.setLook( editClusterButton );\n\n    Button newClusterButton = new Button( mainPanel, SWT.PUSH );\n    GridData newButtonGridData = new GridData( SWT.BEGINNING, SWT.FILL, true, false );\n    newButtonGridData.widthHint = 155;\n    newButtonGridData.horizontalAlignment = SWT.CENTER;\n    
newClusterButton.setLayoutData( newButtonGridData );\n\n    newClusterButton.setText( ( (NamedClusterDialog) getWizard() ).getDialogState().equals( \"import\" ) ?\n      BaseMessages.getString( PKG, \"NamedClusterDialog.importNewCluster\" ) :\n      BaseMessages.getString( PKG, \"NamedClusterDialog.createNewCluster\" ) );\n    Listener newClusterListener = e -> createNewCluster();\n    newClusterButton.addListener( SWT.Selection, newClusterListener );\n    props.setLook( newClusterButton );\n\n    viewTestResultsButton = new Button( mainPanel, SWT.PUSH );\n    GridData viewTestResultsButtonGridData = new GridData( SWT.BEGINNING, SWT.FILL, true, false );\n    viewTestResultsButtonGridData.widthHint = 155;\n    viewTestResultsButtonGridData.horizontalAlignment = SWT.CENTER;\n    viewTestResultsButton.setLayoutData( viewTestResultsButtonGridData );\n    viewTestResultsButton.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.viewTestResults\" ) );\n    Listener viewTestResultsListener = e -> viewTestResults();\n    viewTestResultsButton.addListener( SWT.Selection, viewTestResultsListener );\n    props.setLook( viewTestResultsButton );\n\n    setControl( parent );\n    initialize( thinNameClusterModel );\n  }\n\n  public void setTestResult( String status ) {\n    if ( status.equals( BaseMessages.getString( PKG, \"NamedClusterDialog.test.pass\" ) ) ) {\n      statusLabel.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.pass\" ) );\n      statusDescriptionLabel.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.description.pass\" ) );\n      iconLabel.setImage(\n        GUIResource.getInstance().getImage( SUCCESS_IMG, getClass().getClassLoader(), 70, 70 ) );\n      viewTestResultsButton.setVisible( true );\n    } else if ( status.equals( BaseMessages.getString( PKG, \"NamedClusterDialog.test.importFailed\" ) ) ) {\n      statusLabel.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.import.fail\" ) );\n      
statusDescriptionLabel.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.import.fail.description\" ) );\n      iconLabel.setImage(\n        GUIResource.getInstance().getImage( FAIL_IMG, getClass().getClassLoader(), 70, 70 ) );\n      viewTestResultsButton.setVisible( false );\n\n    } else {\n      statusLabel.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.fail\" ) );\n      statusDescriptionLabel.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.fail.description\" ) );\n      iconLabel.setImage(\n        GUIResource.getInstance().getImage( FAIL_IMG, getClass().getClassLoader(), 70, 70 ) );\n      viewTestResultsButton.setVisible( true );\n    }\n    mainPanel.pack();\n  }\n\n  public void setTestResults( Object[] categories ) {\n    testResults = categories;\n    String status = BaseMessages.getString( PKG, \"NamedClusterDialog.test.pass\" );\n    for ( Object category : testResults ) {\n      TestCategory testCategory = (TestCategory) category;\n      if ( !testCategory.getCategoryStatus().equals( status ) && testCategory.isCategoryActive() ) {\n        status = testCategory.getCategoryStatus();\n        break;\n      }\n    }\n    setTestResult( status );\n  }\n\n  // FOR DEV MODE ONLY\n  private boolean isDevMode() {\n    NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n    return namedClusterDialog.isDevMode();\n  }\n  // FOR DEV MODE ONLY\n\n  private boolean isConnectedToRepo() {\n    NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n    boolean isConnectedToRepo = namedClusterDialog.isConnectedToRepo();\n    if ( isDevMode() ) {\n      isConnectedToRepo = true;\n    }\n    return isConnectedToRepo;\n  }\n\n  public void initialize( ThinNameClusterModel model ) {\n    setTitle( ( (NamedClusterDialog) getWizard() ).isEditMode() ?\n      BaseMessages.getString( PKG, \"NamedClusterDialog.editCluster.title\" ) :\n      ( (NamedClusterDialog) getWizard() ).getDialogState().equals( 
\"import\" ) ?\n        BaseMessages.getString( PKG, \"NamedClusterDialog.importCluster.title\" ) :\n        BaseMessages.getString( PKG, \"NamedClusterDialog.newCluster.title\" ) );\n\n    if ( isConnectedToRepo() ) {\n      setDescription( BaseMessages.getString( PKG, \"NamedClusterDialog.repositoryNotification\" ) );\n    }\n\n    thinNameClusterModel = model;\n  }\n\n  private void viewTestResults() {\n    TestResultsPage testResultsPage = (TestResultsPage) getWizard().getPage( TestResultsPage.class.getSimpleName() );\n    testResultsPage.setTestResults( testResults );\n    getContainer().showPage( testResultsPage );\n  }\n\n  private void editCluster() {\n    NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n    namedClusterDialog.editCluster();\n  }\n\n  private void createNewCluster() {\n    NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n    namedClusterDialog.createNewCluster();\n  }\n\n  public IWizardPage getPreviousPage() {\n    return null;\n  }\n\n  public IWizardPage getNextPage() {\n    return null;\n  }\n\n  public void performHelp() {\n    HelpUtils.openHelpDialog( parent.getShell(), \"\", BaseMessages.getString( PKG, \"NamedClusterDialog.help\" ), \"\" );\n  }\n}"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/wizard/pages/SecuritySettingsPage.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.pages;\n\nimport org.eclipse.jface.wizard.IWizardPage;\nimport org.eclipse.jface.wizard.WizardPage;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.layout.GridData;\nimport org.eclipse.swt.layout.GridLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Listener;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.NamedClusterDialog;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.model.ThinNameClusterModel;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.util.HelpUtils;\n\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.ONE_COLUMN;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.createLabel;\n\npublic class SecuritySettingsPage extends WizardPage {\n\n  private PropsUI props;\n  private Button noneButton;\n  private Button kerberosButton;\n  private Button knoxButton;\n  private Composite basePanel;\n  private Composite parent;\n  private Composite mainPanel;\n  private ThinNameClusterModel thinNameClusterModel;\n  private NamedClusterSecurityType securityType;\n  private final Listener securityTypeListener = e -> setSecurityType();\n\n  public enum NamedClusterSecurityType {NONE, KERBEROS, KNOX}\n\n  private static final Class<?> PKG = SecuritySettingsPage.class;\n\n  public SecuritySettingsPage( 
ThinNameClusterModel model ) {\n    super( SecuritySettingsPage.class.getSimpleName() );\n    securityType = NamedClusterSecurityType.NONE;\n    thinNameClusterModel = model;\n  }\n\n  public void createControl( Composite composite ) {\n    parent = new Composite( composite, SWT.NONE );\n    props = PropsUI.getInstance();\n    props.setLook( parent );\n    GridLayout gridLayout = new GridLayout( ONE_COLUMN, false );\n    parent.setLayout( gridLayout );\n    basePanel = new Composite( parent, SWT.NONE );\n\n    //START OF MAIN LAYOUT\n    GridLayout baseGridLayout = new GridLayout( ONE_COLUMN, false );\n    baseGridLayout.marginWidth = 60; //TO CENTER CONTENTS\n    baseGridLayout.marginTop = 10; //TO CENTER CONTENTS\n    baseGridLayout.marginBottom = 30;\n    baseGridLayout.marginLeft = 20;\n    basePanel.setLayout( baseGridLayout );\n    GridData basePanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    basePanel.setLayoutData( basePanelGridData );\n    props.setLook( basePanel );\n    //END OF MAIN LAYOUT\n\n    mainPanel = new Composite( basePanel, SWT.NONE );\n    mainPanel.setLayout( new GridLayout( ONE_COLUMN, false ) );\n    GridData mainPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    mainPanelGridData.heightHint = 510; //Height of the panel (WILL NEED TO ADJUST)\n    mainPanel.setLayoutData( mainPanelGridData );\n    props.setLook( mainPanel );\n\n    GridData clusterNameLabelGridData = new GridData();\n    clusterNameLabelGridData.widthHint = 400; // Label width\n    createLabel( mainPanel, BaseMessages.getString( PKG, \"NamedClusterDialog.security\" ),\n      clusterNameLabelGridData, props );\n\n    noneButton = new Button( mainPanel, SWT.RADIO );\n    noneButton.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.none\" ) );\n    noneButton.addListener( SWT.Selection, securityTypeListener );\n    props.setLook( noneButton );\n\n    kerberosButton = new Button( mainPanel, SWT.RADIO );\n    
kerberosButton.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.kerberos\" ) );\n    kerberosButton.addListener( SWT.Selection, securityTypeListener );\n    props.setLook( kerberosButton );\n\n    knoxButton = new Button( mainPanel, SWT.RADIO );\n    knoxButton.setText( BaseMessages.getString( PKG, \"NamedClusterDialog.knox\" ) );\n    knoxButton.addListener( SWT.Selection, securityTypeListener );\n    props.setLook( knoxButton );\n\n    setControl( parent );\n    initialize( thinNameClusterModel );\n  }\n\n  private void setSecurityType() {\n    if ( noneButton.getSelection() ) {\n      securityType = NamedClusterSecurityType.NONE;\n      thinNameClusterModel.setSecurityType( \"None\" );\n      getContainer().updateButtons();\n    }\n    if ( kerberosButton.getSelection() ) {\n      securityType = NamedClusterSecurityType.KERBEROS;\n      thinNameClusterModel.setSecurityType( \"Kerberos\" );\n      getContainer().updateButtons();\n    }\n    if ( knoxButton.getSelection() ) {\n      securityType = NamedClusterSecurityType.KNOX;\n      thinNameClusterModel.setSecurityType( \"Knox\" );\n      getContainer().updateButtons();\n    }\n  }\n\n  // FOR DEV MODE ONLY\n  private boolean isDevMode() {\n    NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n    return namedClusterDialog.isDevMode();\n  }\n  // FOR DEV MODE ONLY\n\n  private boolean isConnectedToRepo() {\n    NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n    boolean isConnectedToRepo = namedClusterDialog.isConnectedToRepo();\n    if ( isDevMode() ) {\n      isConnectedToRepo = true;\n    }\n    return isConnectedToRepo;\n  }\n\n  public void initialize( ThinNameClusterModel model ) {\n    setTitle( ( (NamedClusterDialog) getWizard() ).isEditMode() ?\n      BaseMessages.getString( PKG, \"NamedClusterDialog.editCluster.title\" ) :\n      ( (NamedClusterDialog) getWizard() ).getDialogState().equals( \"import\" ) ?\n        BaseMessages.getString( 
PKG, \"NamedClusterDialog.importCluster.title\" ) :\n        BaseMessages.getString( PKG, \"NamedClusterDialog.newCluster.title\" ) );\n\n    if ( isConnectedToRepo() ) {\n      setDescription( BaseMessages.getString( PKG, \"NamedClusterDialog.repositoryNotification\" ) );\n    }\n\n    thinNameClusterModel = model;\n    noneButton.setSelection( model.getSecurityType().equals( \"None\" ) );\n    kerberosButton.setSelection( model.getSecurityType().equals( \"Kerberos\" ) );\n    knoxButton.setSelection( model.getSecurityType().equals( \"Knox\" ) );\n    if ( noneButton.getSelection() ) {\n      securityType = NamedClusterSecurityType.NONE;\n    }\n    if ( kerberosButton.getSelection() ) {\n      securityType = NamedClusterSecurityType.KERBEROS;\n    }\n    if ( knoxButton.getSelection() ) {\n      securityType = NamedClusterSecurityType.KNOX;\n    }\n    String shimIdentifier = model.getShimIdentifier();\n    if( shimIdentifier == null || shimIdentifier.isEmpty() ) {\n      NamedClusterDialog namedClusterDialog = (NamedClusterDialog) getWizard();\n      shimIdentifier = namedClusterDialog.getShimIdentifier();\n    }\n    knoxButton.setVisible(\n      shimIdentifier.equals( \"cdpdc71\" ) || shimIdentifier.equals( \"hdp31\" ) );\n  }\n\n\n  public NamedClusterSecurityType getSecurityType() {\n    return securityType;\n  }\n\n  public IWizardPage getNextPage() {\n    IWizardPage nextPage = null;\n    if ( getSecurityType().equals( NamedClusterSecurityType.KERBEROS ) ) {\n      nextPage = getWizard().getPage( KerberosSettingsPage.class.getSimpleName() );\n    }\n    if ( getSecurityType().equals( NamedClusterSecurityType.KNOX ) ) {\n      nextPage = getWizard().getPage( KnoxSettingsPage.class.getSimpleName() );\n    }\n    return nextPage;\n  }\n\n  public void performHelp() {\n    HelpUtils.openHelpDialog( parent.getShell(), \"\", BaseMessages.getString( PKG, \"NamedClusterDialog.help\" ), \"\" );\n  }\n}"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/wizard/pages/TestResultsPage.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.pages;\n\nimport org.eclipse.jface.wizard.WizardPage;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.CLabel;\nimport org.eclipse.swt.graphics.Font;\nimport org.eclipse.swt.graphics.FontData;\nimport org.eclipse.swt.layout.GridData;\nimport org.eclipse.swt.layout.GridLayout;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.ExpandBar;\nimport org.eclipse.swt.widgets.ExpandItem;\nimport org.eclipse.swt.widgets.Label;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints.Test;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints.TestCategory;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.model.ThinNameClusterModel;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.util.HelpUtils;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.ONE_COLUMN;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.createLabelWithStyle;\n\npublic class TestResultsPage extends WizardPage {\n\n  private PropsUI props;\n  private Composite basePanel;\n  private Composite parent;\n  private Composite mainPanel;\n  private ExpandBar testResultsExpandBar;\n  private 
ThinNameClusterModel thinNameClusterModel;\n  private static final Class<?> PKG = TestResultsPage.class;\n  private static final String WARNING = BaseMessages.getString( PKG, \"NamedClusterDialog.test.warning\" );\n  private static final String FAIL = BaseMessages.getString( PKG, \"NamedClusterDialog.test.fail\" );\n  private static final String PASS = BaseMessages.getString( PKG, \"NamedClusterDialog.test.pass\" );\n  private static final String WARNING_IMG = \"images/warning_category.svg\";\n  private static final String FAIL_IMG = \"images/fail_category.svg\";\n  private static final String PASS_IMG = \"images/success_category.svg\";\n\n  public TestResultsPage( VariableSpace variables, ThinNameClusterModel model ) {\n    super( TestResultsPage.class.getSimpleName() );\n    thinNameClusterModel = model;\n  }\n\n  public void createControl( Composite composite ) {\n    parent = new Composite( composite, SWT.NONE );\n    props = PropsUI.getInstance();\n    props.setLook( parent );\n    GridLayout gridLayout = new GridLayout( ONE_COLUMN, false );\n    parent.setLayout( gridLayout );\n    basePanel = new Composite( parent, SWT.NONE );\n\n    //START OF MAIN LAYOUT\n    GridLayout baseGridLayout = new GridLayout( ONE_COLUMN, false );\n    baseGridLayout.marginWidth = 60; //TO CENTER CONTENTS\n    baseGridLayout.marginTop = 10; //TO CENTER CONTENTS\n    baseGridLayout.marginBottom = 30;\n    baseGridLayout.marginLeft = 20;\n    basePanel.setLayout( baseGridLayout );\n    GridData basePanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    basePanel.setLayoutData( basePanelGridData );\n    props.setLook( basePanel );\n    //END OF MAIN LAYOUT\n\n    mainPanel = new Composite( basePanel, SWT.NONE );\n    mainPanel.setLayout( new GridLayout( ONE_COLUMN, false ) );\n    GridData mainPanelGridData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    mainPanelGridData.heightHint = 510; //Height of the panel (WILL NEED TO ADJUST)\n    
mainPanel.setLayoutData( mainPanelGridData );\n    props.setLook( mainPanel );\n\n    GridData statusGridData = new GridData();\n    statusGridData.widthHint = 400; // Label width\n    statusGridData.heightHint = 50; // Label height\n    Label statusLabel =\n      createLabelWithStyle( mainPanel, BaseMessages.getString( PKG, \"NamedClusterDialog.testResults\" ), statusGridData,\n        props, SWT.NONE );\n    statusLabel.setFont( new Font( statusLabel.getDisplay(), new FontData( \"Arial\", 20, SWT.NONE ) ) );\n    statusLabel.setAlignment( SWT.CENTER );\n    setControl( parent );\n    initialize( thinNameClusterModel );\n  }\n\n  private List<TestCategory> setTestResultsOrder( Object[] categories ) {\n    List<TestCategory> testCategories = new ArrayList<>();\n    String[] categoryNames = new String[ 5 ];\n    if ( Const.isWindows() || Const.isOSX() ) {\n      categoryNames[ 0 ] = \"Kafka\";\n      categoryNames[ 1 ] = \"Oozie\";\n      categoryNames[ 2 ] = \"Job\";\n      categoryNames[ 3 ] = \"Zookeeper\";\n      categoryNames[ 4 ] = \"Hadoop\";\n    } else {\n      categoryNames[ 0 ] = \"Hadoop\";\n      categoryNames[ 1 ] = \"Zookeeper\";\n      categoryNames[ 2 ] = \"Job\";\n      categoryNames[ 3 ] = \"Oozie\";\n      categoryNames[ 4 ] = \"Kafka\";\n    }\n    for ( String categoryName : categoryNames ) {\n      TestCategory category = getTestCategory( categoryName, categories );\n      if ( category != null ) {\n        testCategories.add( category );\n      }\n    }\n    return testCategories;\n  }\n\n  private TestCategory getTestCategory( String categoryName, Object[] categories ) {\n    TestCategory testCategory = null;\n    for ( Object category : categories ) {\n      if ( ( (TestCategory) category ).getCategoryName().startsWith( categoryName ) ) {\n        testCategory = (TestCategory) category;\n      }\n    }\n    return testCategory;\n  }\n\n  public void setTestResults( Object[] categories ) {\n    if ( testResultsExpandBar != null ) {\n      
testResultsExpandBar.dispose();\n      mainPanel.pack();\n    }\n    testResultsExpandBar = new ExpandBar( mainPanel, SWT.V_SCROLL );\n    GridData testResultsExpandBarLayoutData = new GridData( SWT.FILL, SWT.FILL, false, false );\n    testResultsExpandBarLayoutData.heightHint = 400; //Height of the panel (WILL NEED TO ADJUST)\n    testResultsExpandBarLayoutData.widthHint = 400; //Height of the panel (WILL NEED TO ADJUST)\n    testResultsExpandBar.setLayoutData( testResultsExpandBarLayoutData );\n    testResultsExpandBar.setSpacing( 8 );\n    props.setLook( testResultsExpandBar );\n    mainPanel.pack();\n    List<TestCategory> testCategories = setTestResultsOrder( categories );\n    for ( TestCategory testCategory : testCategories ) {\n      ExpandItem categoryItem = new ExpandItem( testResultsExpandBar, SWT.NONE, 0 );\n      categoryItem.setText( testCategory.getCategoryName() );\n      if ( testCategory.getCategoryStatus().equals( FAIL ) ) {\n        categoryItem.setImage(\n          GUIResource.getInstance().getImage( FAIL_IMG, getClass().getClassLoader(), 16, 16 ) );\n      } else if ( testCategory.getCategoryStatus().isEmpty() ) {\n        categoryItem.setImage(\n          GUIResource.getInstance().getImage( WARNING_IMG, getClass().getClassLoader(), 16, 16 ) );\n        categoryItem.setText( testCategory.getCategoryName() + \" (skipped)\" );\n      } else if ( testCategory.getCategoryStatus().equals( WARNING ) ) {\n        categoryItem.setImage(\n          GUIResource.getInstance().getImage( WARNING_IMG, getClass().getClassLoader(), 16, 16 ) );\n      } else if ( testCategory.getCategoryStatus().equals( PASS ) ) {\n        categoryItem.setImage(\n          GUIResource.getInstance().getImage( PASS_IMG, getClass().getClassLoader(), 16, 16 ) );\n      }\n      List<Test> tests = testCategory.getTests();\n      Composite testComposite = new Composite( testResultsExpandBar, SWT.NONE );\n      props.setLook( testComposite );\n      for ( Test test : tests ) {\n      
  GridLayout testLayout = new GridLayout();\n        testLayout.marginLeft = testLayout.marginTop = testLayout.marginRight = testLayout.marginBottom = 10;\n        testLayout.verticalSpacing = 10;\n        testComposite.setLayout( testLayout );\n        GridData testLayoutData = new GridData();\n        testLayoutData.widthHint = 400;\n        testLayoutData.heightHint = 400;\n        testComposite.setLayoutData( testLayoutData );\n        CLabel testLabel = new CLabel( testComposite, SWT.NONE );\n        if ( test.getTestStatus().equals( WARNING ) ) {\n          testLabel.setImage(\n            GUIResource.getInstance().getImage( WARNING_IMG, getClass().getClassLoader(), 16, 16 ) );\n        } else if ( test.getTestStatus().equals( FAIL ) ) {\n          testLabel.setImage(\n            GUIResource.getInstance().getImage( FAIL_IMG, getClass().getClassLoader(), 16, 16 ) );\n        } else if ( test.getTestStatus().equals( PASS ) ) {\n          testLabel.setImage(\n            GUIResource.getInstance().getImage( PASS_IMG, getClass().getClassLoader(), 16, 16 ) );\n        }\n        testLabel.setText( test.getTestName() );\n        props.setLook( testLabel );\n      }\n      categoryItem.setHeight( testComposite.computeSize( SWT.DEFAULT, SWT.DEFAULT ).y );\n      categoryItem.setControl( testComposite );\n      mainPanel.pack();\n    }\n    mainPanel.pack();\n  }\n\n  public void initialize( ThinNameClusterModel model ) {\n    thinNameClusterModel = model;\n  }\n\n  public void performHelp() {\n    HelpUtils.openHelpDialog( parent.getShell(), \"\",\n      \"https://docs.pentaho.com/pdia-11.0-install/use-hadoop-with-pentaho/big-data-issues\", \"\" );\n  }\n}"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/wizard/util/BadSiteFilesException.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util;\n\n/**\n * Thrown when the site configuration files provided for a named cluster cannot be\n * resolved or read (see {@link NamedClusterHelper#processSiteFiles}).\n */\npublic class BadSiteFilesException extends Exception {\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/wizard/util/CustomWizardDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util;\n\nimport org.eclipse.jface.dialogs.IDialogConstants;\nimport org.eclipse.jface.wizard.IWizard;\nimport org.eclipse.jface.wizard.WizardDialog;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.graphics.Point;\nimport org.eclipse.swt.graphics.Rectangle;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.gui.GUIResource;\n\npublic class CustomWizardDialog extends WizardDialog {\n\n  public CustomWizardDialog( Shell parentShell, IWizard newWizard ) {\n    super( parentShell, newWizard );\n    setDefaultImage( GUIResource.getInstance().getImageWizard() );\n    setHelpAvailable( true );\n    setShellStyle( SWT.CLOSE | SWT.TITLE | SWT.BORDER\n      | SWT.APPLICATION_MODAL | getDefaultOrientation() );\n    create();\n    Rectangle shellBounds = getParentShell().getBounds();\n    Point dialogSize = getShell().getSize();\n    getShell().setLocation( shellBounds.x + ( shellBounds.width - dialogSize.x ) / 2,\n      shellBounds.y + ( shellBounds.height - dialogSize.y ) / 2 );\n  }\n\n  public void style() {\n    PropsUI propsUI = PropsUI.getInstance();\n    propsUI.setLook( getButtonBar() );\n    propsUI.setLook( getDialogArea() );\n  }\n\n  public void enableCancelButton( boolean isEnabled ) {\n    Button cancelButton = getButton( IDialogConstants.CANCEL_ID );\n    cancelButton.setEnabled( isEnabled );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/wizard/util/NamedClusterHelper.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util;\n\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.layout.GridData;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints.CachedFileItemStream;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints.HadoopClusterManager;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.model.ThinNameClusterModel;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.metastore.api.security.Base64TwoWayPasswordEncoder;\nimport org.pentaho.metastore.api.security.ITwoWayPasswordEncoder;\n\nimport java.io.File;\nimport java.io.FileInputStream;\nimport java.io.FileNotFoundException;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.util.AbstractMap;\nimport java.util.ArrayList;\nimport java.util.Date;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.function.Supplier;\nimport java.util.zip.ZipEntry;\nimport java.util.zip.ZipInputStream;\n\npublic abstract class NamedClusterHelper {\n\n  public static final int ONE_COLUMN = 1;\n  public static final int TWO_COLUMNS = 2;\n  public static 
final String USERNAME = \"USERNAME\";\n  public static final String PASSWORD = \"PASSWORD\";\n  private static final Supplier<Spoon> spoonSupplier = Spoon::getInstance;\n  private static final ITwoWayPasswordEncoder passwordEncoder = new Base64TwoWayPasswordEncoder();\n\n  /**\n   * Data structure to hold driver configuration information\n   */\n  public static class DriverInfo {\n    private final String id;\n    private final String vendor;\n    private final String version;\n\n    public DriverInfo( String id, String vendor, String version ) {\n      this.id = id;\n      this.vendor = vendor;\n      this.version = version;\n    }\n\n    public String getId() {\n      return id;\n    }\n\n    public String getVendor() {\n      return vendor;\n    }\n\n    public String getVersion() {\n      return version;\n    }\n  }\n\n  private static final Map<String, DriverInfo> DRIVER_INFO_MAP = new HashMap<>();\n\n  static {\n    DRIVER_INFO_MAP.put( \"apachevanilla\", new DriverInfo( \"apachevanilla\", \"ApacheVanilla\", \"3.4.0\" ) );\n    DRIVER_INFO_MAP.put( \"cdpdc71\", new DriverInfo( \"cdpdc71\", \"Cloudera\", \"7.1\" ) );\n    DRIVER_INFO_MAP.put( \"dataproc1421\", new DriverInfo( \"dataproc1421\", \"Google Dataproc\", \"1.4\" ) );\n    DRIVER_INFO_MAP.put( \"dataproc23\", new DriverInfo( \"dataproc23\", \"Google Dataproc\", \"2.3\" ) );\n    DRIVER_INFO_MAP.put( \"emr770\", new DriverInfo( \"emr770\", \"EMR\", \"7.7\" ) );\n    DRIVER_INFO_MAP.put( \"hdi40\", new DriverInfo( \"hdi40\", \"HDInsight\", \"4.0\" ) );\n    DRIVER_INFO_MAP.put( \"apache\", new DriverInfo( \"apache\", \"Apache\", \"3.4\" ) );\n  }\n\n  public enum FileType {\n    CONFIGURATION( \"configuration\" ),\n    DRIVER( \".kar\" );\n\n    private final String val;\n\n    FileType( String val ) {\n      this.val = val;\n    }\n\n    String getValue() {\n      return this.val;\n    }\n  }\n\n  public static Label createLabel( Composite parent, String text, GridData gd, PropsUI props ) {\n    Label 
label = new Label( parent, SWT.NONE );\n    label.setText( text );\n    label.setLayoutData( gd );\n    props.setLook( label );\n    return label;\n  }\n\n  public static Label createLabelWithStyle( Composite parent, String text, GridData gd, PropsUI props, int style ) {\n    Label label = new Label( parent, style );\n    label.setText( text );\n    label.setLayoutData( gd );\n    props.setLook( label );\n    return label;\n  }\n\n  public static TextVar createText( Composite parent, String text, GridData gd, PropsUI props,\n                                    VariableSpace variableSpace ) {\n    return createText( parent, text, gd, props, variableSpace, null );\n  }\n\n  public static TextVar createText( Composite parent, String text, GridData gd, PropsUI props,\n                                    VariableSpace variableSpace, Listener listener ) {\n    TextVar textVar =\n      new TextVar( variableSpace, parent, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    textVar.setText( text );\n    textVar.setLayoutData( gd );\n    if ( listener != null ) {\n      textVar.getTextWidget().addListener( SWT.CHANGED, listener );\n      textVar.getTextWidget().addListener( SWT.MouseExit, listener );\n    }\n    props.setLook( textVar );\n    return textVar;\n  }\n\n  public static Map<String, CachedFileItemStream> processSiteFiles( ThinNameClusterModel model,\n                                                                    HadoopClusterManager manager )\n    throws BadSiteFilesException, IOException {\n    Map<String, CachedFileItemStream> siteFiles = new HashMap<>();\n    List<AbstractMap.SimpleImmutableEntry<String, String>> files = model.getSiteFiles();\n    for ( AbstractMap.SimpleImmutableEntry<String, String> file : files ) {\n      File siteFile = null;\n      String fileName = null;\n      if ( file.getValue().equals( \"keytabAuthFile\" ) || file.getValue().equals( \"keytabImpFile\" ) ) {\n        siteFile = new File( file.getKey() );\n        fileName = 
file.getValue();\n      } else {\n        siteFile = new File( file.getKey() + file.getValue() );\n        fileName = siteFile.getName();\n      }\n      InputStream fileInputStream = null;\n      try {\n        fileInputStream = new FileInputStream( siteFile );\n      } catch ( FileNotFoundException e ) {\n        if ( file.getKey().isEmpty() ) {\n          if ( manager.getNamedClusterByName( model.getName() ) != null ) {\n            fileInputStream = manager.getSiteFileInputStream( model.getName(), file.getValue() );\n          } else {\n            fileInputStream = manager.getSiteFileInputStream( model.getOldName(), file.getValue() );\n          }\n        } else {\n          if( !( file.getKey().contains( model.getName() ) || file.getKey().contains( model.getOldName() ) ) ) {\n            throw new BadSiteFilesException();\n          } else {\n            continue;\n          }\n        }\n      }\n      List<CachedFileItemStream> fileItemStreams =\n        copyAndUnzip( fileInputStream, FileType.CONFIGURATION, siteFile.getName(), fileName, manager );\n      for ( CachedFileItemStream cachedFileItemStream : fileItemStreams ) {\n        siteFiles.put( cachedFileItemStream.getFieldName(), cachedFileItemStream );\n      }\n    }\n    return siteFiles;\n  }\n\n  public static List<CachedFileItemStream> copyAndUnzip( InputStream fileInputStream, FileType fileType,\n                                                         String fileName, String realFileName,\n                                                         HadoopClusterManager manager )\n    throws IOException {\n    List<CachedFileItemStream> unzippedFileItemStreams = new ArrayList<>();\n    if ( realFileName.endsWith( \".zip\" ) ) {\n      try ( ZipInputStream zis = new ZipInputStream( fileInputStream ) ) {\n        for ( ZipEntry zipEntry = zis.getNextEntry(); zipEntry != null; zipEntry = zis.getNextEntry() ) {\n          if ( !zipEntry.isDirectory() ) {\n            // Remove all directory structure 
from the zip file names and only unzip the files\n            String[] split = zipEntry.getName().split( \"/\" ); //zip files always use forward slash\n            String unzippedFileName = split[ split.length - 1 ];\n            if ( isValidUpload( unzippedFileName, fileType, manager ) ) {\n              CachedFileItemStream unzippedFileItemStream =\n                new CachedFileItemStream( zis, unzippedFileName, unzippedFileName );\n              unzippedFileItemStream.setLastModified( zipEntry.getLastModifiedTime().toMillis() );\n              unzippedFileItemStreams.add( unzippedFileItemStream );\n            }\n          }\n        }\n      }\n    } else {\n      // File is not zipped\n      if ( isValidUpload( realFileName, fileType, manager ) ) {\n        unzippedFileItemStreams.add( new CachedFileItemStream( fileInputStream, fileName,\n          realFileName ) );\n      }\n    }\n    return unzippedFileItemStreams;\n  }\n\n  public static boolean isValidUpload( String fileName, FileType fileType, HadoopClusterManager manager ) {\n    boolean valid = ( fileType.equals( FileType.CONFIGURATION ) && manager.isValidConfigurationFile( fileName ) )\n      ||\n      ( fileType.equals( FileType.DRIVER ) && fileName.endsWith( FileType.DRIVER.getValue() ) );\n    return valid;\n  }\n\n  public static boolean isConnectedToRepo() {\n    Spoon supplier = spoonSupplier.get();\n    if ( supplier != null ) {\n      Repository repo = supplier.getRepository();\n      return repo != null && repo.getUri().isPresent();\n    } else {\n      return false;\n    }\n  }\n\n  public static String getEndpointURL( String endpoint ) {\n    double cacheBust = Math.round( new Date().getTime() / 1000 ) + Math.random();\n    return spoonSupplier.get().getRepository().getUri()\n      .orElseThrow( () -> new IllegalStateException( \"Repo URI not defined\" ) )\n      .toString() + \"/plugin/pentaho-hadoop-cluster-plugin/api/\" + endpoint + \"?v=\" + cacheBust;\n  }\n\n  public static 
Map<String, String> getSecurityCredentials() {\n    Repository repo = spoonSupplier.get().getRepository();\n    String userName = repo.getUserInfo().getLogin();\n    String password = repo.getUserInfo().getPassword();\n    Map<String, String> credentials = new HashMap<>();\n    credentials.put( USERNAME, userName );\n    credentials.put( PASSWORD, password );\n    return credentials;\n  }\n\n  public static String encodePassword( String password ) {\n    if ( password != null && !password.startsWith( Encr.PASSWORD_ENCRYPTED_PREFIX ) ) {\n      password = Encr.encryptPasswordIfNotUsingVariables( password );\n    }\n    return password;\n  }\n\n  public static String decodePassword( String password ) {\n    if ( password == null || password.startsWith( Encr.PASSWORD_ENCRYPTED_PREFIX ) ) {\n      return Encr.decryptPasswordOptionallyEncrypted( password );\n    } else {\n      //Password is likely stored encrypted with legacy Base64TwoWayPasswordEncoder\n      if ( !StringUtil.isVariable( password ) ) {\n        return passwordEncoder.decode( password );\n      }\n    }\n    return password;\n  }\n\n  /**\n   * Get the vendor name for a given driver ID\n   * @param driverId The driver identifier\n   * @return The vendor name, or null if not found\n   */\n  public static String getVendorForDriver( String driverId ) {\n    DriverInfo driverInfo = DRIVER_INFO_MAP.get( driverId );\n    return driverInfo != null ? driverInfo.getVendor() : null;\n  }\n\n  /**\n   * Get the version for a given driver ID\n   * @param driverId The driver identifier\n   * @return The version, or null if not found\n   */\n  public static String getVersionForDriver( String driverId ) {\n    DriverInfo driverInfo = DRIVER_INFO_MAP.get( driverId );\n    return driverInfo != null ? 
driverInfo.getVersion() : null;\n  }\n\n  /**\n   * Get the complete DriverInfo for a given driver ID\n   * @param driverId The driver identifier\n   * @return The DriverInfo object, or null if not found\n   */\n  public static DriverInfo getDriverInfo( String driverId ) {\n    return DRIVER_INFO_MAP.get( driverId );\n  }\n}\n\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/endpoints/CachedFileItemStream.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints;\n\nimport org.apache.commons.fileupload2.core.FileItemInput;\nimport org.apache.commons.io.IOUtils;\n\nimport java.io.ByteArrayInputStream;\nimport java.io.ByteArrayOutputStream;\nimport java.io.IOException;\nimport java.io.InputStream;\n\npublic class CachedFileItemStream {\n\n  private ByteArrayOutputStream outputStream = new ByteArrayOutputStream();\n  private String name;\n  private String fieldName;\n  private long lastModified; //optional file last modified date\n\n  public CachedFileItemStream( FileItemInput fileItemStream ) throws IOException {\n    this( fileItemStream.getInputStream(), fileItemStream.getName(), fileItemStream.getFieldName() );\n  }\n\n  public CachedFileItemStream( InputStream inputStream, String name, String fieldName ) throws IOException {\n    IOUtils.copy( inputStream, this.outputStream );\n    this.name = name;\n    this.fieldName = fieldName;\n  }\n\n  public ByteArrayOutputStream getCachedOutputStream() {\n    return this.outputStream;\n  }\n\n  public ByteArrayInputStream getCachedInputStream() {\n    return new ByteArrayInputStream( this.outputStream.toByteArray() );\n  }\n\n  public String getName() {\n    return this.name;\n  }\n\n  public String getFieldName() {\n    return this.fieldName;\n  }\n\n  public long getLastModified() {\n    return lastModified;\n  }\n\n  public void setLastModified( long lastModified ) {\n    this.lastModified = lastModified;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/endpoints/Category.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints;\n\nimport java.util.List;\n\npublic interface Category {\n\n  List<Test> getTests();\n\n  String getCategoryName();\n\n  void setCategoryName( String categoryName );\n\n  void setTests( List<Test> tests );\n\n  String getCategoryStatus();\n\n  void setCategoryStatus( String categoryStatus );\n\n  boolean isCategoryActive();\n\n  void setCategoryActive( boolean categoryActive );\n\n  void addTest( Test test );\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/endpoints/HadoopClusterManager.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints;\n\nimport com.fasterxml.jackson.databind.ObjectMapper;\nimport com.google.common.annotations.VisibleForTesting;\nimport org.apache.commons.configuration.ConfigurationException;\nimport org.apache.commons.configuration.PropertiesConfiguration;\nimport org.apache.commons.fileupload2.core.FileItemInput;\nimport org.apache.commons.io.FileUtils;\nimport org.apache.http.HttpEntity;\nimport org.apache.http.HttpHost;\nimport org.apache.http.auth.AuthScope;\nimport org.apache.http.auth.UsernamePasswordCredentials;\nimport org.apache.http.client.AuthCache;\nimport org.apache.http.client.methods.CloseableHttpResponse;\nimport org.apache.http.client.methods.HttpGet;\nimport org.apache.http.client.methods.HttpPost;\nimport org.apache.http.client.protocol.HttpClientContext;\nimport org.apache.http.entity.ContentType;\nimport org.apache.http.entity.mime.MultipartEntityBuilder;\nimport org.apache.http.impl.auth.BasicScheme;\nimport org.apache.http.impl.client.BasicAuthCache;\nimport org.apache.http.impl.client.BasicCredentialsProvider;\nimport org.apache.http.impl.client.CloseableHttpClient;\nimport org.apache.http.impl.client.HttpClients;\nimport org.apache.http.util.EntityUtils;\nimport org.json.simple.JSONObject;\nimport org.json.simple.parser.JSONParser;\nimport org.json.simple.parser.ParseException;\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.HadoopClusterDialog;\nimport 
org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.BadSiteFilesException;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.model.ThinNameClusterModel;\nimport org.pentaho.big.data.plugins.common.ui.HadoopClusterDelegateImpl;\nimport org.pentaho.di.base.AbstractMeta;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.osgi.api.NamedClusterSiteFile;\nimport org.pentaho.di.core.osgi.impl.NamedClusterSiteFileImpl;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.runtime.test.RuntimeTest;\nimport org.pentaho.runtime.test.RuntimeTestProgressCallback;\nimport org.pentaho.runtime.test.RuntimeTestStatus;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.module.RuntimeTestModuleResults;\nimport org.pentaho.runtime.test.result.RuntimeTestResult;\nimport org.pentaho.runtime.test.result.RuntimeTestResultEntry;\nimport org.w3c.dom.Document;\nimport org.w3c.dom.NodeList;\n\nimport javax.xml.xpath.XPath;\nimport javax.xml.xpath.XPathConstants;\nimport javax.xml.xpath.XPathExpression;\nimport javax.xml.xpath.XPathExpressionException;\nimport javax.xml.xpath.XPathFactory;\nimport java.io.BufferedReader;\nimport java.io.ByteArrayOutputStream;\nimport java.io.File;\nimport 
java.io.FileInputStream;\nimport java.io.FileOutputStream;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.InputStreamReader;\nimport java.io.OutputStream;\nimport java.net.URI;\nimport java.nio.file.Files;\nimport java.nio.file.Path;\nimport java.nio.file.Paths;\nimport java.nio.file.StandardCopyOption;\nimport java.util.AbstractMap.SimpleImmutableEntry;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collection;\nimport java.util.HashMap;\nimport java.util.LinkedHashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Optional;\nimport java.util.stream.Collectors;\nimport java.util.stream.Stream;\n\nimport static org.pentaho.big.data.impl.cluster.tests.Constants.HADOOP_FILE_SYSTEM;\nimport static org.pentaho.big.data.impl.cluster.tests.Constants.OOZIE;\nimport static org.pentaho.big.data.impl.cluster.tests.Constants.KAFKA;\nimport static org.pentaho.big.data.impl.cluster.tests.Constants.ZOOKEEPER;\nimport static org.pentaho.big.data.impl.cluster.tests.Constants.MAP_REDUCE;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.isConnectedToRepo;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.processSiteFiles;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.model.ThinNameClusterModel.NAME_KEY;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.wizard.util.NamedClusterHelper.encodePassword;\n\npublic class HadoopClusterManager implements RuntimeTestProgressCallback {\n\n  private static final Class<?> PKG = HadoopClusterDialog.class;\n  public static final String STRING_NAMED_CLUSTERS = BaseMessages.getString( PKG, \"HadoopClusterTree.Title\" );\n  public static final String PLACEHOLDER_VALUE = \"[object Object]\";\n  private final String fileSeparator = System.getProperty( \"file.separator\" );\n  private static final String PASS = \"Pass\";\n  
private static final String WARNING = \"Warning\";\n  private static final String FAIL = \"Fail\";\n  private static final String NAMED_CLUSTER = \"namedCluster\";\n  private static final String INSTALLED = \"installed\";\n  private static final String CONFIG_PROPERTIES = \"config.properties\";\n  private static final String KEYTAB_AUTH_FILE = \"keytabAuthFile\";\n  private static final String KEYTAB_IMPL_FILE = \"keytabImpFile\";\n  public static final String MAPR_SHIM = \"Map-R\";\n  public static final String MAPRFS_SCHEME = \"maprfs\";\n\n  private static final LogChannelInterface log =\n    KettleLogStore.getLogChannelInterfaceFactory().create( \"HadoopClusterManager\" );\n\n  private final String internalShim;\n\n  private enum KERBEROS_SUBTYPE {\n    PASSWORD( \"Password\" ),\n    KEYTAB( \"Keytab\" );\n\n    private String val;\n\n    KERBEROS_SUBTYPE( String val ) {\n      this.val = val;\n    }\n\n    public String getValue() {\n      return this.val;\n    }\n  }\n\n  private enum SECURITY_TYPE {\n    NONE( \"None\" ),\n    KERBEROS( \"Kerberos\" ),\n    KNOX( \"Knox\" );\n\n    private String val;\n\n    SECURITY_TYPE( String val ) {\n      this.val = val;\n    }\n\n    public String getValue() {\n      return this.val;\n    }\n  }\n\n  private enum IMPERSONATION_TYPE {\n    SIMPLE( \"simple\" ),\n    DISABLED( \"disabled\" );\n\n    private String val;\n\n    IMPERSONATION_TYPE( String val ) {\n      this.val = val;\n    }\n\n    public String getValue() {\n      return this.val;\n    }\n  }\n\n  private static final String KERBEROS_AUTHENTICATION_USERNAME = \"pentaho.authentication.default.kerberos.principal\";\n  private static final String KERBEROS_AUTHENTICATION_PASS = \"pentaho.authentication.default.kerberos.password\";\n  private static final String KERBEROS_IMPERSONATION_USERNAME =\n    \"pentaho.authentication.default.mapping.server.credentials.kerberos.principal\";\n  private static final String KERBEROS_IMPERSONATION_PASS =\n    
\"pentaho.authentication.default.mapping.server.credentials.kerberos.password\";\n  private static final String IMPERSONATION = \"pentaho.authentication.default.mapping.impersonation.type\";\n  private static final String KEYTAB_AUTHENTICATION_LOCATION = \"pentaho.authentication.default.kerberos.keytabLocation\";\n  private static final String KEYTAB_IMPERSONATION_LOCATION =\n    \"pentaho.authentication.default.mapping.server.credentials.kerberos.keytabLocation\";\n\n  private final Spoon spoon;\n  private final NamedClusterService namedClusterService;\n  private final IMetaStore metaStore;\n  private final VariableSpace variableSpace;\n  private RuntimeTestStatus runtimeTestStatus = null;\n\n  public HadoopClusterManager( Spoon spoon, NamedClusterService namedClusterService, IMetaStore metaStore,\n                               String internalShim ) {\n    this.spoon = spoon;\n    this.namedClusterService = namedClusterService;\n    this.metaStore = metaStore != null ? metaStore : spoon.getMetaStore();\n    this.variableSpace = spoon == null ? 
new Variables() : (AbstractMeta) spoon.getActiveMeta();\n    this.internalShim = internalShim;\n  }\n\n  public HadoopClusterManager( NamedClusterService namedClusterService, IMetaStore metaStore,\n                               String internalShim ) {\n    this( null, namedClusterService, metaStore, internalShim);\n  }\n\n  public JSONObject importNamedCluster( ThinNameClusterModel model,\n                                        Map<String, CachedFileItemStream> siteFilesSource ) {\n    JSONObject response = new JSONObject();\n    response.put( NAMED_CLUSTER, \"\" );\n    try {\n      // Create and initialize template.\n      NamedCluster nc = namedClusterService.getClusterTemplate();\n      nc.setHdfsHost( \"\" );\n      nc.setHdfsPort( \"\" );\n      nc.setJobTrackerHost( \"\" );\n      nc.setJobTrackerPort( \"\" );\n      nc.setZooKeeperHost( \"\" );\n      nc.setZooKeeperPort( \"\" );\n      nc.setOozieUrl( \"\" );\n      nc.setName( model.getName() );\n      nc.setHdfsUsername( model.getHdfsUsername() );\n      nc.setHdfsPassword( encodePassword( model.getHdfsPassword() ) );\n      if ( variableSpace != null ) {\n        nc.shareVariablesWith( variableSpace );\n      } else {\n        nc.initializeVariablesFrom( null );\n      }\n\n      boolean isConfigurationSet =\n        configureNamedCluster( siteFilesSource, nc);\n      if ( isConfigurationSet ) {\n        deleteNamedClusterSchemaOnly( model );\n        setupKnoxSecurity( nc, model );\n        deleteConfigFolder( nc.getName() );\n        installSiteFiles( siteFilesSource, nc );\n        namedClusterService.create( nc, metaStore );\n        createConfigProperties( nc );\n        setupKerberosSecurity( model, siteFilesSource, \"\", \"\" );\n        response.put( NAMED_CLUSTER, nc.getName() );\n      }\n    } catch ( Exception e ) {\n      log.logError( e.getMessage() );\n    }\n    return response;\n  }\n\n  private void deleteNamedClusterSchemaOnly( ThinNameClusterModel model ) throws MetaStoreException 
{\n    List<String> existingNcNames = namedClusterService.listNames( metaStore );\n    for ( String existingNcName : existingNcNames ) {\n      if ( existingNcName.equalsIgnoreCase( model.getName() ) ) {\n        namedClusterService.delete( existingNcName, metaStore );\n      }\n    }\n  }\n\n  private NamedCluster convertToNamedCluster( ThinNameClusterModel model ) {\n\n    NamedCluster nc = namedClusterService.getClusterTemplate();\n    nc.setName( model.getName() );\n    nc.setHdfsHost( model.getHdfsHost() );\n    nc.setHdfsPort( model.getHdfsPort() );\n    nc.setHdfsUsername( model.getHdfsUsername() );\n    nc.setHdfsPassword( encodePassword( model.getHdfsPassword() ) );\n    nc.setJobTrackerHost( model.getJobTrackerHost() );\n    nc.setJobTrackerPort( model.getJobTrackerPort() );\n    nc.setZooKeeperHost( model.getZooKeeperHost() );\n    nc.setZooKeeperPort( model.getZooKeeperPort() );\n    nc.setOozieUrl( model.getOozieUrl() );\n    nc.setKafkaBootstrapServers( model.getKafkaBootstrapServers() );\n    resolveShimIdentifier( nc );\n    setupKnoxSecurity( nc, model );\n    if ( variableSpace != null ) {\n      nc.shareVariablesWith( variableSpace );\n    } else {\n      nc.initializeVariablesFrom( null );\n    }\n    return nc;\n  }\n\n  public boolean deleteConfigFolder( String configFolderName ) throws IOException {\n    File configFolder = new File( getNamedClusterConfigsRootDir() );\n    File[] files = configFolder.listFiles();\n    if ( files != null ) {\n      for ( File file : files ) {\n        if ( file.isDirectory() && file.getName().equalsIgnoreCase( configFolderName ) ) {\n          FileUtils.deleteDirectory( file );\n          break;\n        }\n      }\n    }\n    return true;\n  }\n\n  public JSONObject createNamedCluster( ThinNameClusterModel model,\n                                        Map<String, CachedFileItemStream> siteFilesSource ) {\n    return createNamedCluster( model, siteFilesSource, \"\", \"\" );\n  }\n\n  @VisibleForTesting\n  
public JSONObject createNamedCluster( ThinNameClusterModel model,\n                                        Map<String, CachedFileItemStream> siteFilesSource,\n                                        String keytabAuthenticationLocation, String keytabImpersonationLocation ) {\n    JSONObject response = new JSONObject();\n    response.put( NAMED_CLUSTER, \"\" );\n    try {\n      NamedCluster nc = convertToNamedCluster( model );\n      deleteConfigFolder( nc.getName() );\n      installSiteFiles( siteFilesSource, nc );\n      namedClusterService.create( nc, metaStore );\n      createConfigProperties( nc );\n      setupKerberosSecurity( model, siteFilesSource, keytabAuthenticationLocation, keytabImpersonationLocation );\n      response.put( NAMED_CLUSTER, nc.getName() );\n    } catch ( Exception e ) {\n      log.logError( e.getMessage() );\n    }\n    return response;\n  }\n\n  public JSONObject editNamedCluster( ThinNameClusterModel model, boolean isEditMode,\n                                      Map<String, CachedFileItemStream> siteFilesSource ) {\n    JSONObject response = new JSONObject();\n    response.put( NAMED_CLUSTER, \"\" );\n    try {\n      final NamedCluster newNc = namedClusterService.getNamedClusterByName( model.getName(), metaStore );\n      final NamedCluster oldNc = namedClusterService.getNamedClusterByName( model.getOldName(), metaStore );\n      // Must get the current shim identifier before the creation of the Named Cluster xml schema for later comparison.\n\n      String shimId = null;\n      List<NamedClusterSiteFile> existingSiteFiles = new ArrayList<>();\n      if ( oldNc != null ) {\n        shimId = oldNc.getShimIdentifier();\n        existingSiteFiles = oldNc.getSiteFiles();\n      }\n      NamedCluster nc = convertToNamedCluster( model );\n      nc.setSiteFiles( getIntersectionSiteFiles( model, existingSiteFiles ) );\n      installSiteFiles( siteFilesSource, nc );\n      if ( newNc != null ) {\n        namedClusterService.update( nc, 
metaStore ); //new cluster name exists\n      } else {\n        namedClusterService.create( nc, metaStore ); //new cluster does not exist.  Use creation logic\n      }\n\n      File oldConfigFolder = new File( getNamedClusterConfigsRootDir() + fileSeparator + model.getOldName() );\n      File newConfigFolder = new File( getNamedClusterConfigsRootDir() + fileSeparator + nc.getName() );\n\n      // Copy all files from the old config folder to the new config folder.\n      if ( !oldConfigFolder.getName().equalsIgnoreCase( newConfigFolder.getName() ) ) {\n        FileUtils.copyDirectory( oldConfigFolder, newConfigFolder );\n      } else {\n        boolean success = oldConfigFolder.renameTo( newConfigFolder );\n        if ( !success ) {\n          log.logError( \"Renaming Named Cluster configuration folder failed.\" );\n        }\n      }\n\n\n      // If the user changed the shim, create a new config.properties file that corresponds to that shim\n      // in the new config folder. Also save the keytab locations to set them again in the new config.properties\n      // unless the kerberos subtype is Password.\n      String keytabAuthenticationLocation = \"\";\n      String keytabImpersonationLocation = \"\";\n      String kerberosSubType = model.getKerberosSubType();\n      if ( !kerberosSubType.equals( KERBEROS_SUBTYPE.PASSWORD.getValue() ) ) {\n        String configFile =\n          getNamedClusterConfigsRootDir() + fileSeparator + nc.getName() + fileSeparator + CONFIG_PROPERTIES;\n        PropertiesConfiguration config = new PropertiesConfiguration( new File( configFile ) );\n        keytabAuthenticationLocation = (String) config.getProperty( KEYTAB_AUTHENTICATION_LOCATION );\n        keytabImpersonationLocation = (String) config.getProperty( KEYTAB_IMPERSONATION_LOCATION );\n      }\n      if ( nc.getShimIdentifier() != null && !nc.getShimIdentifier().equals( shimId ) ) {\n        createConfigProperties( nc );\n      }\n      setupKerberosSecurity( model, 
siteFilesSource, keytabAuthenticationLocation, keytabImpersonationLocation );\n\n      // Delete old config folder.\n      if ( isEditMode && !oldConfigFolder.getName().equalsIgnoreCase( newConfigFolder.getName() ) ) {\n        deleteNamedCluster( metaStore, model.getOldName(), false );\n      }\n\n      response.put( NAMED_CLUSTER, nc.getName() );\n    } catch ( Exception e ) {\n      log.logError( e.getMessage() );\n    }\n    return response;\n  }\n\n  public InputStream getSiteFileInputStream( String namedCluster, String siteFile ) {\n    NamedCluster nc = namedClusterService.getNamedClusterByName( namedCluster, this.metaStore );\n    return nc.getSiteFileInputStream( siteFile );\n  }\n\n  public ThinNameClusterModel getNamedCluster( String namedCluster ) {\n    ThinNameClusterModel model = null;\n    try {\n      List<NamedCluster> namedClusters = namedClusterService.list( metaStore );\n      for ( NamedCluster nc : namedClusters ) {\n        if ( nc.getName().equalsIgnoreCase( namedCluster ) ) {\n          model = new ThinNameClusterModel();\n          model.setName( nc.getName() );\n          model.setShimIdentifier( nc.getShimIdentifier());\n          model.setHdfsHost( nc.getHdfsHost() );\n          model.setHdfsUsername( nc.getHdfsUsername() );\n          model.setHdfsPassword( nc.getHdfsPassword() );\n          model.setHdfsPort( nc.getHdfsPort() );\n          model.setJobTrackerHost( nc.getJobTrackerHost() );\n          model.setJobTrackerPort( nc.getJobTrackerPort() );\n          model.setKafkaBootstrapServers( nc.getKafkaBootstrapServers() );\n          model.setOozieUrl( nc.getOozieUrl() );\n          model.setZooKeeperPort( nc.getZooKeeperPort() );\n          model.setZooKeeperHost( nc.getZooKeeperHost() );\n          model.setGatewayPassword( nc.getGatewayPassword() );\n          model.setGatewayUrl( nc.getGatewayUrl() );\n          model.setGatewayUsername( nc.getGatewayUsername() );\n          model.setSecurityType( SECURITY_TYPE.NONE.getValue() 
);\n          if ( nc.isUseGateway() ) {\n            model.setSecurityType( SECURITY_TYPE.KNOX.getValue() );\n          } else {\n            resolveKerberosSecurity( model, nc );\n          }\n          model.setSiteFiles( nc.getSiteFiles().stream()\n            .map( sf -> new SimpleImmutableEntry<>( NAME_KEY, sf.getSiteFileName() ) )\n            .collect( Collectors.toList() ) );\n          break;\n        }\n      }\n    } catch ( MetaStoreException e ) {\n      log.logError( e.getMessage() );\n    }\n    return model;\n  }\n\n  private boolean configureNamedCluster( Map<String, CachedFileItemStream> siteFilesSource, NamedCluster nc ) {\n    resolveShimIdentifier( nc );\n\n    String oozieBaseUrl = \"oozie.base.url\";\n    Map<String, String> properties = new HashMap();\n    extractProperties( siteFilesSource, \"core-site.xml\", properties, new String[] { \"fs.defaultFS\" } );\n    extractProperties( siteFilesSource, \"yarn-site.xml\", properties,\n            new String[] { \"yarn.resourcemanager.address\", \"yarn.resourcemanager.hostname\" } );\n    extractProperties( siteFilesSource, \"hive-site.xml\", properties,\n            new String[] { \"hive.zookeeper.quorum\", \"hive.zookeeper.client.port\" } );\n    extractProperties( siteFilesSource, \"oozie-site.xml\", properties, new String[] { oozieBaseUrl } );\n    if ( properties.get( oozieBaseUrl ) == null ) {\n      extractProperties( siteFilesSource, \"oozie-default.xml\", properties, new String[] { oozieBaseUrl } );\n    }\n\n    boolean isConfigurationSet = false;\n    /*\n     * Address taken from\n     * fs.defaultFS\n     * in\n     * core-site.xml\n     * */\n    String hdfsAddress = properties.get( \"fs.defaultFS\" );\n    if ( hdfsAddress != null ) {\n      URI hdfsURL = URI.create( hdfsAddress );\n      nc.setHdfsHost( hdfsURL.getHost() );\n      nc.setHdfsPort( hdfsURL.getPort() != -1 ? 
hdfsURL.getPort() + \"\" : \"\" );\n      isConfigurationSet = true;\n    }\n\n    /*\n     * Address taken from\n     * yarn.resourcemanager.address\n     * in\n     * yarn-site.xml\n     *\n     * If address not available\n     * Hostname taken from\n     * yarn.resourcemanager.hostname\n     * in\n     * yarn-site.xml\n     * */\n    String jobTrackerAddress = properties.get( \"yarn.resourcemanager.address\" );\n    String jobTrackerHostname = properties.get( \"yarn.resourcemanager.hostname\" );\n    if ( jobTrackerAddress != null ) {\n      Map<String, String> hostAndPort = extractHostAndPort( jobTrackerAddress );\n      nc.setJobTrackerHost( hostAndPort.get( \"host\" ) );\n      nc.setJobTrackerPort( hostAndPort.get( \"port\" ) );\n      isConfigurationSet = true;\n    } else if ( jobTrackerHostname != null ) {\n      nc.setJobTrackerHost( jobTrackerHostname );\n      isConfigurationSet = true;\n    }\n\n    /*\n     * Address and port taken from\n     * hive.zookeeper.quorum\n     * hive.zookeeper.client.port\n     * in\n     * hive-site.xml\n     * */\n    String zooKeeperAddress = properties.get( \"hive.zookeeper.quorum\" );\n    String zooKeeperPort = properties.get( \"hive.zookeeper.client.port\" );\n\n    if ( zooKeeperAddress != null ) {\n      List<String> addresses = Arrays.asList( zooKeeperAddress.split( \",\" ) );\n      List<String> hostNames = addresses.stream().map( address ->\n              extractHostAndPort( address ).get( \"host\" ) ).collect( Collectors.toList() );\n      zooKeeperAddress = String.join( \",\", hostNames );\n    }\n\n    if ( zooKeeperAddress != null && zooKeeperPort != null ) {\n      nc.setZooKeeperHost( zooKeeperAddress );\n      nc.setZooKeeperPort( zooKeeperPort );\n      isConfigurationSet = true;\n    }\n\n    /*\n     * Address and port taken from\n     * oozie.base.url\n     * in\n     * oozie-site.xml\n     * if it does not exist then it is taken from\n     * oozie-default.xml\n     * */\n    String oozieAddress = 
properties.get( oozieBaseUrl );\n    if ( oozieAddress != null ) {\n      nc.setOozieUrl( oozieAddress );\n      isConfigurationSet = true;\n    }\n\n    return isConfigurationSet;\n  }\n\n  private void resolveShimIdentifier( NamedCluster nc ) {\n    String shimIdentifier = getShimIdentifier();\n    if( shimIdentifier != null ) {\n      nc.setShimIdentifier( shimIdentifier );\n    }\n  }\n\n  private void extractProperties( Map<String, CachedFileItemStream> siteFilesSource, String fileName,\n                                  Map<String, String> properties,\n                                  String[] keys ) {\n    CachedFileItemStream siteFile = siteFilesSource.get( fileName );\n\n    if ( siteFile != null ) {\n      Document document = parseSiteFileDocument( siteFile );\n      if ( document != null ) {\n        XPathFactory xpathFactory = XPathFactory.newInstance();\n        XPath xpath = xpathFactory.newXPath();\n        for ( String key : keys ) {\n          try {\n            XPathExpression expr =\n              xpath.compile( \"/configuration/property[name[starts-with(.,'\" + key + \"')]]/value/text()\" );\n            NodeList nodes = (NodeList) expr.evaluate( document, XPathConstants.NODESET );\n            if ( nodes.getLength() > 0 ) {\n              properties.put( key, nodes.item( 0 ).getNodeValue() );\n            }\n          } catch ( XPathExpressionException e ) {\n            log.logMinimal( e.getMessage() );\n          }\n        }\n      }\n    }\n  }\n\n  public JSONObject installDriver( FileItemInput driver ) {\n    boolean success = false;\n    if ( driver != null ) {\n      String destination = Const.getShimDriverDeploymentLocation();\n\n      try ( final InputStream driverStream = driver.getInputStream() ) {\n        FileUtils.copyInputStreamToFile( driverStream,\n          new File( destination + fileSeparator + driver.getFieldName() ) );\n        success = true;\n      } catch ( IOException e ) {\n        log.logError( e.getMessage() );\n  
    }\n    }\n    JSONObject response = new JSONObject();\n    response.put( INSTALLED, success );\n    return response;\n  }\n\n  /**\n   * Get Intersection of siteFiles\n   *\n   * @param model\n   * @param existingSiteFiles\n   * @return a list of siteFiles at the intersection of siteFiles in the model and the existingSiteFiles\n   */\n  private List<NamedClusterSiteFile> getIntersectionSiteFiles( ThinNameClusterModel model,\n                                                               List<NamedClusterSiteFile> existingSiteFiles ) {\n    List<String> newSiteFileNames =\n      Optional.ofNullable( model.getSiteFiles() )\n        .map( Collection::stream )\n        .orElseGet( Stream::empty )\n        .map( SimpleImmutableEntry::getValue )\n        .collect( Collectors.toList() );\n\n    return existingSiteFiles.stream()\n      .filter( siteFile -> newSiteFileNames.contains( siteFile.getSiteFileName() ) )\n      .collect( Collectors.toList() );\n  }\n\n  private void installSiteFiles( Map<String, CachedFileItemStream> siteFileSource, NamedCluster nc )\n    throws IOException {\n    for ( Map.Entry<String, CachedFileItemStream> siteFile : siteFileSource.entrySet() ) {\n      String name = siteFile.getValue().getFieldName();\n      if ( isValidConfigurationFile( name ) ) {\n        if ( name.equals( KEYTAB_AUTH_FILE ) || name.equals( KEYTAB_IMPL_FILE ) || !name.endsWith( \"-site.xml\" ) ) {\n          name = extractFileNameFromFullPath( siteFile.getValue().getName() );\n          addFileToConfigFolder( siteFile.getValue().getCachedOutputStream(), name, nc );\n        } else {\n          addFileToNamedClusterSiteFiles( siteFile, name, nc );\n        }\n      }\n    }\n  }\n\n  private void addFileToConfigFolder( ByteArrayOutputStream outputStream, String fileName, NamedCluster nc )\n    throws IOException {\n    File destination = new File(\n      getNamedClusterConfigsRootDir() + fileSeparator + nc.getName() + fileSeparator + fileName );\n    
destination.getParentFile().mkdirs();\n    try ( OutputStream fos = new FileOutputStream( destination ) ) {\n      outputStream.writeTo( fos );\n    }\n  }\n\n  private void addFileToNamedClusterSiteFiles( Map.Entry<String, CachedFileItemStream> cachedFileItemStreamMapEntry,\n                                               String fileName, NamedCluster nc )\n    throws IOException {\n    InputStream inputStream = cachedFileItemStreamMapEntry.getValue().getCachedInputStream();\n    InputStreamReader isReader = new InputStreamReader( inputStream );\n    BufferedReader reader = new BufferedReader( isReader );\n    StringBuilder sb = new StringBuilder();\n    String str;\n    while ( ( str = reader.readLine() ) != null ) {\n      sb.append( str );\n    }\n    //skip placeholder site file contents because the site file content is in the NamedCluster already\n    if ( !sb.toString().equals( PLACEHOLDER_VALUE ) ) {\n      boolean nameExists = false;\n      //replace the contents if the name exists\n      for ( NamedClusterSiteFile siteFile : nc.getSiteFiles() ) {\n        if ( siteFile.getSiteFileName().equals( fileName ) ) {\n          siteFile.setSiteFileContents( sb.toString() );\n          nameExists = true;\n          break;\n        }\n      }\n      //Add the file if the name didn't exist\n      if ( !nameExists ) {\n        nc.addSiteFile(\n          new NamedClusterSiteFileImpl( fileName, cachedFileItemStreamMapEntry.getValue().getLastModified(),\n            sb.toString() ) );\n      }\n    }\n  }\n\n  public boolean isValidConfigurationFile( String fileName ) {\n    return fileName != null && ( fileName.endsWith( \"-site.xml\" ) || fileName.endsWith( \"-default.xml\" )\n      || fileName.equals( CONFIG_PROPERTIES ) || fileName.equals( KEYTAB_AUTH_FILE )\n      || fileName.equals( KEYTAB_IMPL_FILE ) || fileName.equals( \"data\" ) );\n  }\n\n  private Document parseSiteFileDocument( CachedFileItemStream file ) {\n    Document document = null;\n    try {\n      
document = XMLHandler.loadXMLFile( file.getCachedInputStream() );\n    } catch ( KettleXMLException e ) {\n      log.logMinimal( String.format( \"Site file %s is not a well formed XML document\", file.getName() ) );\n    }\n    return document;\n  }\n\n  private void createConfigProperties( NamedCluster namedCluster ) throws IOException {\n    Path clusterConfigDirPath = Paths.get( getNamedClusterConfigsRootDir() + fileSeparator + namedCluster.getName() );\n    Path\n      configPropertiesPath =\n      Paths.get(\n        getNamedClusterConfigsRootDir() + fileSeparator + namedCluster.getName() + fileSeparator + CONFIG_PROPERTIES );\n    Files.createDirectories( clusterConfigDirPath );\n    String sampleConfigProperties = namedCluster.getShimIdentifier() + \"sampleconfig.properties\";\n    InputStream\n      inputStream =\n      HadoopClusterDelegateImpl.class.getClassLoader().getResourceAsStream( sampleConfigProperties );\n    if ( inputStream != null ) {\n      Files.copy( inputStream, configPropertiesPath, StandardCopyOption.REPLACE_EXISTING );\n    }\n  }\n\n  private void setupKerberosSecurity( ThinNameClusterModel model, Map<String, CachedFileItemStream> siteFilesSource,\n                                      String keytabAuthenticationLocation, String keytabImpersonationLocation ) {\n    Path\n      configPropertiesPath =\n      Paths\n        .get( getNamedClusterConfigsRootDir() + fileSeparator + model.getName() + fileSeparator + CONFIG_PROPERTIES );\n\n    String securityType = model.getSecurityType();\n    if ( !StringUtil.isEmpty( securityType ) ) {\n      resetKerberosSecurity( configPropertiesPath );\n      if ( securityType.equals( SECURITY_TYPE.KERBEROS.getValue() ) ) {\n        String kerberosSubType = model.getKerberosSubType();\n        if ( kerberosSubType.equals( KERBEROS_SUBTYPE.PASSWORD.getValue() ) ) {\n          setupKerberosPasswordSecurity( configPropertiesPath, model );\n        }\n        if ( kerberosSubType.equals( 
KERBEROS_SUBTYPE.KEYTAB.getValue() ) ) {\n          setupKeytabSecurity( model, configPropertiesPath, siteFilesSource, keytabAuthenticationLocation,\n            keytabImpersonationLocation );\n        }\n      }\n    }\n  }\n\n  private void resetKerberosSecurity( Path configPropertiesPath ) {\n    try {\n      PropertiesConfiguration config = new PropertiesConfiguration( configPropertiesPath.toFile() );\n      config.setProperty( KEYTAB_AUTHENTICATION_LOCATION, \"\" );\n      config.setProperty( KEYTAB_IMPERSONATION_LOCATION, \"\" );\n      config.setProperty( IMPERSONATION, IMPERSONATION_TYPE.DISABLED.getValue() );\n      config.setProperty( KERBEROS_AUTHENTICATION_USERNAME, \"\" );\n      config.setProperty( KERBEROS_AUTHENTICATION_PASS, \"\" );\n      config.setProperty( KERBEROS_IMPERSONATION_USERNAME, \"\" );\n      config.setProperty( KERBEROS_IMPERSONATION_PASS, \"\" );\n      config.save();\n    } catch ( ConfigurationException e ) {\n      log.logMinimal( e.getMessage() );\n    }\n  }\n\n  private void retrieveKerberosSecurity( ThinNameClusterModel model, NamedCluster nc ) {\n    try {\n      String endpointURL = NamedClusterHelper.getEndpointURL( \"getNamedCluster\" );\n      endpointURL = endpointURL + \"&namedCluster=\" + nc.getName();\n      String result = doGet( endpointURL );\n      JSONObject jsonObject = (JSONObject) new JSONParser().parse( result );\n      String securityType = (String) jsonObject.get( \"securityType\" );\n      String kerberosSubType = (String) jsonObject.get( \"kerberosSubType\" );\n      String kerberosAuthenticationUsername = (String) jsonObject.get( \"kerberosAuthenticationUsername\" );\n      String kerberosAuthenticationPassword = (String) jsonObject.get( \"kerberosAuthenticationPassword\" );\n      String kerberosImpersonationUsername = (String) jsonObject.get( \"kerberosImpersonationUsername\" );\n      String kerberosImpersonationPassword = (String) jsonObject.get( \"kerberosImpersonationPassword\" );\n      String 
keytabAuthFile = (String) jsonObject.get( \"keytabAuthFile\" );\n      String keytabImpFile = (String) jsonObject.get( \"keytabImpFile\" );\n\n      model.setSecurityType( securityType );\n      model.setKerberosSubType( kerberosSubType );\n      model.setKerberosAuthenticationUsername( kerberosAuthenticationUsername );\n      model.setKerberosAuthenticationPassword( kerberosAuthenticationPassword );\n      model.setKerberosImpersonationUsername( kerberosImpersonationUsername );\n      model.setKerberosImpersonationPassword( kerberosImpersonationPassword );\n      model.setKeytabAuthFile( keytabAuthFile );\n      model.setKeytabImpFile( keytabImpFile );\n    } catch ( ParseException e ) {\n      log.logError( e.getMessage() );\n    }\n  }\n\n  private void resolveKerberosSecurity( ThinNameClusterModel model, NamedCluster nc ) {\n    if ( NamedClusterHelper.isConnectedToRepo() ) {\n      retrieveKerberosSecurity( model, nc );\n    } else {\n      try {\n        String configFile =\n          getNamedClusterConfigsRootDir() + fileSeparator + nc.getName() + fileSeparator + CONFIG_PROPERTIES;\n        PropertiesConfiguration config = new PropertiesConfiguration( new File( configFile ) );\n        model.setKerberosAuthenticationUsername( (String) config.getProperty( KERBEROS_AUTHENTICATION_USERNAME ) );\n        model.setKerberosAuthenticationPassword( (String) config.getProperty( KERBEROS_AUTHENTICATION_PASS ) );\n        model.setKerberosImpersonationUsername( (String) config.getProperty( KERBEROS_IMPERSONATION_USERNAME ) );\n        model.setKerberosImpersonationPassword( (String) config.getProperty( KERBEROS_IMPERSONATION_PASS ) );\n        String keytabAuthenticationLocation = (String) config.getProperty( KEYTAB_AUTHENTICATION_LOCATION );\n        String keytabImpersonationLocation = (String) config.getProperty( KEYTAB_IMPERSONATION_LOCATION );\n\n        // Resolve the keytab authentication and impersonation files, if set, so they can be displayed in the UI.\n        if ( !StringUtil.isEmpty( 
keytabAuthenticationLocation ) ) {\n          model.setKeytabAuthFile( keytabAuthenticationLocation );\n        }\n        if ( !StringUtil.isEmpty( keytabImpersonationLocation ) ) {\n          model.setKeytabImpFile( keytabImpersonationLocation );\n        }\n\n        // If all of the Kerberos security properties are empty, the security type is None; if at least one of them\n        // has a value, the security type is Kerberos.\n        if ( StringUtil.isEmpty( keytabAuthenticationLocation )\n          && StringUtil.isEmpty( keytabImpersonationLocation )\n          && StringUtil.isEmpty( model.getKerberosAuthenticationPassword() )\n          && StringUtil.isEmpty( model.getKerberosImpersonationPassword() ) ) {\n          model.setSecurityType( SECURITY_TYPE.NONE.getValue() );\n        } else {\n          model.setSecurityType( SECURITY_TYPE.KERBEROS.getValue() );\n        }\n\n        // If both the keytab authentication location and the keytab impersonation location are empty, the Kerberos\n        // sub type is Password; otherwise it is Keytab.\n        if ( StringUtil.isEmpty( keytabAuthenticationLocation )\n          && StringUtil.isEmpty( keytabImpersonationLocation ) ) {\n          model.setKerberosSubType( KERBEROS_SUBTYPE.PASSWORD.getValue() );\n        } else {\n          model.setKerberosSubType( KERBEROS_SUBTYPE.KEYTAB.getValue() );\n        }\n      } catch ( ConfigurationException e ) {\n        log.logError( e.getMessage() );\n      }\n    }\n  }\n\n  private void setupKerberosPasswordSecurity( Path configPropertiesPath, ThinNameClusterModel model ) {\n    try {\n      PropertiesConfiguration config = new PropertiesConfiguration( configPropertiesPath.toFile() );\n      config.setProperty( KERBEROS_AUTHENTICATION_USERNAME, model.getKerberosAuthenticationUsername() );\n      if ( !StringUtil.isEmpty( model.getKerberosAuthenticationPassword() ) ) {\n        config.setProperty( KERBEROS_AUTHENTICATION_PASS,\n          encodePassword( 
model.getKerberosAuthenticationPassword() ) );\n      } else {\n        config.setProperty( KERBEROS_AUTHENTICATION_PASS, \"\" );\n      }\n      config.setProperty( KERBEROS_IMPERSONATION_USERNAME, model.getKerberosImpersonationUsername() );\n      if ( !StringUtil.isEmpty( model.getKerberosImpersonationPassword() ) ) {\n        config.setProperty( KERBEROS_IMPERSONATION_PASS,\n          encodePassword( model.getKerberosImpersonationPassword() ) );\n      } else {\n        config.setProperty( KERBEROS_IMPERSONATION_PASS, \"\" );\n      }\n      if ( ( !StringUtil.isEmpty( model.getKerberosImpersonationUsername() )\n        && !StringUtil.isEmpty( model.getKerberosImpersonationPassword() ) )\n        || ( !StringUtil.isEmpty( model.getKerberosAuthenticationUsername() )\n        && !StringUtil.isEmpty( model.getKerberosAuthenticationPassword() ) ) ) {\n        config.setProperty( IMPERSONATION, IMPERSONATION_TYPE.SIMPLE.getValue() );\n      } else {\n        config.setProperty( IMPERSONATION, IMPERSONATION_TYPE.DISABLED.getValue() );\n      }\n      config.save();\n    } catch ( ConfigurationException e ) {\n      log.logMinimal( e.getMessage() );\n    }\n  }\n\n  private String extractFileNameFromFullPath( String fileName ) {\n    /*\n     * This method is necessary because of a difference in upload behavior between Linux and Windows.\n     * On Linux the uploaded file is provided with its name only.\n     * On Windows the uploaded file is provided with its full path, and we only need the name.\n     */\n    int lastIndex = fileName.lastIndexOf( '/' ) != -1 ? fileName.lastIndexOf( '/' ) : fileName.lastIndexOf( '\\\\' );\n    lastIndex = lastIndex == -1 ? 
0 : lastIndex + 1;\n    fileName = fileName.substring( lastIndex );\n    return fileName;\n  }\n\n  private void setupKeytabSecurity( ThinNameClusterModel model, Path configPropertiesPath,\n                                    Map<String, CachedFileItemStream> siteFilesSource,\n                                    String keytabAuthenticationLocation, String keytabImpersonationLocation ) {\n    String namedClusterName = model.getName();\n    CachedFileItemStream keytabImpFile = siteFilesSource.get( KEYTAB_IMPL_FILE );\n\n    // Process the keytabAuthenticationLocation in case the Named Cluster name changed.\n    // If it didn't then the resulting value should be the same.\n    // Required for deleting orphaned keytab files.\n    if ( !keytabAuthenticationLocation.isEmpty() ) {\n      String name = extractFileNameFromFullPath( keytabAuthenticationLocation );\n      keytabAuthenticationLocation =\n        getNamedClusterConfigsRootDir() + fileSeparator + namedClusterName + fileSeparator + name;\n    }\n    // Process the keytabImpersonationLocation in case the Named Cluster name changed.\n    // If it didn't then the resulting value should be the same.\n    if ( !keytabImpersonationLocation.isEmpty() ) {\n      String name = extractFileNameFromFullPath( keytabImpersonationLocation );\n      keytabImpersonationLocation =\n        getNamedClusterConfigsRootDir() + fileSeparator + namedClusterName + fileSeparator + name;\n    }\n\n    String authenticationLocation = keytabAuthenticationLocation;\n    String impersonationLocation = keytabImpersonationLocation;\n\n    if ( model.getKeytabAuthFile() != null && !model.getKeytabAuthFile().isEmpty() ) {\n      String name = extractFileNameFromFullPath( model.getKeytabAuthFile() );\n      authenticationLocation =\n        getNamedClusterConfigsRootDir() + fileSeparator + namedClusterName + fileSeparator + name;\n    }\n\n    if ( model.getKeytabImpFile() != null && !model.getKeytabImpFile().isEmpty() ) {\n      String name = 
extractFileNameFromFullPath( model.getKeytabImpFile() );\n      impersonationLocation =\n        getNamedClusterConfigsRootDir() + fileSeparator + namedClusterName + fileSeparator + name;\n    }\n\n    try {\n      PropertiesConfiguration config = new PropertiesConfiguration( configPropertiesPath.toFile() );\n      // Authentication\n      config.setProperty( KERBEROS_AUTHENTICATION_USERNAME, model.getKerberosAuthenticationUsername() );\n      if ( !StringUtil.isEmpty( authenticationLocation ) ) {\n        config.setProperty( KEYTAB_AUTHENTICATION_LOCATION, authenticationLocation );\n      }\n\n      // Impersonation\n      config.setProperty( KERBEROS_IMPERSONATION_USERNAME, model.getKerberosImpersonationUsername() );\n      if ( keytabImpFile == null && StringUtil.isEmpty( model.getKeytabImpFile() ) ) {\n        config.setProperty( KEYTAB_IMPERSONATION_LOCATION, \"\" );\n      } else if ( !StringUtil.isEmpty( impersonationLocation ) ) {\n        config.setProperty( KEYTAB_IMPERSONATION_LOCATION, impersonationLocation );\n      }\n\n      // If the keytabAuthFile is no longer used, delete it.\n      if ( !keytabAuthenticationLocation.isEmpty() ) {\n        if ( !keytabAuthenticationLocation.equals( authenticationLocation ) ) {\n          if ( !keytabAuthenticationLocation.equals( impersonationLocation ) ) {\n            File toDelete = new File( keytabAuthenticationLocation );\n            if ( toDelete.exists() ) {\n              toDelete.delete();\n            }\n          }\n        }\n      }\n\n      // If the keytabImpFile is no longer used, delete it.\n      if ( !keytabImpersonationLocation.isEmpty() ) {\n        if ( !keytabImpersonationLocation.equals( impersonationLocation ) || StringUtil.isEmpty( model.getKeytabImpFile() ) ) {\n          if ( !keytabImpersonationLocation.equals( authenticationLocation ) ) {\n            File toDelete = new File( keytabImpersonationLocation );\n            if ( toDelete.exists() ) {\n              toDelete.delete();\n            
}\n          }\n        }\n      }\n\n      if ( !StringUtil.isEmpty( (String) config.getProperty( KEYTAB_AUTHENTICATION_LOCATION ) )\n        || !StringUtil.isEmpty( (String) config.getProperty( KEYTAB_IMPERSONATION_LOCATION ) ) ) {\n        config.setProperty( IMPERSONATION, IMPERSONATION_TYPE.SIMPLE.getValue() );\n      } else {\n        config.setProperty( IMPERSONATION, IMPERSONATION_TYPE.DISABLED.getValue() );\n      }\n\n      config.save();\n    } catch ( ConfigurationException e ) {\n      log.logMinimal( e.getMessage() );\n    }\n  }\n\n  private void setupKnoxSecurity( NamedCluster nc, ThinNameClusterModel model ) {\n    if ( model.getSecurityType() != null && model.getSecurityType().equals( SECURITY_TYPE.KNOX.getValue() ) ) {\n      String userName = model.getGatewayUsername();\n      String url = model.getGatewayUrl();\n      String password = model.getGatewayPassword();\n      nc.setGatewayPassword( encodePassword( password ) );\n      nc.setGatewayUrl( encodePassword( url ) );\n      nc.setGatewayUsername( userName );\n      nc.setUseGateway(\n        !StringUtil.isEmpty( userName ) && !StringUtil.isEmpty( url ) && !StringUtil.isEmpty( password ) );\n    }\n  }\n\n  /*\n   * Extract Hostname and Port from a hostname/URL pattern\n   */\n  private Map<String, String> extractHostAndPort( String urlPattern ) {\n    final String HTTP_PATTERN = \"http://\";\n    if ( !urlPattern.startsWith( HTTP_PATTERN ) ) {\n      urlPattern = HTTP_PATTERN + urlPattern;\n    }\n    URI parsedURI = URI.create( urlPattern );\n    Map<String, String> map = new HashMap<>();\n    map.put( \"host\", parsedURI.getHost() );\n    map.put( \"port\", parsedURI.getPort() != -1 ? 
parsedURI.getPort() + \"\" : \"\" );\n    return map;\n  }\n\n  public void deleteNamedCluster( IMetaStore metaStore, String namedCluster, boolean refreshTree ) {\n    try {\n      if ( namedClusterService.read( namedCluster, metaStore ) != null ) {\n        namedClusterService.delete( namedCluster, metaStore );\n        if ( isConnectedToRepo() ) {\n          String endpointURL = NamedClusterHelper.getEndpointURL( \"deleteNamedCluster\" );\n          endpointURL = endpointURL + \"&namedCluster=\" + namedCluster;\n          doGet( endpointURL );\n        } else {\n          deleteConfigFolder( namedCluster );\n        }\n      }\n      if ( refreshTree ) {\n        refreshTree();\n      }\n    } catch ( Exception e ) {\n      log.logMinimal( e.getMessage() );\n    }\n  }\n\n  public String getShimIdentifier() {\n    String shimIdentifier = null;\n    if ( isConnectedToRepo() ) {\n      String endpointURL = NamedClusterHelper.getEndpointURL( \"getShimIdentifier\" );\n      shimIdentifier = doGet( endpointURL );\n    } else {\n      shimIdentifier = BigDataServicesHelper.getShimIdentifier();\n    }\n    return shimIdentifier;\n  }\n\n  public NamedCluster getNamedClusterByName( String namedCluster) {\n    return namedClusterService.getNamedClusterByName( namedCluster, this.metaStore );\n  }\n\n  public Object runTests( RuntimeTester runtimeTester, String namedCluster ) {\n    NamedCluster nc = namedClusterService.getNamedClusterByName( namedCluster, this.metaStore );\n    if ( nc != null ) {\n      try {\n        if ( runtimeTester != null ) {\n          runtimeTestStatus = null;\n          runtimeTester.runtimeTest( nc, this );\n          synchronized ( this ) {\n            while ( runtimeTestStatus == null ) {\n              wait();\n            }\n          }\n        }\n      } catch ( Exception e ) {\n        log.logMinimal( e.getLocalizedMessage() );\n      }\n      return produceTestCategories( runtimeTestStatus, nc );\n    } else {\n      return \"[]\";\n    
}\n  }\n\n  public Object[] produceTestCategories( RuntimeTestStatus runtimeTestStatus, NamedCluster nc ) {\n    LinkedHashMap<String, TestCategory> categories = new LinkedHashMap<>();\n    if ( NamedClusterHelper.isConnectedToRepo() ) {\n      String endpointURL = NamedClusterHelper.getEndpointURL( \"runTests\" );\n      endpointURL = endpointURL + \"&namedCluster=\" + nc.getName();\n      String result = doGet( endpointURL );\n      ObjectMapper mapper = new ObjectMapper();\n      try {\n        return mapper.readValue( result, TestCategory[].class );\n      } catch ( Exception e ) {\n        log.logError( e.getMessage() );\n      }\n    } else {\n      categories.put( HADOOP_FILE_SYSTEM, new TestCategory( \"Hadoop file system\" ) );\n      categories.put( ZOOKEEPER, new TestCategory( \"Zookeeper connection\" ) );\n      categories.put( MAP_REDUCE, new TestCategory( \"Job tracker / resource manager\" ) );\n      categories.put( OOZIE, new TestCategory( \"Oozie host connection\" ) );\n      categories.put( KAFKA, new TestCategory( \"Kafka connection\" ) );\n\n      if ( runtimeTestStatus != null && nc != null ) {\n        for ( RuntimeTestModuleResults moduleResults : runtimeTestStatus.getModuleResults() ) {\n          for ( RuntimeTestResult testResult : moduleResults.getRuntimeTestResults() ) {\n            RuntimeTest runtimeTest = testResult.getRuntimeTest();\n            String name = runtimeTest.getName();\n            String status = getTestStatus( testResult.getOverallStatusEntry() );\n            String module = runtimeTest.getModule();\n            Category category = categories.get( module );\n            category.setCategoryActive( true );\n\n            if ( module.equals( HADOOP_FILE_SYSTEM ) ) {\n              Test test = new Test( name );\n              test.setTestStatus( status );\n              test.setTestActive( true );\n              category.addTest( test );\n              configureHadoopFileSystemTestCategory( category, !StringUtil.isEmpty( 
nc.getHdfsHost() ), status );\n            } else if ( module.equals( OOZIE ) ) {\n              configureTestCategories( category, !StringUtil.isEmpty( nc.getOozieUrl() ), status );\n            } else if ( module.equals( KAFKA ) ) {\n              configureTestCategories( category, !StringUtil.isEmpty( nc.getKafkaBootstrapServers() ), status );\n            } else if ( module.equals( ZOOKEEPER ) ) {\n              configureTestCategories( category, !StringUtil.isEmpty( nc.getZooKeeperHost() ), status );\n            } else if ( module.equals( MAP_REDUCE ) ) {\n              configureTestCategories( category, !StringUtil.isEmpty( nc.getJobTrackerHost() ), status );\n            }\n          }\n        }\n      }\n    }\n    return categories.values().toArray();\n  }\n\n  private void configureHadoopFileSystemTestCategory( Category category, boolean isActive, String status ) {\n    category.setCategoryActive( isActive );\n    if ( category.isCategoryActive() ) {\n      String currentStatus = category.getCategoryStatus();\n      if ( status.equals( FAIL ) || ( status.equals( WARNING ) && !currentStatus.equals( FAIL ) ) || (\n        status.equals( PASS ) && StringUtil.isEmpty( currentStatus ) ) ) {\n        category.setCategoryStatus( status );\n      }\n    }\n  }\n\n  private void configureTestCategories( Category category, boolean isActive, String status ) {\n    category.setCategoryActive( isActive );\n    if ( category.isCategoryActive() ) {\n      category.setCategoryStatus( status );\n    }\n  }\n\n  private String getTestStatus( RuntimeTestResultEntry summary ) {\n    String status = \"\";\n    switch ( summary.getSeverity() ) {\n      case INFO:\n        status = PASS;\n        break;\n      case SKIPPED:\n        status = WARNING;\n        break;\n      case FATAL:\n        status = FAIL;\n        break;\n      case ERROR:\n        status = FAIL;\n        break;\n      case WARNING:\n        status = FAIL;\n        break;\n      default:\n        break;\n  
  }\n    return status;\n  }\n\n  public void onProgress( final RuntimeTestStatus clusterTestStatus ) {\n    synchronized ( this ) {\n      if ( clusterTestStatus.isDone() ) {\n        runtimeTestStatus = clusterTestStatus;\n        notifyAll();\n      }\n    }\n  }\n\n  @VisibleForTesting\n  void refreshTree() {\n    if ( spoon != null && spoon.getShell() != null ) {\n      spoon.getShell().getDisplay().asyncExec( () -> spoon.refreshTree( STRING_NAMED_CLUSTERS ) );\n    }\n  }\n\n  @VisibleForTesting\n  String getNamedClusterConfigsRootDir() {\n    return System.getProperty( \"user.home\" ) + File.separator + \".pentaho\" + File.separator + \"metastore\"\n      + File.separator + \"pentaho\" + File.separator + \"NamedCluster\" + File.separator + \"Configs\";\n  }\n\n  public static String doGet( String endpointURL ) {\n    String result = null;\n    try {\n      HttpGet httpGet = new HttpGet( endpointURL );\n      HttpHost targetHost = new HttpHost( httpGet.getURI().getHost(), httpGet.getURI().getPort(), httpGet.getURI().getScheme() );\n      BasicCredentialsProvider credsProvider = new BasicCredentialsProvider();\n      AuthScope authScope = new AuthScope( targetHost );\n      String userName = NamedClusterHelper.getSecurityCredentials().get( NamedClusterHelper.USERNAME );\n      String password = NamedClusterHelper.getSecurityCredentials().get( NamedClusterHelper.PASSWORD );\n      credsProvider.setCredentials( authScope, new UsernamePasswordCredentials( userName, password ) );\n      AuthCache authCache = new BasicAuthCache();\n      authCache.put( targetHost, new BasicScheme() );\n      HttpClientContext context = HttpClientContext.create();\n      context.setCredentialsProvider( credsProvider );\n      context.setAuthCache( authCache );\n      try ( CloseableHttpClient httpClient = createHttpClientAcceptingTlsIfNeeded( endpointURL ) ) {\n        try ( CloseableHttpResponse response = httpClient.execute( httpGet, context ) ) {\n          HttpEntity entity = 
response.getEntity();\n          if ( entity != null ) {\n            result = EntityUtils.toString( entity );\n          }\n        }\n      }\n    } catch ( Exception e ) {\n      log.logError( e.getMessage() );\n    }\n    return result;\n  }\n\n  private boolean doMultipartHttpPost( String endpoint, ThinNameClusterModel thinNameClusterModel, File driverFile ) throws BadSiteFilesException, IOException {\n    boolean result;\n    String endpointURL = NamedClusterHelper.getEndpointURL( endpoint );\n    HttpPost httpPost = new HttpPost( endpointURL );\n    HttpHost targetHost = new HttpHost( httpPost.getURI().getHost(), httpPost.getURI().getPort(), httpPost.getURI().getScheme() );\n    BasicCredentialsProvider credsProvider = new BasicCredentialsProvider();\n    AuthScope authScope = new AuthScope( targetHost );\n    String userName = NamedClusterHelper.getSecurityCredentials().get( NamedClusterHelper.USERNAME );\n    String password = NamedClusterHelper.getSecurityCredentials().get( NamedClusterHelper.PASSWORD );\n    credsProvider.setCredentials( authScope, new UsernamePasswordCredentials( userName, password ) );\n    AuthCache authCache = new BasicAuthCache();\n    authCache.put( targetHost, new BasicScheme() );\n    HttpClientContext context = HttpClientContext.create();\n    context.setCredentialsProvider( credsProvider );\n    context.setAuthCache( authCache );\n    MultipartEntityBuilder builder = MultipartEntityBuilder.create();\n    if ( driverFile != null ) {\n      builder.addBinaryBody(\n        driverFile.getName(),\n        driverFile,\n        ContentType.APPLICATION_OCTET_STREAM,\n        driverFile.getName()\n      );\n    } else {\n      Map<String, CachedFileItemStream>  siteFileSource = NamedClusterHelper.processSiteFiles( thinNameClusterModel, this );\n      for ( Map.Entry<String, CachedFileItemStream> siteFile : siteFileSource.entrySet() ) {\n        String name = siteFile.getValue().getFieldName();\n        if ( isValidConfigurationFile( 
name ) ) {\n          if ( name.equals( KEYTAB_AUTH_FILE ) || name.equals( KEYTAB_IMPL_FILE ) || !name.endsWith( \"-site.xml\" ) ) {\n            builder.addBinaryBody(\n              name,\n              siteFile.getValue().getCachedInputStream(),\n              ContentType.APPLICATION_OCTET_STREAM,\n              siteFile.getValue().getName()\n            );\n          } else {\n            builder.addBinaryBody(\n              siteFile.getValue().getName(),\n              siteFile.getValue().getCachedInputStream(),\n              ContentType.APPLICATION_OCTET_STREAM,\n              siteFile.getValue().getName()\n            );\n          }\n        }\n      }\n    }\n    if ( thinNameClusterModel != null ) {\n      ObjectMapper mapper = new ObjectMapper();\n      String json = mapper.writeValueAsString( thinNameClusterModel );\n      builder.addTextBody( \"data\", json, ContentType.APPLICATION_JSON );\n    }\n\n    // Use an HTTP client that can accept TLS or load certificates when needed.\n    try {\n      CloseableHttpClient httpClient = createHttpClientAcceptingTlsIfNeeded( endpointURL );\n      try ( CloseableHttpClient client = httpClient ) {\n        HttpEntity multipart = builder.build();\n        httpPost.setEntity( multipart );\n        try ( CloseableHttpResponse response = client.execute( httpPost, context ) ) {\n          result = response.getStatusLine().getStatusCode() == 200;\n        }\n      }\n    } catch ( Exception e ) {\n      throw new IOException( e );\n    }\n    return result;\n  }\n\n  private static CloseableHttpClient createHttpClientAcceptingTlsIfNeeded( String endpointURL ) throws Exception {\n    if ( endpointURL != null && endpointURL.toLowerCase().startsWith( \"https\" ) ) {\n      // Create an SSLContext that trusts all certificates (useful for self-signed certs).\n      // Warning: this disables certificate validation and hostname verification — use only if you understand the risks.\n      javax.net.ssl.TrustManager[] trustAll 
= new javax.net.ssl.TrustManager[] {\n        new javax.net.ssl.X509TrustManager() {\n          public java.security.cert.X509Certificate[] getAcceptedIssuers() { return new java.security.cert.X509Certificate[0]; }\n          public void checkClientTrusted( java.security.cert.X509Certificate[] certs, String authType ) { }\n          public void checkServerTrusted( java.security.cert.X509Certificate[] certs, String authType ) { }\n        }\n      };\n      javax.net.ssl.SSLContext sslContext = javax.net.ssl.SSLContext.getInstance( \"TLS\" );\n      sslContext.init( null, trustAll, new java.security.SecureRandom() );\n\n      org.apache.http.conn.ssl.SSLConnectionSocketFactory sslsf =\n        new org.apache.http.conn.ssl.SSLConnectionSocketFactory( sslContext, org.apache.http.conn.ssl.NoopHostnameVerifier.INSTANCE );\n\n      org.apache.http.config.Registry<org.apache.http.conn.socket.ConnectionSocketFactory> registry =\n        org.apache.http.config.RegistryBuilder.<org.apache.http.conn.socket.ConnectionSocketFactory>create()\n          .register( \"https\", sslsf )\n          .register( \"http\", new org.apache.http.conn.socket.PlainConnectionSocketFactory() )\n          .build();\n\n      org.apache.http.impl.conn.PoolingHttpClientConnectionManager cm =\n        new org.apache.http.impl.conn.PoolingHttpClientConnectionManager( registry );\n\n      return HttpClients.custom().setConnectionManager( cm ).build();\n    } else {\n      return HttpClients.createDefault();\n    }\n  }\n\n  public boolean processDriverFile( String driverFile, HadoopClusterManager manager ) throws Exception {\n    boolean result = false;\n    if ( NamedClusterHelper.isConnectedToRepo() ) {\n      File file = new File( driverFile );\n      if ( NamedClusterHelper.isValidUpload( file.getName(), NamedClusterHelper.FileType.DRIVER, manager ) ) {\n        result = doMultipartHttpPost( \"installDriver\", null, file );\n      }\n    } else {\n      File file = new File( driverFile );\n      
if ( NamedClusterHelper.isValidUpload( file.getName(), NamedClusterHelper.FileType.DRIVER, manager ) ) {\n        String destination = Const.getShimDriverDeploymentLocation();\n        // Open the stream only after the upload is validated, and close it when done.\n        try ( FileInputStream driverStream = new FileInputStream( file ) ) {\n          FileUtils.copyInputStreamToFile( driverStream,\n            new File( destination + File.separator + file.getName() ) );\n        }\n        result = true;\n      }\n    }\n    return result;\n  }\n\n  public void saveNewNamedCluster( ThinNameClusterModel thinNameClusterModel, String dialogState ) throws IOException, BadSiteFilesException {\n    if ( NamedClusterHelper.isConnectedToRepo() ) {\n      if ( dialogState.equals( \"new-edit\" ) ) {\n        doMultipartHttpPost( \"createNamedCluster\", thinNameClusterModel, null );\n      }\n      if ( dialogState.equals( \"import\" ) ) {\n        doMultipartHttpPost( \"importNamedCluster\", thinNameClusterModel, null );\n      }\n    } else {\n      Map<String, CachedFileItemStream> siteFiles = processSiteFiles( thinNameClusterModel, this );\n      if ( dialogState.equals( \"new-edit\" ) ) {\n        createNamedCluster( thinNameClusterModel, siteFiles );\n      }\n      if ( dialogState.equals( \"import\" ) ) {\n        importNamedCluster( thinNameClusterModel, siteFiles );\n      }\n    }\n  }\n\n  public void saveEditedNamedCluster( ThinNameClusterModel thinNameClusterModel, boolean isEditMode ) throws IOException, BadSiteFilesException {\n    if ( NamedClusterHelper.isConnectedToRepo() ) {\n      if ( isEditMode ) {\n        doMultipartHttpPost( \"editNamedCluster\", thinNameClusterModel, null );\n      } else {\n        doMultipartHttpPost( \"duplicateNamedCluster\", thinNameClusterModel, null );\n      }\n    } else {\n      Map<String, CachedFileItemStream> siteFiles = processSiteFiles( thinNameClusterModel, this );\n      editNamedCluster( thinNameClusterModel, isEditMode, siteFiles );\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/endpoints/Test.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints;\n\npublic class Test {\n\n  private String testName = \"\";\n  private String testStatus = \"\";\n  private boolean isTestActive = false;\n\n  public Test() {\n  }\n\n  public Test( String name ) {\n    setTestName( name );\n  }\n\n  public String getTestName() {\n    return testName;\n  }\n\n  public void setTestName( String testName ) {\n    this.testName = testName;\n  }\n\n  public String getTestStatus() {\n    return testStatus;\n  }\n\n  public void setTestStatus( String testStatus ) {\n    this.testStatus = testStatus;\n  }\n\n  public boolean isTestActive() {\n    return isTestActive;\n  }\n\n  public void setTestActive( boolean testActive ) {\n    isTestActive = testActive;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/endpoints/TestCategory.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\npublic class TestCategory implements Category {\n\n  private List<Test> tests = new ArrayList<>();\n  private String categoryName = \"\";\n  private String categoryStatus = \"\";\n  private boolean isCategoryActive = false;\n\n  public TestCategory() {\n  }\n\n  public TestCategory( String name ) {\n    setCategoryName( name );\n  }\n\n  public List<Test> getTests() {\n    return tests;\n  }\n\n  public String getCategoryName() {\n    return categoryName;\n  }\n\n  public void setCategoryName( String categoryName ) {\n    this.categoryName = categoryName;\n  }\n\n  public void setTests( List<Test> tests ) {\n    this.tests = tests;\n  }\n\n  public String getCategoryStatus() {\n    return categoryStatus;\n  }\n\n  public void setCategoryStatus( String categoryStatus ) {\n    this.categoryStatus = categoryStatus;\n  }\n\n  public boolean isCategoryActive() {\n    return isCategoryActive;\n  }\n\n  public void setCategoryActive( boolean categoryActive ) {\n    isCategoryActive = categoryActive;\n  }\n\n  public void addTest( Test test ) {\n    this.tests.add( test );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/lifecycle/HadoopClusterLifecycleListener.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.lifecycle;\n\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.tree.ThinHadoopClusterFolderProvider;\nimport org.pentaho.di.core.annotations.LifecyclePlugin;\nimport org.pentaho.di.core.lifecycle.LifeEventHandler;\nimport org.pentaho.di.core.lifecycle.LifecycleException;\nimport org.pentaho.di.core.lifecycle.LifecycleListener;\nimport org.pentaho.di.ui.spoon.Spoon;\n\nimport java.util.function.Supplier;\n\n@LifecyclePlugin( id = \"HadoopClusterLifecycleListener\" )\npublic class HadoopClusterLifecycleListener implements LifecycleListener {\n  private Supplier<Spoon> spoonSupplier = Spoon::getInstance;\n\n  @Override\n  public void onStart( LifeEventHandler handler ) throws LifecycleException {\n    Spoon spoon = spoonSupplier.get();\n    if ( spoon != null ) {\n      spoon.getTreeManager().addTreeProvider( Spoon.STRING_CONFIGURATIONS, new ThinHadoopClusterFolderProvider( ) );\n    }\n  }\n\n  @Override\n  public void onExit( LifeEventHandler handler ) throws LifecycleException {\n\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/model/ThinNameClusterModel.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.model;\n\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints.CachedFileItemStream;\nimport org.json.simple.JSONObject;\nimport org.json.simple.parser.JSONParser;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.LogChannelInterface;\n\nimport java.io.InputStreamReader;\nimport java.util.AbstractMap.SimpleImmutableEntry;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.stream.Collectors;\n\npublic class ThinNameClusterModel {\n  private static final LogChannelInterface log =\n    KettleLogStore.getLogChannelInterfaceFactory().create( \"ThinNameClusterModel\" );\n\n  public static final String NAME_KEY = \"name\";\n\n  private String name;\n  private String shimIdentifier;\n  private String hdfsHost;\n  private String hdfsPort;\n  private String hdfsUsername;\n  private String hdfsPassword;\n  private String jobTrackerHost;\n  private String jobTrackerPort;\n  private String zooKeeperHost;\n  private String zooKeeperPort;\n  private String oozieUrl;\n  private String kafkaBootstrapServers;\n  private String oldName;\n  private String securityType;\n  private String kerberosSubType;\n  private String kerberosAuthenticationUsername;\n  private String kerberosAuthenticationPassword;\n  private String kerberosImpersonationUsername;\n  private String kerberosImpersonationPassword;\n  private String gatewayUrl;\n  private String gatewayUsername;\n  private String gatewayPassword;\n  private String keytabAuthFile;\n  private 
String keytabImpFile;\n  private List<SimpleImmutableEntry<String, String>> siteFiles;\n\n  public void setShimIdentifier(String shimIdentifier ) {\n    this.shimIdentifier = shimIdentifier;\n  }\n\n  public String getShimIdentifier() {\n    return shimIdentifier;\n  }\n\n  public String getHdfsHost() {\n    return hdfsHost;\n  }\n\n  public void setHdfsHost( String hdfsHost ) {\n    this.hdfsHost = hdfsHost;\n  }\n\n  public String getHdfsPort() {\n    return hdfsPort;\n  }\n\n  public void setHdfsPort( String hdfsPort ) {\n    this.hdfsPort = hdfsPort;\n  }\n\n  public String getHdfsUsername() {\n    return hdfsUsername;\n  }\n\n  public void setHdfsUsername( String hdfsUsername ) {\n    this.hdfsUsername = hdfsUsername;\n  }\n\n  public String getHdfsPassword() {\n    return hdfsPassword;\n  }\n\n  public void setHdfsPassword( String hdfsPassword ) {\n    this.hdfsPassword = hdfsPassword;\n  }\n\n  public String getJobTrackerHost() {\n    return jobTrackerHost;\n  }\n\n  public void setJobTrackerHost( String jobTrackerHost ) {\n    this.jobTrackerHost = jobTrackerHost;\n  }\n\n  public String getJobTrackerPort() {\n    return jobTrackerPort;\n  }\n\n  public void setJobTrackerPort( String jobTrackerPort ) {\n    this.jobTrackerPort = jobTrackerPort;\n  }\n\n  public String getZooKeeperHost() {\n    return zooKeeperHost;\n  }\n\n  public void setZooKeeperHost( String zooKeeperHost ) {\n    this.zooKeeperHost = zooKeeperHost;\n  }\n\n  public String getZooKeeperPort() {\n    return zooKeeperPort;\n  }\n\n  public void setZooKeeperPort( String zooKeeperPort ) {\n    this.zooKeeperPort = zooKeeperPort;\n  }\n\n  public String getKafkaBootstrapServers() {\n    return kafkaBootstrapServers;\n  }\n\n  public void setKafkaBootstrapServers( String kafkaBootstrapServers ) {\n    this.kafkaBootstrapServers = kafkaBootstrapServers;\n  }\n\n  public String getName() {\n    return name;\n  }\n\n  public void setName( String name ) {\n    this.name = name;\n  }\n\n  public 
String getOozieUrl() {\n    return oozieUrl;\n  }\n\n  public void setOozieUrl( String oozieUrl ) {\n    this.oozieUrl = oozieUrl;\n  }\n\n  public String getOldName() {\n    return oldName;\n  }\n\n  public void setOldName( String oldName ) {\n    this.oldName = oldName;\n  }\n\n  public String getSecurityType() {\n    return securityType;\n  }\n\n  public void setSecurityType( String securityType ) {\n    this.securityType = securityType;\n  }\n\n  public String getKerberosSubType() {\n    return kerberosSubType;\n  }\n\n  public void setKerberosSubType( String kerberosSubType ) {\n    this.kerberosSubType = kerberosSubType;\n  }\n\n  public String getKerberosAuthenticationUsername() {\n    return kerberosAuthenticationUsername;\n  }\n\n  public void setKerberosAuthenticationUsername( String kerberosAuthenticationUsername ) {\n    this.kerberosAuthenticationUsername = kerberosAuthenticationUsername;\n  }\n\n  public String getKerberosAuthenticationPassword() {\n    return kerberosAuthenticationPassword;\n  }\n\n  public void setKerberosAuthenticationPassword( String kerberosAuthenticationPassword ) {\n    this.kerberosAuthenticationPassword = kerberosAuthenticationPassword;\n  }\n\n  public String getKerberosImpersonationUsername() {\n    return kerberosImpersonationUsername;\n  }\n\n  public void setKerberosImpersonationUsername( String kerberosImpersonationUsername ) {\n    this.kerberosImpersonationUsername = kerberosImpersonationUsername;\n  }\n\n  public String getKerberosImpersonationPassword() {\n    return kerberosImpersonationPassword;\n  }\n\n  public void setKerberosImpersonationPassword( String kerberosImpersonationPassword ) {\n    this.kerberosImpersonationPassword = kerberosImpersonationPassword;\n  }\n\n  public String getGatewayUrl() {\n    return gatewayUrl;\n  }\n\n  public void setGatewayUrl( String gatewayUrl ) {\n    this.gatewayUrl = gatewayUrl;\n  }\n\n  public String getGatewayUsername() {\n    return gatewayUsername;\n  }\n\n  public 
void setGatewayUsername( String gatewayUsername ) {\n    this.gatewayUsername = gatewayUsername;\n  }\n\n  public String getGatewayPassword() {\n    return gatewayPassword;\n  }\n\n  public void setGatewayPassword( String gatewayPassword ) {\n    this.gatewayPassword = gatewayPassword;\n  }\n\n  public String getKeytabAuthFile() {\n    return keytabAuthFile;\n  }\n\n  public void setKeytabAuthFile( String keytabAuthFile ) {\n    this.keytabAuthFile = keytabAuthFile;\n  }\n\n  public String getKeytabImpFile() {\n    return keytabImpFile;\n  }\n\n  public void setKeytabImpFile( String keytabImpFile ) {\n    this.keytabImpFile = keytabImpFile;\n  }\n\n  public List<SimpleImmutableEntry<String, String>> getSiteFiles() {\n    return siteFiles;\n  }\n\n  public void setSiteFiles( List<SimpleImmutableEntry<String, String>> siteFiles ) {\n    this.siteFiles = siteFiles;\n  }\n\n  public static ThinNameClusterModel unmarshall( Map<String, CachedFileItemStream> siteFilesSource ) {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    try {\n      final CachedFileItemStream fileItemStream = siteFilesSource.remove( \"data\" );\n\n      InputStreamReader inputStreamReader = new InputStreamReader( fileItemStream.getCachedInputStream() );\n      JSONParser parser = new JSONParser();\n      JSONObject json = (JSONObject) parser.parse( inputStreamReader );\n      model.setName( (String) json.get( \"name\" ) );\n      model.setShimIdentifier( (String) json.get( \"shimIdentifier\" ) );\n      model.setHdfsHost( (String) json.get( \"hdfsHost\" ) );\n      model.setHdfsPort( (String) json.get( \"hdfsPort\" ) );\n      model.setHdfsUsername( (String) json.get( \"hdfsUsername\" ) );\n      model.setHdfsPassword( (String) json.get( \"hdfsPassword\" ) );\n      model.setJobTrackerHost( (String) json.get( \"jobTrackerHost\" ) );\n      model.setJobTrackerPort( (String) json.get( \"jobTrackerPort\" ) );\n      model.setZooKeeperHost( (String) json.get( \"zooKeeperHost\" ) );\n   
   model.setZooKeeperPort( (String) json.get( \"zooKeeperPort\" ) );\n      model.setOozieUrl( (String) json.get( \"oozieUrl\" ) );\n      model.setKafkaBootstrapServers( (String) json.get( \"kafkaBootstrapServers\" ) );\n      model.setOldName( (String) json.get( \"oldName\" ) );\n      model.setSecurityType( (String) json.get( \"securityType\" ) );\n      model.setKerberosSubType( (String) json.get( \"kerberosSubType\" ) );\n      model.setKerberosAuthenticationUsername( (String) json.get( \"kerberosAuthenticationUsername\" ) );\n      model.setKerberosAuthenticationPassword( (String) json.get( \"kerberosAuthenticationPassword\" ) );\n      model.setKerberosImpersonationUsername( (String) json.get( \"kerberosImpersonationUsername\" ) );\n      model.setKerberosImpersonationPassword( (String) json.get( \"kerberosImpersonationPassword\" ) );\n      model.setGatewayUrl( (String) json.get( \"gatewayUrl\" ) );\n      model.setGatewayUsername( (String) json.get( \"gatewayUsername\" ) );\n      model.setGatewayPassword( (String) json.get( \"gatewayPassword\" ) );\n      model.setKeytabImpFile( (String) json.get( \"keytabImpFile\" ) );\n      model.setKeytabAuthFile( (String) json.get( \"keytabAuthFile\" ) );\n      model.setSiteFiles( siteFilesSource.keySet().stream()\n        .map( name -> new SimpleImmutableEntry<>( NAME_KEY, name ) )\n        .collect( Collectors.toList() ) );\n    } catch ( Exception e ) {\n      log.logError( e.getMessage() );\n    }\n    return model;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/tree/HadoopClusterPopupMenuExtension.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.tree;\n\nimport com.google.common.collect.ImmutableMap;\nimport org.eclipse.jface.dialogs.MessageDialog;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.widgets.Menu;\nimport org.eclipse.swt.widgets.MenuItem;\nimport org.eclipse.swt.widgets.Tree;\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.HadoopClusterDelegate;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints.HadoopClusterManager;\nimport org.pentaho.di.core.extension.ExtensionPoint;\nimport org.pentaho.di.core.extension.ExtensionPointInterface;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.ui.core.ConstUI;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.spoon.TreeSelection;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\n\nimport java.io.UnsupportedEncodingException;\nimport java.net.URLDecoder;\nimport java.net.URLEncoder;\nimport java.util.Collections;\nimport java.util.Map;\nimport java.util.function.Supplier;\n\nimport static 
org.pentaho.di.i18n.BaseMessages.getString;\n\n@ExtensionPoint( id = \"HadoopClusterPopupMenuExtension\", description = \"Creates popup menus for Hadoop clusters\",\n  extensionPointId = \"SpoonPopupMenuExtension\" )\npublic class HadoopClusterPopupMenuExtension implements ExtensionPointInterface {\n\n  private static final Class<?> PKG = HadoopClusterPopupMenuExtension.class;\n\n  public static final String IMPORT_STATE = \"import\";\n  public static final String NEW_EDIT_STATE = \"new-edit\";\n  public static final String TESTING_STATE = \"testing\";\n  public static final String DELETE_STATE = \"delete\";\n  private static final int RESULT_YES = 0;\n\n  private Supplier<Spoon> spoonSupplier = Spoon::getInstance;\n  private Menu rootMenu;\n  private Menu itemMenu;\n  private HadoopClusterDelegate hadoopClusterDelegate;\n  private NamedClusterService namedClusterService;\n  private String internalShim;\n\n  private static final Logger logChannel = LogManager.getLogger( HadoopClusterPopupMenuExtension.class );\n  private NamedCluster lastNamedCluster;\n  private RuntimeTester runtimeTester = RuntimeTesterImpl.getInstance();\n  private HadoopClusterManager hadoopClusterManager;\n\n  public HadoopClusterPopupMenuExtension() {\n\n    this.namedClusterService =  BigDataServicesHelper.getNamedClusterService();\n    this.hadoopClusterDelegate = new HadoopClusterDelegate( this.namedClusterService, runtimeTester );\n    this.internalShim = \"\";\n    this.hadoopClusterManager =\n      new HadoopClusterManager( spoonSupplier.get(), namedClusterService, spoonSupplier.get().getMetaStore(), internalShim );\n  }\n\n  public HadoopClusterPopupMenuExtension( HadoopClusterDelegate hadoopClusterDelegate,\n                                          NamedClusterService namedClusterService, String internalShim ) {\n    this.hadoopClusterDelegate = hadoopClusterDelegate;\n    this.namedClusterService = namedClusterService;\n    this.internalShim = internalShim;\n  }\n\n  public void 
callExtensionPoint( LogChannelInterface log, Object extension ) {\n    final Tree selectionTree = (Tree) extension;\n    createNewPopupMenu( selectionTree );\n  }\n\n  private void createNewPopupMenu( final Tree selectionTree ) {\n\n    Menu popupMenu = null;\n\n    TreeSelection[] objects = spoonSupplier.get().getTreeObjects( selectionTree );\n    if ( objects.length != 1 ) {\n      return;\n    }\n\n    TreeSelection object = objects[ 0 ];\n    Object selection = object.getSelection();\n\n    if ( selection instanceof Class<?> && selection.equals( NamedCluster.class ) ) {\n      popupMenu = createRootPopupMenu( selectionTree );\n    } else if ( selection instanceof NamedCluster ) {\n      popupMenu = createMaintPopupMenu( selectionTree, (NamedCluster) selection );\n    }\n\n    if ( popupMenu != null ) {\n      ConstUI.displayMenu( popupMenu, selectionTree );\n    } else {\n      selectionTree.setMenu( null );\n    }\n  }\n\n  private Menu createRootPopupMenu( final Tree tree ) {\n    if ( !showAdminFunctions() ) {\n      return null;\n    }\n    if ( rootMenu == null ) {\n      rootMenu = new Menu( tree );\n      createPopupMenuItem( rootMenu, getString( PKG, \"HadoopClusterPopupMenuExtension.MenuItem.New\" ),\n        NEW_EDIT_STATE );\n      createPopupMenuItem( rootMenu, getString( PKG, \"HadoopClusterPopupMenuExtension.MenuItem.Import\" ),\n        IMPORT_STATE );\n    }\n    return rootMenu;\n  }\n\n  public Menu createMaintPopupMenu( final Tree selectionTree, NamedCluster namedCluster ) {\n    // don't create another menu if the current one is for this namedCluster,\n    // otherwise we can see extra pop-up menus.\n    if ( itemMenu == null || !namedCluster.equals( this.lastNamedCluster ) ) {\n      this.lastNamedCluster = namedCluster;\n      itemMenu = new Menu( selectionTree );\n      try {\n        String name = URLEncoder.encode( namedCluster.getName(), \"UTF-8\" );\n\n        if ( showAdminFunctions() ) {\n          createPopupMenuItem( itemMenu, 
getString( PKG, \"HadoopClusterPopupMenuExtension.MenuItem.Edit\" ),\n            NEW_EDIT_STATE, ImmutableMap.of( \"name\", name ) );\n\n          createPopupMenuItem( itemMenu, getString( PKG, \"HadoopClusterPopupMenuExtension.MenuItem.Duplicate\" ),\n            NEW_EDIT_STATE, ImmutableMap.of( \"name\", name, \"duplicateName\",\n              getString( PKG, \"HadoopClusterPopupMenuExtension.Duplicate.Prefix\" ) + name ) );\n        }\n        createPopupMenuItem( itemMenu, getString( PKG, \"HadoopClusterPopupMenuExtension.MenuItem.Test\" ),\n          TESTING_STATE, ImmutableMap.of( \"name\", name ) );\n\n        if ( showAdminFunctions() ) {\n          createDeleteMenuItem( itemMenu, getString( PKG, \"HadoopClusterPopupMenuExtension.MenuItem.Delete\" ), name );\n        }\n      } catch ( UnsupportedEncodingException e ) {\n        logChannel.error( e.getMessage() );\n      }\n    }\n    return itemMenu;\n  }\n\n  private boolean showAdminFunctions() {\n    Repository repo = spoonSupplier.get().getRepository();\n    if ( repo != null && repo.getUri().isPresent() ) {\n      return repo.getSecurityProvider().getUserInfo().isAdmin();\n    }\n    return true;\n  }\n\n  private void createPopupMenuItem( Menu menu, String menuItemLabel, String state ) {\n    createPopupMenuItem( menu, menuItemLabel, state, Collections.emptyMap() );\n  }\n\n  private void createPopupMenuItem( Menu menu, String menuItemLabel, String state, Map urlParams ) {\n    MenuItem menuItem = new MenuItem( menu, SWT.NONE );\n    menuItem.setText( menuItemLabel );\n    menuItem.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent selectionEvent ) {\n        hadoopClusterDelegate.openDialog( state, urlParams );\n      }\n    } );\n  }\n\n  private void createDeleteMenuItem( Menu menu, String menuItemLabel, String namedCluster ) {\n    MenuItem menuItem = new MenuItem( menu, SWT.NONE );\n    menuItem.setText( menuItemLabel );\n    
menuItem.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent selectionEvent ) {\n        try {\n          String nCluster = URLDecoder.decode( namedCluster, \"UTF-8\" );\n          String title = BaseMessages.getString( PKG, \"PopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.Title\" );\n          String message =\n            BaseMessages.getString( PKG, \"PopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.Message\",\n              nCluster );\n          String deleteButton =\n            BaseMessages.getString( PKG, \"PopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.Delete\" );\n          String doNotDeleteButton =\n            BaseMessages.getString( PKG, \"PopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.DoNotDelete\" );\n          MessageDialog dialog =\n            new MessageDialog( spoonSupplier.get().getShell(), title, null, message, MessageDialog.WARNING,\n              new String[] {\n                deleteButton, doNotDeleteButton }, 0 );\n          int response = dialog.open();\n          if ( response != RESULT_YES ) {\n            return;\n          }\n          hadoopClusterManager.deleteNamedCluster( spoonSupplier.get().getMetaStore(), nCluster, true );\n        } catch ( UnsupportedEncodingException e ) {\n          logChannel.error( e.getMessage() );\n        }\n      }\n    } );\n  }\n\n}"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/tree/ThinHadoopClusterEditExtension.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.tree;\n\nimport com.google.common.collect.ImmutableMap;\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.dialog.HadoopClusterDelegate;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.extension.ExtensionPoint;\nimport org.pentaho.di.core.extension.ExtensionPointInterface;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.ui.spoon.SelectionTreeExtension;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport java.io.UnsupportedEncodingException;\nimport java.net.URLEncoder;\nimport java.util.Collections;\n\n\n@ExtensionPoint( id = \"ThinHadoopClusterEditExtension\", description = \"Edits named cluster\",\n  extensionPointId = \"SpoonViewTreeExtension\" )\npublic class ThinHadoopClusterEditExtension implements ExtensionPointInterface {\n\n  HadoopClusterDelegate hadoopClusterDelegate;\n  private static final Logger logChannel = LogManager.getLogger( ThinHadoopClusterEditExtension.class );\n\n  public ThinHadoopClusterEditExtension() {\n    this.hadoopClusterDelegate = new HadoopClusterDelegate( BigDataServicesHelper.getNamedClusterService(), RuntimeTesterImpl.getInstance() );\n  }\n\n  public ThinHadoopClusterEditExtension( HadoopClusterDelegate hadoopClusterDelegate ) 
{\n    this.hadoopClusterDelegate = hadoopClusterDelegate;\n  }\n\n  public void callExtensionPoint( LogChannelInterface log, Object extension ) throws KettleException {\n    try {\n      SelectionTreeExtension selectionTreeExtension = (SelectionTreeExtension) extension;\n      Object selection = selectionTreeExtension.getSelection();\n      if ( selectionTreeExtension.getAction().equals( Spoon.EDIT_SELECTION_EXTENSION ) ) {\n        if ( selection instanceof NamedCluster ) {\n          NamedCluster namedCluster = (NamedCluster) selection;\n          String name = URLEncoder.encode( namedCluster.getName(), \"UTF-8\" );\n          hadoopClusterDelegate.openDialog( \"new-edit\", ImmutableMap.of( \"name\", name ) );\n        }\n      } else if ( selectionTreeExtension.getAction().equals( Spoon.CREATE_NEW_SELECTION_EXTENSION ) ) {\n        if ( selection.equals( NamedCluster.class ) ) {\n          hadoopClusterDelegate.openDialog( \"new-edit\", Collections.emptyMap() );\n        }\n      }\n\n    } catch ( UnsupportedEncodingException e ) {\n      logChannel.error( e.getMessage() );\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/tree/ThinHadoopClusterFolderProvider.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.tree;\n\nimport org.eclipse.swt.graphics.Image;\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.di.base.AbstractMeta;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.core.ConstUI;\nimport org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.di.ui.core.widget.tree.TreeNode;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.spoon.tree.TreeFolderProvider;\nimport org.pentaho.di.ui.util.SwtSvgImageUtil;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.function.Supplier;\nimport java.util.Optional;\n\npublic class ThinHadoopClusterFolderProvider extends TreeFolderProvider {\n\n  public static final String STRING_NEW_HADOOP_CLUSTER =\n    BaseMessages.getString( ThinHadoopClusterFolderProvider.class, \"HadoopClusterTree.Title\" );\n  private static Class<?> PKG = Spoon.class;\n  private Supplier<Spoon> spoonSupplier = Spoon::getInstance;\n  private NamedClusterService namedClusterService;\n\n  @Override\n  public void refresh( Optional<AbstractMeta> meta, TreeNode treeNode, String filter ) {\n\n    if ( getNamedClusterService() != null ) {\n      List<NamedCluster> namedClusters = null;\n      List<MetaStoreException> exceptionList = new ArrayList<>();\n      try {\n        namedClusters = 
getNamedClusterService().list( Spoon.getInstance().getMetaStore(), exceptionList );\n        for ( MetaStoreException e : exceptionList ) {\n          new ErrorDialog( Spoon.getInstance().getShell(), BaseMessages.getString( PKG, \"Spoon.ErrorDialog.Title\" ),\n            BaseMessages.getString( PKG, \"Spoon.ErrorDialog.ErrorFetchingFromRepo.NamedCluster\" ), e );\n        }\n      } catch ( MetaStoreException e ) {\n        new ErrorDialog( Spoon.getInstance().getShell(), BaseMessages.getString( PKG, \"Spoon.ErrorDialog.Title\" ),\n          BaseMessages.getString( PKG, \"Spoon.ErrorDialog.ErrorFetchingFromRepo.NamedCluster\" ), e );\n        return;\n      }\n\n      for ( NamedCluster namedCluster : namedClusters ) {\n        if ( !filterMatch( namedCluster.getName(), filter ) ) {\n          continue;\n        }\n        createTreeNode( treeNode, namedCluster.getName(), getHadoopClusterImage() );\n      }\n    }\n  }\n\n  @Override\n  public String getTitle() {\n    return STRING_NEW_HADOOP_CLUSTER;\n  }\n\n  @Override\n  public Class getType() {\n    return NamedCluster.class;\n  }\n\n  private Image getHadoopClusterImage() {\n    return SwtSvgImageUtil\n      .getImage( spoonSupplier.get().getShell().getDisplay(), getClass().getClassLoader(), \"images/hadoop_clusters.svg\",\n        ConstUI.ICON_SIZE, ConstUI.ICON_SIZE );\n  }\n\n  private NamedClusterService getNamedClusterService() {\n    if ( namedClusterService == null ) {\n      namedClusterService = BigDataServicesHelper.getNamedClusterService();\n    }\n    return namedClusterService;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/tree/ThinHadoopClusterTreeDelegateExtension.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.tree;\n\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.di.base.AbstractMeta;\nimport org.pentaho.di.core.extension.ExtensionPoint;\nimport org.pentaho.di.core.extension.ExtensionPointInterface;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.spoon.TreeSelection;\nimport org.pentaho.di.ui.spoon.delegates.SpoonTreeDelegateExtension;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\n\nimport java.util.List;\nimport java.util.function.Supplier;\n\n@ExtensionPoint( id = \"ThinHadoopClusterTreeDelegateExtension\", description = \"\",\n  extensionPointId = \"SpoonTreeDelegateExtension\" )\n\npublic class ThinHadoopClusterTreeDelegateExtension implements ExtensionPointInterface {\n  private Supplier<Spoon> spoonSupplier = Spoon::getInstance;\n  private NamedClusterService namedClusterService;\n\n  public void callExtensionPoint( LogChannelInterface log, Object extension ) {\n\n    SpoonTreeDelegateExtension treeDelExt = (SpoonTreeDelegateExtension) extension;\n\n    int caseNumber = treeDelExt.getCaseNumber();\n    AbstractMeta meta = treeDelExt.getTransMeta();\n    String[] path = treeDelExt.getPath();\n    List<TreeSelection> objects = treeDelExt.getObjects();\n\n    TreeSelection object = null;\n    switch ( caseNumber ) {\n      case 2:\n        if ( 
path[ 1 ].equals( ThinHadoopClusterFolderProvider.STRING_NEW_HADOOP_CLUSTER ) ) {\n          object = new TreeSelection( path[ 1 ], NamedCluster.class, meta );\n        }\n        break;\n      case 3:\n        if ( path[ 1 ].equals( ThinHadoopClusterFolderProvider.STRING_NEW_HADOOP_CLUSTER ) ) {\n          try {\n            String name = path[2];\n            NamedClusterService ncs = getNamedClusterService();\n            if ( ncs != null ) {\n              NamedCluster nc = ncs.read( name, spoonSupplier.get().getMetaStore() );\n              object = new TreeSelection( path[2], nc, meta );\n            }\n          } catch ( MetaStoreException e ) {\n            // Ignore\n          }\n        }\n        break;\n    }\n\n    if ( object != null ) {\n      objects.add( object );\n    }\n  }\n\n  private NamedClusterService getNamedClusterService() {\n    if ( namedClusterService == null ) {\n      namedClusterService = BigDataServicesHelper.getNamedClusterService();\n    }\n    return namedClusterService;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/resources/kettle-password-encoder-plugins.xml",
    "content": "<password-encoder-plugins>\n\n  <password-encoder-plugin id=\"Kettle\">\n    <description>Kettle Password Encoder</description>\n    <classname>org.pentaho.support.encryption.KettleTwoWayPasswordEncoder</classname>\n  </password-encoder-plugin>\n \n</password-encoder-plugins>"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/resources/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/messages/messages.properties",
    "content": "HadoopClusterTree.Title=Hadoop clusters"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/resources/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/messages/messages_en_US.properties",
    "content": "HadoopClusterTree.Title=Hadoop clusters"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/resources/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/wizard/pages/messages/messages.properties",
    "content": "HadoopClusterTree.Title=Hadoop clusters\nNamedClusterDialog.newCluster.title=New Cluster\nNamedClusterDialog.editCluster.title=Edit Cluster\nNamedClusterDialog.importCluster.title=Import Cluster\nNamedClusterDialog.newCluster=Hadoop Cluster\nNamedClusterDialog.clusterName=Cluster Name\nNamedClusterDialog.hdfs=HDFS\nNamedClusterDialog.hostname=Hostname\nNamedClusterDialog.port=Port\nNamedClusterDialog.username=Username\nNamedClusterDialog.password=Password\nNamedClusterDialog.jobTracker=Jobtracker\nNamedClusterDialog.activeDriver=Current Configured Driver:\nNamedClusterDialog.originalDriver=Original Configured Driver:\nNamedClusterDialog.noDriver=No driver configured\nNamedClusterDialog.mismatchedDriver=WARNING - Mismatched driver may have unpredictable results\nNamedClusterDialog.siteXmlFiles=Site XML files\nNamedClusterDialog.browseButton=Browse to add file(s)\nNamedClusterDialog.zooKeeper=ZooKeeper\nNamedClusterDialog.oozie=Oozie\nNamedClusterDialog.kafka=Kafka\nNamedClusterDialog.bootstrapServers=Boostrap servers\nNamedClusterDialog.file=File\nNamedClusterDialog.security=What is your security type?\nNamedClusterDialog.none=None\nNamedClusterDialog.kerberos=Kerberos\nNamedClusterDialog.knox=Knox\nNamedClusterDialog.securityMethod=Security method\nNamedClusterDialog.authenticationUsername=Authentication username\nNamedClusterDialog.impersonationUsername=Impersonation username\nNamedClusterDialog.authenticationKeytab=Authentication Keytab\nNamedClusterDialog.impersonationKeytab=Impersonation Keytab\nNamedClusterDialog.gatewayURL=Gateway URL\nNamedClusterDialog.gatewayUsername=Gateway Username\nNamedClusterDialog.gatewayPassword=Gateway Password\nNamedClusterDialog.browse=Browse\nNamedClusterDialog.remove=Remove\nNamedClusterDialog.removeSiteFile=Removes selected files\nNamedClusterDialog.siteFileAlert=Do you want to remove the selected site files?\nNamedClusterDialog.noFileSelected=No file selected\nNamedClusterDialog.question=What would you like to 
do?\nNamedClusterDialog.viewTestResults=View test results\nNamedClusterDialog.editCluster=Edit this cluster\nNamedClusterDialog.createNewCluster=Create a new cluster\nNamedClusterDialog.importNewCluster=Import a new cluster\nNamedClusterDialog.clusterNameExists=A Hadoop cluster with the provided name already exists. Please provide a different name.\nNamedClusterDialog.clusterOverwriteTitle=Overwrite Cluster Configuration?\nNamedClusterDialog.clusterOverwrite=A Hadoop cluster with the name ''{0}'' already exists.\\n\\nDo you want to overwrite the existing configuration?\nNamedClusterDialog.testResults=Test results\nNamedClusterDialog.fail=We couldn't connect.\nNamedClusterDialog.fail.description=Your Hadoop cluster has been created, and it is working. However, we were unable to connect to some services. Please check your Hadoop cluster configuration file(s), and view the test results for further details.\nNamedClusterDialog.pass=Congratulations!\nNamedClusterDialog.description.pass=Your Hadoop cluster has been created, and all services appear to be up and running.\nNamedClusterDialog.import.fail=Import Failed.\nNamedClusterDialog.import.fail.description=Your Hadoop cluster was not created. Please check your Hadoop cluster configuration file(s).\nNamedClusterDialog.test.fail=Fail\nNamedClusterDialog.test.warning=Warning\nNamedClusterDialog.test.pass=Pass\nNamedClusterDialog.test.importFailed=ImportFailed\nNamedClusterDialog.test.hadoopFileSystemConnection=Hadoop File System Connection\nNamedClusterDialog.clear=Clear\nNamedClusterDialog.repositoryNotification=This cluster will be stored in the repository; changes may impact other users\nNamedClusterDialog.help=https://docs.pentaho.com/pdia/11.0-data-integration/extracting-data-into-pdi/connecting-to-a-hadoop-cluster-with-the-pdi-client-article"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/resources/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/dialog/wizard/pages/messages/messages_en_US.properties",
    "content": "HadoopClusterTree.Title=Hadoop clusters\nNamedClusterDialog.newCluster.title=New Cluster\nNamedClusterDialog.editCluster.title=Edit Cluster\nNamedClusterDialog.importCluster.title=Import Cluster\nNamedClusterDialog.newCluster=Hadoop Cluster\nNamedClusterDialog.clusterName=Cluster Name\nNamedClusterDialog.hdfs=HDFS\nNamedClusterDialog.hostname=Hostname\nNamedClusterDialog.port=Port\nNamedClusterDialog.username=Username\nNamedClusterDialog.password=Password\nNamedClusterDialog.jobTracker=Jobtracker\nNamedClusterDialog.activeDriver=Current Configured Driver:\nNamedClusterDialog.originalDriver=Original Configured Driver:\nNamedClusterDialog.noDriver=No driver configured\nNamedClusterDialog.mismatchedDriver=WARNING - Mismatched driver may have unpredictable results\nNamedClusterDialog.siteXmlFiles=Site XML files\nNamedClusterDialog.browseButton=Browse to add file(s)\nNamedClusterDialog.zooKeeper=ZooKeeper\nNamedClusterDialog.oozie=Oozie\nNamedClusterDialog.kafka=Kafka\nNamedClusterDialog.bootstrapServers=Boostrap servers\nNamedClusterDialog.file=File\nNamedClusterDialog.security=What is your security type?\nNamedClusterDialog.none=None\nNamedClusterDialog.kerberos=Kerberos\nNamedClusterDialog.knox=Knox\nNamedClusterDialog.securityMethod=Security method\nNamedClusterDialog.authenticationUsername=Authentication username\nNamedClusterDialog.impersonationUsername=Impersonation username\nNamedClusterDialog.authenticationKeytab=Authentication Keytab\nNamedClusterDialog.impersonationKeytab=Impersonation Keytab\nNamedClusterDialog.gatewayURL=Gateway URL\nNamedClusterDialog.gatewayUsername=Gateway Username\nNamedClusterDialog.gatewayPassword=Gateway Password\nNamedClusterDialog.browse=Browse\nNamedClusterDialog.remove=Remove\nNamedClusterDialog.removeSiteFile=Removes selected files\nNamedClusterDialog.siteFileAlert=Do you want to remove the selected site files?\nNamedClusterDialog.noFileSelected=No file selected\nNamedClusterDialog.question=What would you like to 
do?\nNamedClusterDialog.viewTestResults=View test results\nNamedClusterDialog.editCluster=Edit this cluster\nNamedClusterDialog.createNewCluster=Create a new cluster\nNamedClusterDialog.importNewCluster=Import a new cluster\nNamedClusterDialog.clusterNameExists=A Hadoop cluster with the provided name already exists. Please provide a different name.\nNamedClusterDialog.clusterOverwriteTitle=Overwrite Cluster Configuration?\nNamedClusterDialog.clusterOverwrite=A Hadoop cluster with the name ''{0}'' already exists.\\n\\nDo you want to overwrite the existing configuration?\nNamedClusterDialog.testResults=Test results\nNamedClusterDialog.fail=We couldn't connect.\nNamedClusterDialog.fail.description=Your Hadoop cluster has been created, and it is working. However, we were unable to connect to some services. Please check your Hadoop cluster configuration file(s), and view the test results for further details.\nNamedClusterDialog.pass=Congratulations!\nNamedClusterDialog.description.pass=Your Hadoop cluster has been created, and all services appear to be up and running.\nNamedClusterDialog.import.fail=Import Failed.\nNamedClusterDialog.import.fail.description=Your Hadoop cluster was not created. Please check your Hadoop cluster configuration file(s).\nNamedClusterDialog.test.fail=Fail\nNamedClusterDialog.test.warning=Warning\nNamedClusterDialog.test.pass=Pass\nNamedClusterDialog.test.importFailed=ImportFailed\nNamedClusterDialog.test.hadoopFileSystemConnection=Hadoop File System Connection\nNamedClusterDialog.clear=Clear\nNamedClusterDialog.repositoryNotification=This cluster will be stored in the repository; changes may impact other users\nNamedClusterDialog.help=https://docs.pentaho.com/pdia/11.0-data-integration/extracting-data-into-pdi/connecting-to-a-hadoop-cluster-with-the-pdi-client-article"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/resources/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/messages/messages_en_US.properties",
    "content": "HadoopClusterTree.Title=Hadoop clusters\nHadoopClusterPopupMenuExtension.MenuItem.New=New cluster\nHadoopClusterPopupMenuExtension.MenuItem.Import=Import cluster\nHadoopClusterPopupMenuExtension.MenuItem.Edit=Edit cluster\nHadoopClusterPopupMenuExtension.MenuItem.Delete=Delete cluster\n\nPopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.Message=Are you sure you want to delete your Hadoop Cluster {0}? This cannot be undone!\nPopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.Title=Delete Hadoop Cluster\nPopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.Delete=Yes, Delete\nPopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.DoNotDelete=No"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/main/resources/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/tree/messages/messages_en_US.properties",
    "content": "HadoopClusterTree.Title=Hadoop clusters\nHadoopClusterPopupMenuExtension.MenuItem.New=New cluster\nHadoopClusterPopupMenuExtension.MenuItem.Import=Import cluster\nHadoopClusterPopupMenuExtension.MenuItem.Edit=Edit cluster\nHadoopClusterPopupMenuExtension.MenuItem.Delete=Delete cluster\nHadoopClusterPopupMenuExtension.MenuItem.Duplicate=Duplicate cluster\nHadoopClusterPopupMenuExtension.MenuItem.Test=Test cluster\nHadoopClusterPopupMenuExtension.Duplicate.Prefix=(copy of)\\ \n\nPopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.Message=Are you sure you want to delete your Hadoop Cluster {0}? This cannot be undone!\nPopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.Title=Delete Hadoop Cluster\nPopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.Delete=Yes, Delete\nPopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.DoNotDelete=No"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/java/org/pentaho/big/data/kettle/plugins/hadoopcluster/ui/endpoints/HadoopClusterManagerTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints;\n\nimport com.google.common.collect.ImmutableList;\nimport org.apache.commons.configuration.ConfigurationException;\nimport org.apache.commons.configuration.PropertiesConfiguration;\nimport org.apache.commons.fileupload2.core.FileItemInput;\nimport org.apache.commons.io.FileUtils;\nimport org.json.simple.JSONObject;\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.ArgumentCaptor;\nimport org.mockito.Captor;\nimport org.mockito.Mock;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.model.ThinNameClusterModel;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.encryption.TwoWayPasswordEncoderPluginType;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.logging.LogChannelInterfaceFactory;\nimport org.pentaho.di.core.osgi.api.NamedClusterSiteFile;\nimport org.pentaho.di.core.osgi.impl.NamedClusterSiteFileImpl;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.core.ShimIdentifierInterface;\nimport org.pentaho.metastore.stores.delegate.DelegatingMetaStore;\nimport 
org.pentaho.runtime.test.RuntimeTestStatus;\n\nimport java.io.ByteArrayInputStream;\nimport java.io.File;\nimport java.io.FileInputStream;\nimport java.io.IOException;\nimport java.util.AbstractMap.SimpleImmutableEntry;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNotEquals;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.fail;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.Mockito.lenient;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.never;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints.HadoopClusterManager.PLACEHOLDER_VALUE;\nimport static org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.model.ThinNameClusterModel.NAME_KEY;\n\n@RunWith( MockitoJUnitRunner.class )\npublic class HadoopClusterManagerTest {\n  private static final String CORE_SITE = \"core-site.xml\";\n  private static final String HIVE_SITE = \"hive-site.xml\";\n  private static final String OOZIE_SITE = \"oozie-site.xml\";\n  private static final String YARN_SITE = \"yarn-site.xml\";\n  public static final String MAPR_SHIM_VENDOR = \"Map-R\";\n  public static final String MAPRFS_SCHEME = \"maprfs\";\n\n  @Mock private Spoon spoon;\n  @Mock private LogChannelInterfaceFactory logChannelFactory;\n  @Mock private LogChannelInterface logChannel;\n  @Mock private NamedClusterService namedClusterService;\n  @Mock private DelegatingMetaStore metaStore;\n  @Mock private NamedCluster namedCluster;\n  @Mock private NamedCluster knoxNamedCluster;\n  @Mock( 
lenient = true ) private ShimIdentifierInterface cdhShim;\n  @Mock( lenient = true ) private ShimIdentifierInterface internalShim;\n  @Mock( lenient = true ) private ShimIdentifierInterface maprShim;\n  @Captor ArgumentCaptor<NamedClusterSiteFile> siteFileCaptor;\n  private String ncTestName = \"ncTest\";\n  private String knoxNC = \"knoxNC\";\n  private HadoopClusterManager hadoopClusterManager;\n\n  @Before public void setup() throws Exception {\n    KettleLogStore.setLogChannelInterfaceFactory( logChannelFactory );\n    when( logChannelFactory.create( any() ) ).thenReturn( logChannel );\n\n    PluginRegistry.addPluginType( TwoWayPasswordEncoderPluginType.getInstance() );\n    PluginRegistry.init( false );\n    Encr.init( \"Kettle\" );\n\n    if ( getShimTestDir().exists() ) {\n      FileUtils.deleteDirectory( getShimTestDir() );\n    }\n    when( cdhShim.getId() ).thenReturn( \"cdh514\" );\n    when( cdhShim.getVendor() ).thenReturn( \"Cloudera\" );\n    when( internalShim.getId() ).thenReturn( \"apache\" );\n    when( internalShim.getVendor() ).thenReturn( \"Apache\" );\n    when( maprShim.getId() ).thenReturn( \"mapr\" );\n    when( maprShim.getVendor() ).thenReturn( MAPR_SHIM_VENDOR );\n    when( namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n    when( namedCluster.getName() ).thenReturn( ncTestName );\n    when( namedClusterService.getNamedClusterByName( ncTestName, metaStore ) ).thenReturn( namedCluster );\n    when( namedCluster.getShimIdentifier() ).thenReturn( \"cdh514\" );\n    when( namedClusterService.getNamedClusterByName( knoxNC, metaStore ) ).thenReturn( knoxNamedCluster );\n    when( knoxNamedCluster.isUseGateway() ).thenReturn( true );\n    when( knoxNamedCluster.getGatewayPassword() ).thenReturn( \"password\" );\n    when( knoxNamedCluster.getGatewayUrl() ).thenReturn( \"http://localhost:8008\" );\n    when( knoxNamedCluster.getGatewayUsername() ).thenReturn( \"username\" );\n    hadoopClusterManager = new 
HadoopClusterManager( spoon, namedClusterService, metaStore, \"apache\" );\n    when( namedClusterService.list( metaStore ) ).thenReturn( ImmutableList.of( namedCluster ) );\n  }\n\n  @Test public void testSecuredImportNamedCluster() throws Exception {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setName( ncTestName );\n\n    Map<String, CachedFileItemStream> cachedFileItemStreamMap = getFiles( \"src/test/resources/secured\" );\n    File keytabFileDirectory = new File( \"src/test/resources/keytab\" );\n    Map<String, CachedFileItemStream> keytabFileItems = getFiles( keytabFileDirectory.getPath(), \"keytabAuthFile\" );\n    cachedFileItemStreamMap.putAll( keytabFileItems );\n\n    JSONObject result = hadoopClusterManager.importNamedCluster( model, cachedFileItemStreamMap );\n    assertEquals( ncTestName, result.get( \"namedCluster\" ) );\n    verify( namedCluster, times(3) ).addSiteFile( siteFileCaptor.capture() );\n    assertSiteFields( cachedFileItemStreamMap, siteFileCaptor.getAllValues(), new String[] { \"keytabAuthFile\" } );\n    assertTrue( new File( getShimTestDir(), \"test.keytab\" ).exists() );\n  }\n\n  @Test public void testUnsecuredImportNamedCluster() {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setName( ncTestName );\n    Map<String, CachedFileItemStream> cachedFileItemStreamMap = getFiles( \"src/test/resources/unsecured\" );\n    JSONObject result = hadoopClusterManager.importNamedCluster( model, cachedFileItemStreamMap );\n    assertEquals( ncTestName, result.get( \"namedCluster\" ) );\n    verify( namedCluster, times(3) ).addSiteFile( siteFileCaptor.capture() );\n    assertSiteFields( cachedFileItemStreamMap, siteFileCaptor.getAllValues(), new String[] { \"oozie-default.xml\"} );\n  }\n\n  private void assertSiteFields( Map<String, CachedFileItemStream> streamMap, List<NamedClusterSiteFile> siteFiles,\n                                 String[] missingFiles ) {\n    assertEquals( 
streamMap.size() - missingFiles.length, siteFiles.size() );\n    for ( int i = 0; i < siteFiles.size(); i++ ) {\n      NamedClusterSiteFile namedClusterSiteFile = siteFiles.get( i );\n      for ( String missingFile : missingFiles ) {\n        assertNotEquals( namedClusterSiteFile.getSiteFileName(), missingFile );\n      }\n      assertTrue( streamMap.containsKey( namedClusterSiteFile.getSiteFileName() ) );\n      CachedFileItemStream cachedFileItemStream = streamMap.get( namedClusterSiteFile.getSiteFileName() );\n      assertEquals( cachedFileItemStream.getLastModified(), namedClusterSiteFile.getSourceFileModificationTime() );\n    }\n  }\n\n  @Test public void testMissingInfoImportNamedCluster() {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setName( ncTestName );\n    Map<String, CachedFileItemStream> cachedFileItemStreamMap = getFiles( \"src/test/resources/missing-info\" );\n    JSONObject result =\n      hadoopClusterManager.importNamedCluster( model, cachedFileItemStreamMap );\n    assertEquals( ncTestName, result.get( \"namedCluster\" ) );\n    verify( namedCluster, times( 3 ) ).addSiteFile( siteFileCaptor.capture() );\n    assertSiteFields( cachedFileItemStreamMap, siteFileCaptor.getAllValues(), new String[] { \"oozie-default.xml\" } );\n    ThinNameClusterModel thinNameClusterModel = hadoopClusterManager.getNamedCluster( ncTestName );\n    assertTrue( StringUtil.isEmpty( thinNameClusterModel.getHdfsHost() ) );\n    assertTrue( StringUtil.isEmpty( thinNameClusterModel.getHdfsPort() ) );\n    assertTrue( StringUtil.isEmpty( thinNameClusterModel.getJobTrackerPort() ) );\n  }\n\n  @Test public void testSiteXMLParsingImportNamedCluster() {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setName( ncTestName );\n    JSONObject result =\n            hadoopClusterManager.importNamedCluster( model, getFiles( \"src/test/resources/unsecured\" ) );\n    assertEquals( ncTestName, result.get( \"namedCluster\" ) );\n    
verify( namedCluster ).setJobTrackerHost( \"svqxbdcn6cdh514un3.pentahoqa.com\" );\n    verify( namedCluster ).setJobTrackerPort( \"8032\" );\n    verify( namedCluster ).setZooKeeperHost( \"svqxbdcn6cdh514un1.pentahoqa.com,svqxbdcn6cdh514un5.pentahoqa.com,\" +\n            \"svqxbdcn6cdh514un4.pentahoqa.com,svqxbdcn6cdh514un2.pentahoqa.com,svqxbdcn6cdh514un3.pentahoqa.com\" );\n  }\n\n  @Test public void testSiteXMLParsingImportDataprocNamedCluster() {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setName( ncTestName );\n    JSONObject result =\n            hadoopClusterManager.importNamedCluster( model, getFiles( \"src/test/resources/dataproc\" ) );\n    assertEquals( ncTestName, result.get( \"namedCluster\" ) );\n    verify( namedCluster ).setJobTrackerHost( \"cluster-ec0a-m-0\" );\n    verify( namedCluster ).setJobTrackerPort( \"\" );\n    verify( namedCluster ).setZooKeeperHost( \"cluster-ec0a-m-0,cluster-ec0a-m-1,cluster-ec0a-m-2\" );\n  }\n\n  @Test public void testCreateNamedCluster() {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setName( ncTestName );\n    JSONObject result = hadoopClusterManager.createNamedCluster( model, getFiles( \"/\" ) );\n    assertEquals( ncTestName, result.get( \"namedCluster\" ) );\n    verify( namedCluster, never() ).setStorageScheme( any( String.class ) );\n  }\n\n  @Test public void testOverwriteNamedClusterCaseInsensitive() {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setName( \"NCTESTName\" );\n    hadoopClusterManager.createNamedCluster( model, getFiles( \"/\" ) );\n\n    model = new ThinNameClusterModel();\n    model.setName( ncTestName );\n    JSONObject result = hadoopClusterManager.createNamedCluster( model, getFiles( \"/\" ) );\n    assertEquals( ncTestName, result.get( \"namedCluster\" ) );\n\n    String\n      shimTestDir =\n      System.getProperty( \"user.home\" ) + File.separator + \".pentaho\" + File.separator + \"metastore\" + 
File.separator\n        + \"pentaho\" + File.separator + \"NamedCluster\" + File.separator + \"Configs\" + File.separator + ncTestName;\n    assertTrue( new File( shimTestDir ).exists() );\n  }\n\n  @Test public void testEditNamedCluster() throws Exception {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setKerberosSubType( \"\" );\n    model.setName( ncTestName );\n    model.setOldName( ncTestName );\n    JSONObject result = hadoopClusterManager.editNamedCluster( model, true, getFiles( \"/\" ) );\n    verify( namedClusterService ).update( any(), any() );\n    assertEquals( ncTestName, result.get( \"namedCluster\" ) );  //Not really testing anything here since result is mocked\n  }\n\n  @Test public void testEditNamedClusterNameChange() throws Exception{\n    String newNcName = \"newNcName\";\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setKerberosSubType( \"\" );\n    model.setName( newNcName );\n    model.setOldName( ncTestName );\n    JSONObject result = hadoopClusterManager.editNamedCluster( model, true, getFiles( \"/\" ) );\n    verify( namedCluster ).setName( newNcName );\n    verify( namedClusterService ).create( any(), any() );\n  }\n\n  @Test public void testEditRemoveSiteFileNotInModel() {\n    //Create thin model with reference to only 1 site file (mimics 3 being removed in the UI)\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setKerberosSubType( \"\" );\n    model.setName( ncTestName );\n    model.setOldName( ncTestName );\n    List<SimpleImmutableEntry<String, String>> siteFiles = new ArrayList<>();\n    SimpleImmutableEntry<String, String> onlySiteFile = new SimpleImmutableEntry<>( NAME_KEY, CORE_SITE );\n    siteFiles.add( onlySiteFile );\n    model.setSiteFiles( siteFiles );\n\n    //Initialize named cluster mock with 4 site files\n    List<NamedClusterSiteFile> namedClusterSiteFiles = new ArrayList<>();\n    NamedClusterSiteFileImpl sf1 = new 
NamedClusterSiteFileImpl();\n    sf1.setSiteFileName( CORE_SITE );\n    namedClusterSiteFiles.add( sf1 );\n    NamedClusterSiteFileImpl sf2 = new NamedClusterSiteFileImpl();\n    sf2.setSiteFileName( HIVE_SITE );\n    namedClusterSiteFiles.add( sf2 );\n    NamedClusterSiteFileImpl sf3 = new NamedClusterSiteFileImpl();\n    sf3.setSiteFileName( OOZIE_SITE );\n    namedClusterSiteFiles.add( sf3 );\n    NamedClusterSiteFileImpl sf4 = new NamedClusterSiteFileImpl();\n    sf4.setSiteFileName( YARN_SITE );\n    namedClusterSiteFiles.add( sf4 );\n\n    //Implement getSiteFiles() for the namedCluster mock\n    when( namedCluster.getSiteFiles() ).thenReturn( namedClusterSiteFiles );\n\n    //Get the files, but remove all but 1 to match the thin model\n    final Map<String, CachedFileItemStream> filesFromThinClient = getFiles( \"src/test/resources/unsecured\" );\n    filesFromThinClient.remove( HIVE_SITE );\n    filesFromThinClient.remove( OOZIE_SITE );\n    filesFromThinClient.remove( YARN_SITE );\n\n    //Call the edit method\n    JSONObject result =\n      hadoopClusterManager.editNamedCluster( model, true, filesFromThinClient );\n\n    //Capture setSiteFiles() for the namedCluster mock\n    ArgumentCaptor<List<NamedClusterSiteFile>> siteFileCaptor = ArgumentCaptor.forClass( (Class) List.class );\n    verify( namedCluster ).setSiteFiles( siteFileCaptor.capture() );\n\n    //Assert that setSiteFiles() siteFileCaptor argument was set to have only 1 site file, matching the model\n    assertEquals( 1, siteFileCaptor.getValue().size() );\n\n    //Assert that the edit method ran without error and returned the name of the cluster\n    assertEquals( ncTestName, result.get( \"namedCluster\" ) );\n  }\n\n  @Test public void testKeepFilesNotChangedInThinClient() throws IOException {\n    //Create thin model\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setKerberosSubType( \"\" );\n    model.setName( ncTestName );\n    model.setOldName( ncTestName );\n\n    
//Thin model references 4 site files\n    List<SimpleImmutableEntry<String, String>> siteFiles = new ArrayList<>();\n    SimpleImmutableEntry<String, String> thinSiteFile1 = new SimpleImmutableEntry<>( \"name\", CORE_SITE );\n    siteFiles.add( thinSiteFile1 );\n    SimpleImmutableEntry<String, String> thinSiteFile2 = new SimpleImmutableEntry<>( \"name\", HIVE_SITE );\n    siteFiles.add( thinSiteFile2 );\n    SimpleImmutableEntry<String, String> thinSiteFile3 = new SimpleImmutableEntry<>( \"name\", OOZIE_SITE );\n    siteFiles.add( thinSiteFile3 );\n    SimpleImmutableEntry<String, String> thinSiteFile4 = new SimpleImmutableEntry<>( \"name\", YARN_SITE );\n    siteFiles.add( thinSiteFile4 );\n\n    model.setSiteFiles( siteFiles );\n\n    //Initialize named cluster mock with 4 site files\n    List<NamedClusterSiteFile> namedClusterSiteFiles = new ArrayList<>();\n    NamedClusterSiteFileImpl sf1 = new NamedClusterSiteFileImpl();\n    sf1.setSiteFileName( CORE_SITE );\n    String originalContent = \"originalContent\";\n    sf1.setSiteFileContents( originalContent );\n    namedClusterSiteFiles.add( sf1 );\n\n    NamedClusterSiteFileImpl sf2 = new NamedClusterSiteFileImpl();\n    sf2.setSiteFileName( HIVE_SITE );\n    sf2.setSiteFileContents( originalContent );\n    namedClusterSiteFiles.add( sf2 );\n\n    NamedClusterSiteFileImpl sf3 = new NamedClusterSiteFileImpl();\n    sf3.setSiteFileName( OOZIE_SITE );\n    String originalOozieFileContent = \"orig ofc\";\n    sf3.setSiteFileContents( originalOozieFileContent );\n    namedClusterSiteFiles.add( sf3 );\n\n    NamedClusterSiteFileImpl sf4 = new NamedClusterSiteFileImpl();\n    sf4.setSiteFileName( YARN_SITE );\n    String originalYarnFileContent = \"orig yfc\";\n    sf4.setSiteFileContents( originalYarnFileContent );\n    namedClusterSiteFiles.add( sf4 );\n\n    //Implement getSiteFiles() for the namedCluster mock\n    when( namedCluster.getSiteFiles() ).thenReturn( namedClusterSiteFiles );\n\n    //Modify the content 
of the core and hive site files but leave the other two the same by using the placeholder value\n    //The placeholder value represents a file in the thin client that was not changed.\n    final Map<String, CachedFileItemStream> filesFromThinClient = new HashMap<>();\n\n    String modifiedCoreFileContent = \"new cfc\";\n    CachedFileItemStream modifiedCoreSiteFile =\n      new CachedFileItemStream( new ByteArrayInputStream( modifiedCoreFileContent.getBytes() ), CORE_SITE,\n        CORE_SITE );\n    filesFromThinClient.put( CORE_SITE, modifiedCoreSiteFile );\n\n\n    String modifiedHiveFileContent = \"new hfc\";\n    CachedFileItemStream modifiedHiveSiteFile =\n      new CachedFileItemStream( new ByteArrayInputStream( modifiedHiveFileContent.getBytes() ), HIVE_SITE,\n        HIVE_SITE );\n    filesFromThinClient.put( HIVE_SITE, modifiedHiveSiteFile );\n\n    CachedFileItemStream unmodifiedOozieSiteFile =\n      new CachedFileItemStream( new ByteArrayInputStream( PLACEHOLDER_VALUE.getBytes() ), OOZIE_SITE,\n        OOZIE_SITE );\n    filesFromThinClient.put( OOZIE_SITE, unmodifiedOozieSiteFile );\n\n\n    CachedFileItemStream unmodifiedYarnSiteFile =\n      new CachedFileItemStream( new ByteArrayInputStream( PLACEHOLDER_VALUE.getBytes() ), YARN_SITE,\n        YARN_SITE );\n    filesFromThinClient.put( YARN_SITE, unmodifiedYarnSiteFile );\n\n    //Call the edit method\n    JSONObject result =\n      hadoopClusterManager.editNamedCluster( model, true, filesFromThinClient );\n\n    //Assert that the edit method ran without error and returned the name of the cluster\n    assertEquals( ncTestName, result.get( \"namedCluster\" ) );\n\n    //Assert that Core and hive site files were modified, but that oozie and yarn maintained original content\n    for ( NamedClusterSiteFile ncsf : namedClusterSiteFiles ) {\n      if ( ncsf.getSiteFileName().equals( CORE_SITE ) ) {\n        assertEquals( modifiedCoreFileContent, ncsf.getSiteFileContents() );\n      } else if ( 
ncsf.getSiteFileName().equals( HIVE_SITE ) ) {\n        assertEquals( modifiedHiveFileContent, ncsf.getSiteFileContents() );\n      } else if ( ncsf.getSiteFileName().equals( OOZIE_SITE ) ) {\n        assertEquals( originalOozieFileContent, ncsf.getSiteFileContents() );\n      } else if ( ncsf.getSiteFileName().equals( YARN_SITE ) ) {\n        assertEquals( originalYarnFileContent, ncsf.getSiteFileContents() );\n      } else {\n        fail();\n      }\n    }\n  }\n\n\n  @Test public void testFailNamedCluster() {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setName( ncTestName );\n    JSONObject result = hadoopClusterManager.importNamedCluster( model, getFiles( \"src/test/resources/bad\" ) );\n    assertEquals( \"\", result.get( \"namedCluster\" ) );\n  }\n\n  @Test public void testInstallDriver() throws IOException {\n    System.getProperties()\n      .setProperty( \"SHIM_DRIVER_DEPLOYMENT_LOCATION\", \"src/test/resources/driver-destination\" );\n\n    File driverFile = new File( \"src/test/resources/driver-source/driver.kar\" );\n\n    FileItemInput fileItemStream = mock( FileItemInput.class );\n    when( fileItemStream.getFieldName() ).thenReturn( driverFile.getName() );\n    when( fileItemStream.getInputStream() ).thenReturn( new FileInputStream( driverFile ) );\n\n    JSONObject response = hadoopClusterManager.installDriver( fileItemStream );\n    boolean isSuccess = (boolean) response.get( \"installed\" );\n    if ( isSuccess ) {\n      File driver = new File( \"src/test/resources/driver-destination/driver.kar\" );\n      assertTrue( driver.exists() );\n    }\n  }\n\n  @Test public void testRunTests() {\n    RuntimeTestStatus runtimeTestStatus = mock( RuntimeTestStatus.class );\n    when( namedClusterService.getNamedClusterByName( ncTestName, this.metaStore ) ).thenReturn( namedCluster );\n    when( runtimeTestStatus.isDone() ).thenReturn( true );\n\n    hadoopClusterManager.onProgress( runtimeTestStatus );\n    Object[] categories 
= (Object[]) hadoopClusterManager.runTests( null, ncTestName );\n\n    for ( Object category : categories ) {\n      TestCategory testCategory = (TestCategory) category;\n      String categoryName = testCategory.getCategoryName();\n      boolean isCategoryNameValid = false;\n      if ( categoryName.equals( \"Hadoop file system\" ) || categoryName.equals( \"Oozie host connection\" ) || categoryName\n        .equals( \"Kafka connection\" ) || categoryName.equals( \"Zookeeper connection\" ) || categoryName\n        .equals( \"Job tracker / resource manager\" ) ) {\n        isCategoryNameValid = true;\n      }\n      assertTrue( isCategoryNameValid );\n      assertFalse( testCategory.isCategoryActive() );\n      List<org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints.Test> tests = testCategory.getTests();\n      for ( org.pentaho.big.data.kettle.plugins.hadoopcluster.ui.endpoints.Test test : tests ) {\n        assertEquals( \"Warning\", test.getTestStatus() );\n      }\n    }\n  }\n\n  @Test public void testNamedClusterKnoxSecurity() {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setName( knoxNC );\n    model.setSecurityType( \"Knox\" );\n    model.setGatewayUsername( \"username\" );\n    model.setGatewayUrl( \"http://localhost:8008\" );\n    model.setGatewayPassword( \"password\" );\n    hadoopClusterManager.createNamedCluster( model, getFiles( \"/\" ) );\n\n    NamedCluster namedCluster = namedClusterService.getNamedClusterByName( knoxNC, metaStore );\n    assertEquals( true, namedCluster.isUseGateway() );\n    assertEquals( \"password\", namedCluster.getGatewayPassword() );\n    assertEquals( \"http://localhost:8008\", namedCluster.getGatewayUrl() );\n    assertEquals( \"username\", namedCluster.getGatewayUsername() );\n  }\n\n  @Test public void testNamedClusterKerberosPasswordSecurity() throws ConfigurationException {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setName( ncTestName );\n    
model.setSecurityType( \"Kerberos\" );\n    model.setKerberosSubType( \"Password\" );\n    model.setKerberosAuthenticationUsername( \"username\" );\n    model.setKerberosAuthenticationPassword( \"password\" );\n    model.setKerberosImpersonationUsername( \"impersonationusername\" );\n    model.setKerberosImpersonationPassword( \"impersonationpassword\" );\n\n    hadoopClusterManager.createNamedCluster( model, getFiles( \"/\" ) );\n\n    String configFile = System.getProperty( \"user.home\" ) + File.separator + \".pentaho\" + File.separator + \"metastore\"\n      + File.separator + \"pentaho\" + File.separator + \"NamedCluster\" + File.separator + \"Configs\" + File.separator\n      + \"ncTest\" + File.separator + \"config.properties\";\n\n    PropertiesConfiguration config = new PropertiesConfiguration( new File( configFile ) );\n    assertEquals( \"username\", config.getProperty( \"pentaho.authentication.default.kerberos.principal\" ) );\n    assertEquals( Encr.encryptPasswordIfNotUsingVariables( \"password\" ),\n      config.getProperty( \"pentaho.authentication.default.kerberos.password\" ) );\n    assertEquals( \"impersonationusername\",\n      config.getProperty( \"pentaho.authentication.default.mapping.server.credentials.kerberos.principal\" ) );\n    assertEquals( Encr.encryptPasswordIfNotUsingVariables( \"impersonationpassword\" ),\n      config.getProperty( \"pentaho.authentication.default.mapping.server.credentials.kerberos.password\" ) );\n    assertEquals( \"simple\", config.getProperty( \"pentaho.authentication.default.mapping.impersonation.type\" ) );\n\n    ThinNameClusterModel retrievingModel = hadoopClusterManager.getNamedCluster( ncTestName );\n    assertEquals( \"Kerberos\", retrievingModel.getSecurityType() );\n    assertEquals( \"Password\", retrievingModel.getKerberosSubType() );\n    assertEquals( \"username\", retrievingModel.getKerberosAuthenticationUsername() );\n    assertEquals( \"Encrypted 2be98afc86aa7f2e4bb18bd63c99dbdde\", 
retrievingModel.getKerberosAuthenticationPassword() );\n    assertEquals( \"impersonationusername\", retrievingModel.getKerberosImpersonationUsername() );\n    assertEquals( \"Encrypted 696d706570cdf7c1a91ece9d8abb18bd63c99dbdde\", retrievingModel.getKerberosImpersonationPassword() );\n  }\n\n  @Test public void testNamedClusterKerberosKeytabSecurity() throws ConfigurationException {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setName( ncTestName );\n    model.setSecurityType( \"Kerberos\" );\n    model.setKerberosSubType( \"Keytab\" );\n\n    File keytabFileDirectory = new File( \"src/test/resources/keytab\" );\n    Map<String, CachedFileItemStream> keytabFileItems = getFiles( keytabFileDirectory.getPath(), \"keytabAuthFile\" );\n\n    hadoopClusterManager.createNamedCluster( model, keytabFileItems, \"src/test/resources/keytab/test.keytab\", \"\" );\n\n    String configFile = System.getProperty( \"user.home\" ) + File.separator + \".pentaho\" + File.separator + \"metastore\"\n      + File.separator + \"pentaho\" + File.separator + \"NamedCluster\" + File.separator + \"Configs\" + File.separator\n      + \"ncTest\" + File.separator + \"config.properties\";\n\n    PropertiesConfiguration config = new PropertiesConfiguration( new File( configFile ) );\n    assertEquals( System.getProperty( \"user.home\" ) + File.separator + \".pentaho\" + File.separator + \"metastore\"\n        + File.separator + \"pentaho\" + File.separator + \"NamedCluster\" + File.separator + \"Configs\" + File.separator\n        + \"ncTest\" + File.separator + \"test.keytab\",\n      config.getProperty( \"pentaho.authentication.default.kerberos.keytabLocation\" ) );\n    assertEquals( \"\",\n      config.getProperty( \"pentaho.authentication.default.mapping.server.credentials.kerberos.keytabLocation\" ) );\n    assertEquals( \"simple\", config.getProperty( \"pentaho.authentication.default.mapping.impersonation.type\" ) );\n\n    ThinNameClusterModel retrievingModel = 
hadoopClusterManager.getNamedCluster( ncTestName );\n    assertEquals( \"Kerberos\", retrievingModel.getSecurityType() );\n    assertEquals( \"Keytab\", retrievingModel.getKerberosSubType() );\n  }\n\n  @Test public void testGetNamedCluster() throws ConfigurationException {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setName( ncTestName );\n    JSONObject result = hadoopClusterManager.createNamedCluster( model, getFiles( \"/\" ) );\n    assertEquals( ncTestName, result.get( \"namedCluster\" ) );\n    ThinNameClusterModel nc = hadoopClusterManager.getNamedCluster( \"NCTEST\" );\n    assertEquals( \"ncTest\", nc.getName() );\n  }\n\n  @Test\n  public void testValidSiteFile() {\n    assertFalse( hadoopClusterManager.isValidConfigurationFile( \"file\" ) );\n    assertTrue( hadoopClusterManager.isValidConfigurationFile( CORE_SITE ) );\n    assertTrue( hadoopClusterManager.isValidConfigurationFile( \"config.properties\" ) );\n  }\n\n  @Test\n  public void allowsNullSpoon() {\n    hadoopClusterManager = new HadoopClusterManager( null, namedClusterService, metaStore, \"apache\" );\n    hadoopClusterManager.refreshTree();\n    assertTrue( hadoopClusterManager.getNamedClusterConfigsRootDir().endsWith( \"Configs\" ) );\n  }\n\n  @Test public void testResetSecurity() throws ConfigurationException {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setName( ncTestName );\n    model.setSecurityType( \"None\" );\n    model.setKerberosSubType( \"Password\" );\n    model.setKerberosAuthenticationUsername( \"username\" );\n    model.setKerberosAuthenticationPassword( \"password\" );\n    model.setKerberosImpersonationUsername( \"impersonationusername\" );\n    model.setKerberosImpersonationPassword( \"impersonationpassword\" );\n\n    hadoopClusterManager.createNamedCluster( model, getFiles( \"/\" ) );\n\n    String configFile = System.getProperty( \"user.home\" ) + File.separator + \".pentaho\" + File.separator + \"metastore\"\n   
   + File.separator + \"pentaho\" + File.separator + \"NamedCluster\" + File.separator + \"Configs\" + File.separator\n      + \"ncTest\" + File.separator + \"config.properties\";\n\n    PropertiesConfiguration config = new PropertiesConfiguration( new File( configFile ) );\n    assertEquals( \"\", config.getProperty( \"pentaho.authentication.default.kerberos.principal\" ) );\n    assertEquals( \"\", config.getProperty( \"pentaho.authentication.default.kerberos.password\" ) );\n    assertEquals( \"\",\n      config.getProperty( \"pentaho.authentication.default.mapping.server.credentials.kerberos.principal\" ) );\n    assertEquals( \"\",\n      config.getProperty( \"pentaho.authentication.default.mapping.server.credentials.kerberos.password\" ) );\n    assertEquals( \"disabled\", config.getProperty( \"pentaho.authentication.default.mapping.impersonation.type\" ) );\n\n    assertEquals( \"\", config.getProperty( \"pentaho.authentication.default.kerberos.keytabLocation\" ) );\n    assertEquals( \"\",\n      config.getProperty( \"pentaho.authentication.default.mapping.server.credentials.kerberos.keytabLocation\" ) );\n  }\n\n  @Test public void testNamedClusterKerberosPasswordSecurityWithBlankPassword() throws ConfigurationException {\n    ThinNameClusterModel model = new ThinNameClusterModel();\n    model.setName( ncTestName );\n    model.setSecurityType( \"Kerberos\" );\n    model.setKerberosSubType( \"Password\" );\n    model.setKerberosAuthenticationUsername( \"username\" );\n    model.setKerberosAuthenticationPassword( \"password\" );\n    model.setKerberosImpersonationUsername( \"\" );\n    model.setKerberosImpersonationPassword( \"\" );\n\n    hadoopClusterManager.createNamedCluster( model, getFiles( \"/\" ) );\n\n    String configFile = System.getProperty( \"user.home\" ) + File.separator + \".pentaho\" + File.separator + \"metastore\"\n      + File.separator + \"pentaho\" + File.separator + \"NamedCluster\" + File.separator + \"Configs\" + File.separator\n     
 + \"ncTest\" + File.separator + \"config.properties\";\n\n    PropertiesConfiguration config = new PropertiesConfiguration( new File( configFile ) );\n    assertEquals( \"username\", config.getProperty( \"pentaho.authentication.default.kerberos.principal\" ) );\n    assertEquals( Encr.encryptPasswordIfNotUsingVariables( \"password\" ),\n      config.getProperty( \"pentaho.authentication.default.kerberos.password\" ) );\n    assertEquals( \"\",\n      config.getProperty( \"pentaho.authentication.default.mapping.server.credentials.kerberos.principal\" ) );\n    assertEquals( \"\",\n      config.getProperty( \"pentaho.authentication.default.mapping.server.credentials.kerberos.password\" ) );\n    assertEquals( \"simple\", config.getProperty( \"pentaho.authentication.default.mapping.impersonation.type\" ) );\n\n    ThinNameClusterModel retrievingModel = hadoopClusterManager.getNamedCluster( ncTestName );\n    assertEquals( \"Kerberos\", retrievingModel.getSecurityType() );\n    assertEquals( \"Password\", retrievingModel.getKerberosSubType() );\n    assertEquals( \"username\", retrievingModel.getKerberosAuthenticationUsername() );\n    assertEquals( \"Encrypted 2be98afc86aa7f2e4bb18bd63c99dbdde\", retrievingModel.getKerberosAuthenticationPassword() );\n    assertEquals( \"\", retrievingModel.getKerberosImpersonationUsername() );\n    assertEquals( \"\", retrievingModel.getKerberosImpersonationPassword() );\n  }\n\n  @After public void tearDown() throws IOException {\n    FileUtils.deleteDirectory( getShimTestDir() );\n    FileUtils.deleteDirectory( new File( \"src/test/resources/driver-destination\" ) );\n    FileUtils\n      .deleteDirectory( new File( hadoopClusterManager.getNamedClusterConfigsRootDir() + File.separator + knoxNC ) );\n  }\n\n  private File getShimTestDir() {\n    String\n      shimTestDir =\n      System.getProperty( \"user.home\" ) + File.separator + \".pentaho\" + File.separator + \"metastore\" + File.separator\n        + \"pentaho\" + 
File.separator + \"NamedCluster\" + File.separator + \"Configs\" + File.separator + ncTestName;\n    return new File( shimTestDir );\n  }\n\n  private Map<String, CachedFileItemStream> getFiles( String filesLocation ) {\n    return getFiles( filesLocation, null );\n  }\n\n  private Map<String, CachedFileItemStream> getFiles( String filesLocation, String customFieldName ) {\n    Map<String, CachedFileItemStream> fileItemStreamByName = new HashMap<>();\n    try {\n      File siteFilesDirectory = new File( filesLocation );\n      File[] siteFiles = siteFilesDirectory.listFiles();\n      for ( File siteFile : siteFiles ) {\n        String fieldName = customFieldName == null ? siteFile.getName() : customFieldName;\n        CachedFileItemStream cachedFileItemStream =\n          new CachedFileItemStream( new FileInputStream( siteFile ), siteFile.getName(), fieldName );\n        cachedFileItemStream.setLastModified(  siteFile.lastModified() );\n        fileItemStreamByName.put( fieldName, cachedFileItemStream );\n      }\n    } catch ( IOException e ) {\n      fileItemStreamByName = new HashMap<>();\n    }\n    return fileItemStreamByName;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/bad/core-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--Autogenerated by Cloudera Manager-->\n<configuration>\n  <property>\n    <name>fs.trash.interval</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>io.compression.codecs</name>\n    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,org.apache.hadoop.io.compress.SnappyCodec,org.apache.hadoop.io.compress.Lz4Codec</value>\n  </property>\n  <property>\n    <name>hadoop.security.authentication</name>\n    <value>simple</value>\n  </property>\n  <property>\n    <name>hadoop.security.authorization</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hadoop.rpc.protection</name>\n    <value>authentication</value>\n  </property>\n  <property>\n    <name>hadoop.security.auth_to_local</name>\n    <value>DEFAULT</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.oozie.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.oozie.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.mapred.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.mapred.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.flume.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.flume.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.HTTP.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.HTTP.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hive.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hive.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    
<name>hadoop.proxyuser.hue.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hue.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.httpfs.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.httpfs.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hdfs.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hdfs.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.yarn.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.yarn.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.security.group.mapping</name>\n    <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>\n  </property>\n  <property>\n    <name>hadoop.security.instrumentation.requires.admin</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>net.topology.script.file.name</name>\n    <value>/etc/hadoop/conf.cloudera.yarn/topology.py</value>\n  </property>\n  <property>\n    <name>io.file.buffer.size</name>\n    <value>65536</value>\n  </property>\n  <property>\n    <name>hadoop.ssl.enabled</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hadoop.ssl.require.client.cert</name>\n    <value>false</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.keystores.factory.class</name>\n    <value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.server.conf</name>\n    <value>ssl-server.xml</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.client.conf</name>\n    <value>ssl-client.xml</value>\n    <final>true</final>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/dataproc/core-site.xml",
    "content": "<?xml version=\"1.0\" ?>\n<?xml-stylesheet type=\"text/xsl\" href=\"configuration.xsl\"?>\n<!--\n  Licensed under the Apache License, Version 2.0 (the \"License\");\n  you may not use this file except in compliance with the License.\n  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing, software\n  distributed under the License is distributed on an \"AS IS\" BASIS,\n  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n  See the License for the specific language governing permissions and\n  limitations under the License. See accompanying LICENSE file.\n-->\n<!-- Put site-specific property overrides in this file. -->\n<configuration>\n  <property>\n    <name>fs.default.name</name>\n    <value>hdfs://cluster-ec0a</value>\n    <description>The old FileSystem used by FsShell.</description>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hive.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.security.token.service.use_ip</name>\n    <value>false</value>\n    <description>\n      Controls whether tokens always use IP addresses. DNS changes will not\n      be detected if this option is enabled. Existing client connections that\n      break will always reconnect to the IP of the original host. New clients\n      will connect to the host's new IP but fail to locate a token. 
Disabling\n      this option will allow existing and new clients to detect an IP change\n      and continue to locate the new host's token.\n    </description>\n  </property>\n  <property>\n    <name>hadoop.tmp.dir</name>\n    <value>/hadoop/tmp</value>\n    <description>A base for other temporary directories.</description>\n  </property>\n  <property>\n    <name>ha.zookeeper.parent-znode</name>\n    <value>/hadoop/hadoop-ha</value>\n    <description>\n      The ZooKeeper znode under which the ZK failover controller stores\n      its information. Note that the nameservice ID is automatically\n      appended to this znode, so it is not normally necessary to       configure\n      this, even in a federated environment.\n    </description>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hive.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>ha.zookeeper.quorum</name>\n    <value>cluster-ec0a-m-0:2181,cluster-ec0a-m-1:2181,cluster-ec0a-m-2:2181</value>\n  </property>\n  <property>\n    <name>fs.defaultFS</name>\n    <value>hdfs://cluster-ec0a</value>\n    <description>\n      The name of the default file system. A URI whose scheme and authority\n      determine the FileSystem implementation. The uri's scheme determines\n      the config property (fs.SCHEME.impl) naming the FileSystem\n      implementation class. The uri's authority is used to determine the\n      host, port, etc. 
for a filesystem.\n    </description>\n  </property>\n  <property>\n    <name>hadoop.zk.address</name>\n    <value>cluster-ec0a-m-0:2181,cluster-ec0a-m-1:2181,cluster-ec0a-m-2:2181</value>\n  </property>\n  <property>\n    <name>hadoop.http.filter.initializers</name>\n    <value>org.apache.hadoop.security.HttpCrossOriginFilterInitializer,org.apache.hadoop.http.lib.StaticUserWebFilter</value>\n  </property>\n  <property>\n    <name>fs.gs.working.dir</name>\n    <value>/</value>\n    <description>\n      The directory relative gs: uris resolve in inside of the default bucket.\n    </description>\n  </property>\n  <property>\n    <name>fs.gs.system.bucket</name>\n    <value>dataproc-staging-us-central1-888792280192-7eit8tmn</value>\n    <description>\n      GCS bucket to use as a default bucket if fs.default.name is not a gs: uri.\n    </description>\n  </property>\n  <property>\n    <name>fs.gs.metadata.cache.directory</name>\n    <value>/hadoop_gcs_connector_metadata_cache</value>\n    <description>\n      Only used if fs.gs.metadata.cache.type is FILESYSTEM_BACKED, specifies\n      the local path to use as the base path for storing mirrored GCS metadata.\n      Must be an absolute path, must be a directory, and must be fully\n      readable/writable/executable by any user running processes which use the\n      GCS connector.\n    </description>\n  </property>\n  <property>\n    <name>fs.gs.impl</name>\n    <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>\n    <description>The FileSystem for gs: (GCS) uris.</description>\n  </property>\n  <property>\n    <name>fs.gs.project.id</name>\n    <value>hitachi3-281807</value>\n    <description>\n      Google Cloud Project ID with access to configured GCS buckets.\n    </description>\n  </property>\n  <property>\n    <name>fs.gs.metadata.cache.enable</name>\n    <value>false</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    
<name>fs.gs.implicit.dir.infer.enable</name>\n    <value>true</value>\n    <description>\n      If set, we create and return in-memory directory objects on the fly when\n      no backing object exists, but we know there are files with the same\n      prefix.\n    </description>\n  </property>\n  <property>\n    <name>fs.gs.application.name.suffix</name>\n    <value>-dataproc</value>\n    <description>\n      Appended to the user-agent header for API requests to GCS to help identify\n      the traffic as coming from Dataproc.\n    </description>\n  </property>\n  <property>\n    <name>fs.AbstractFileSystem.gs.impl</name>\n    <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS</value>\n    <description>The AbstractFileSystem for gs: (GCS) uris.</description>\n  </property>\n  <property>\n    <name>fs.gs.metadata.cache.type</name>\n    <value>FILESYSTEM_BACKED</value>\n    <description>\n      Specifies which implementation of DirectoryListCache to use for\n      supplementing GCS API &quot;list&quot; requests.\n      Supported       implementations:       IN_MEMORY: Enforces immediate\n      consistency within       same Java process.       
FILESYSTEM_BACKED:\n      Enforces consistency across       all cooperating processes       pointed\n      at the same local mirror       directory, which may be an NFS directory\n      for massively-distributed       coordination.\n    </description>\n  </property>\n  <property>\n    <name>fs.gs.block.size</name>\n    <value>134217728</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>hadoop.ssl.enabled.protocols</name>\n    <value>TLSv1.1,TLSv1.2,TLSv1.3</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.oozie.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.oozie.groups</name>\n    <value>*</value>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/dataproc/hdfs-site.xml",
    "content": "<?xml version=\"1.0\" ?>\n<?xml-stylesheet type=\"text/xsl\" href=\"configuration.xsl\"?>\n<!--\n  Licensed under the Apache License, Version 2.0 (the \"License\");\n  you may not use this file except in compliance with the License.\n  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing, software\n  distributed under the License is distributed on an \"AS IS\" BASIS,\n  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n  See the License for the specific language governing permissions and\n  limitations under the License. See accompanying LICENSE file.\n-->\n<!-- Put site-specific property overrides in this file. -->\n<configuration>\n  <property>\n    <name>dfs.namenode.file.close.num-committed-allowed</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>dfs.namenode.shared.edits.dir</name>\n    <value>qjournal://cluster-ec0a-m-0:8485;cluster-ec0a-m-1:8485;cluster-ec0a-m-2:8485/cluster-ec0a</value>\n    <description>\n      A directory on shared storage between the multiple namenodes       in an\n      HA cluster. This directory will be written by the active and read       by\n      the standby in order to keep the namespaces synchronized. This directory\n      does not need to be listed in dfs.namenode.edits.dir above. It should be\n      left empty in a non-HA cluster.\n    </description>\n  </property>\n  <property>\n    <name>dfs.namenode.name.dir</name>\n    <value>file:///hadoop/dfs/name</value>\n    <description>\n      Determines where on the local filesystem the DFS namenode should store the\n      name table(fsimage). 
If this is a comma-delimited list of directories then\n      the name table is replicated in all of the directories, for redundancy.\n    </description>\n  </property>\n  <property>\n    <name>dfs.permissions.enabled</name>\n    <value>false</value>\n    <description>\n      If &quot;true&quot;, enable permission checking in HDFS. If\n      &quot;false&quot;, permission       checking is turned off, but\n      all other       behavior is unchanged. Switching       from one parameter\n      value to the       other does not change the mode, owner or       group of\n      files or       directories.\n    </description>\n  </property>\n  <property>\n    <name>dfs.client.read.shortcircuit</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>dfs.ha.automatic-failover.enabled</name>\n    <value>true</value>\n    <description>\n      Whether automatic failover is enabled. See the HDFS High\n      Availability documentation for details on automatic HA\n      configuration.\n    </description>\n  </property>\n  <property>\n    <name>dfs.journalnode.edits.dir</name>\n    <value>/var/tmp</value>\n  </property>\n  <property>\n    <name>dfs.replication</name>\n    <value>2</value>\n    <description>\n      Default block replication. The actual number of replications can be\n      specified when the file is created. The default is used if replication\n      is not specified in create time.\n    </description>\n  </property>\n  <property>\n    <name>dfs.namenode.checkpoint.dir</name>\n    <value>file:///hadoop/dfs/namesecondary</value>\n    <description>\n      Determines where on the local filesystem the DFS secondary namenode should\n      store       the temporary images to merge. 
If this is a comma-delimited\n      list of directories then       the image is replicated in all of the\n      directories for redundancy.\n    </description>\n  </property>\n  <property>\n    <name>dfs.nameservices</name>\n    <value>cluster-ec0a</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.datanode.data.dir</name>\n    <value>/hadoop/dfs/data</value>\n    <description>\n      Determines where on the local filesystem an DFS datanode should store its\n      blocks. If this is a comma-delimited list of directories, then data will\n      be stored in all named directories, typically on different\n      devices.Directories that do not exist are ignored.\n    </description>\n  </property>\n  <property>\n    <name>dfs.namenode.rpc-address.cluster-ec0a.nn1</name>\n    <value>cluster-ec0a-m-1:8020</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.namenode.rpc-address.cluster-ec0a.nn0</name>\n    <value>cluster-ec0a-m-0:8020</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.client.failover.proxy.provider.cluster-ec0a</name>\n    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>\n  </property>\n  <property>\n    <name>dfs.permissions.supergroup</name>\n    <value>hadoop</value>\n    <description>The name of the group of super-users.</description>\n  </property>\n  <property>\n    <name>dfs.hosts</name>\n    <value>/etc/hadoop/conf/nodes_include</value>\n  </property>\n  <property>\n    <name>dfs.ha.fencing.methods</name>\n    <value>shell(/bin/true)</value>\n  </property>\n  <property>\n    <name>dfs.namenode.datanode.registration.retry-hostname-dns-lookup</name>\n    <value>true</value>\n    <description>\n      If true, then the namenode will retry reverse dns lookup for hostname of\n      the      
 datanode. This helps in environments where DNS lookup can be\n      flaky.\n    </description>\n  </property>\n  <property>\n    <name>dfs.ha.namenodes.cluster-ec0a</name>\n    <value>nn0,nn1</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.namenode.http-address.cluster-ec0a.nn1</name>\n    <value>cluster-ec0a-m-1:9870</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.namenode.http-address.cluster-ec0a.nn0</name>\n    <value>cluster-ec0a-m-0:9870</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.domain.socket.path</name>\n    <value>/var/lib/hadoop-hdfs/dn_socket</value>\n  </property>\n  <property>\n    <name>dfs.hosts.exclude</name>\n    <value>/etc/hadoop/conf/nodes_exclude</value>\n  </property>\n  <property>\n    <name>dfs.datanode.data.dir.perm</name>\n    <value>700</value>\n    <description>\n      Permissions for the directories on on the local filesystem where the DFS\n      data node store its blocks. 
The permissions can either be octal or\n      symbolic.\n    </description>\n  </property>\n  <property>\n    <name>dfs.namenode.servicerpc-address.cluster-ec0a.nn1</name>\n    <value>cluster-ec0a-m-1:8051</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.namenode.servicerpc-address.cluster-ec0a.nn0</name>\n    <value>cluster-ec0a-m-0:8051</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.namenode.https-address.cluster-ec0a.nn1</name>\n    <value>cluster-ec0a-m-1:9871</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.namenode.https-address.cluster-ec0a.nn0</name>\n    <value>cluster-ec0a-m-0:9871</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.namenode.https-address</name>\n    <value>0.0.0.0:9871</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.namenode.service.handler.count</name>\n    <value>10</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.namenode.handler.count</name>\n    <value>20</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.datanode.address</name>\n    <value>0.0.0.0:9866</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.namenode.http-address</name>\n    <value>0.0.0.0:9870</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.datanode.https.address</name>\n    <value>0.0.0.0:9865</value>\n    <final>false</final>\n    <source>Dataproc Cluster 
Properties</source>\n  </property>\n  <property>\n    <name>dfs.namenode.secondary.http-address</name>\n    <value>0.0.0.0:9868</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.namenode.secondary.https-address</name>\n    <value>0.0.0.0:9869</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.datanode.http.address</name>\n    <value>0.0.0.0:9864</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.datanode.ipc.address</name>\n    <value>0.0.0.0:9867</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.namenode.lifeline.rpc-address.cluster-ec0a.nn0</name>\n    <value>cluster-ec0a-m-0:8050</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>dfs.namenode.lifeline.rpc-address.cluster-ec0a.nn1</name>\n    <value>cluster-ec0a-m-1:8050</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/dataproc/hive-site.xml",
    "content": "<?xml version=\"1.0\" ?>\n<!--\n  Licensed to the Apache Software Foundation (ASF) under one or more\n  contributor license agreements.  See the NOTICE file distributed with\n  this work for additional information regarding copyright ownership.\n  The ASF licenses this file to You under the Apache License, Version 2.0\n  (the \"License\"); you may not use this file except in compliance with\n  the License.  You may obtain a copy of the License at\n\n      http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing, software\n  distributed under the License is distributed on an \"AS IS\" BASIS,\n  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n  See the License for the specific language governing permissions and\n  limitations under the License.\n-->\n<?xml-stylesheet type=\"text/xsl\" href=\"configuration.xsl\"?>\n<configuration>\n  <!-- Hive Configuration can either be stored in this file or in the hadoop configuration files  -->\n  <!-- that are implied by Hadoop setup variables.                                                -->\n  <!-- Aside from Hadoop setup variables - this file is provided as a convenience so that Hive    -->\n  <!-- users do not have to edit hadoop configuration files (that may be managed as a centralized -->\n  <!-- resource).                                                                                 
-->\n  <!-- Hive Execution Parameters -->\n  <property>\n    <name>javax.jdo.option.ConnectionURL</name>\n    <!--\n      Will be clobbered at startup by master 0.\n      Required to run schema tool during image build.\n    -->\n    <value>jdbc:mysql://cluster-ec0a-m-0/metastore</value>\n    <description>the URL of the MySQL database</description>\n  </property>\n  <property>\n    <name>javax.jdo.option.ConnectionDriverName</name>\n    <value>com.mysql.jdbc.Driver</value>\n  </property>\n  <property>\n    <name>javax.jdo.option.ConnectionUserName</name>\n    <value>hive</value>\n  </property>\n  <property>\n    <name>datanucleus.fixedDatastore</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>javax.jdo.option.ConnectionPassword</name>\n    <value>hive-password</value>\n  </property>\n  <property>\n    <name>datanucleus.autoStartMechanism</name>\n    <value>SchemaTable</value>\n  </property>\n  <property>\n    <!--\n      Crank up low-level retries from default value of 3. Hive 2.* will have\n      metastore connection attempts fast-fail instead of hanging between\n      \"Starting hive metastore\" and \"Started the new metastore...\", and\n      these retries happen with only 1 second between attempts. 
Metastore\n      startup appears to take ~5 seconds; in the rare case of startup\n      longer than 60 seconds, the secondary retry loop waits 1 minute between\n      attempts.\n    -->\n    <name>hive.metastore.connect.retries</name>\n    <value>60</value>\n  </property>\n  <property>\n    <name>datanucleus.autoCreateSchema</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.localize.resource.num.wait.attempts</name>\n    <value>25</value>\n  </property>\n  <property>\n    <name>hive.execution.engine</name>\n    <value>tez</value>\n  </property>\n  <property>\n    <name>hive.metastore.uris</name>\n    <value>thrift://cluster-ec0a-m-0:9083,thrift://cluster-ec0a-m-1:9083,thrift://cluster-ec0a-m-2:9083</value>\n  </property>\n  <property>\n    <name>hive.zookeeper.quorum</name>\n    <value>cluster-ec0a-m-0:2181,cluster-ec0a-m-1:2181,cluster-ec0a-m-2:2181</value>\n  </property>\n  <property>\n    <name>hive.zookeeper.client.port</name>\n    <value>2181</value>\n  </property>\n  <property>\n    <name>hive.server2.support.dynamic.service.discovery</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.server2.zookeeper.namespace</name>\n    <value>hiveserver2</value>\n  </property>\n  <property>\n    <name>hive.support.concurrency</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.zookeeper.session.timeout</name>\n    <value>1200000</value>\n  </property>\n  <property>\n    <name>hive.user.install.directory</name>\n    <value>gs://dataproc-staging-us-central1-888792280192-7eit8tmn/google-cloud-dataproc-metainfo/e553cf63-468f-4b23-a59f-1025ad8e335d/hive/user-install-dir</value>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/dataproc/mapred-site.xml",
    "content": "<?xml version=\"1.0\" ?>\n<!--\n  Licensed to the Apache Software Foundation (ASF) under one or more\n  contributor license agreements.  See the NOTICE file distributed with\n  this work for additional information regarding copyright ownership.\n  The ASF licenses this file to You under the Apache License, Version 2.0\n  (the \"License\"); you may not use this file except in compliance with\n  the License.  You may obtain a copy of the License at\n\n      http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing, software\n  distributed under the License is distributed on an \"AS IS\" BASIS,\n  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n  See the License for the specific language governing permissions and\n  limitations under the License.\n-->\n<?xml-stylesheet type=\"text/xsl\" href=\"configuration.xsl\"?>\n<configuration>\n  <property>\n    <name>mapreduce.job.reduces</name>\n    <value>2</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>mapreduce.jobhistory.address</name>\n    <value>cluster-ec0a-m-0:10020</value>\n    <description>MapReduce JobHistory Server IPC host:port</description>\n  </property>\n  <property>\n    <name>mapreduce.framework.name</name>\n    <value>yarn</value>\n  </property>\n  <property>\n    <name>mapreduce.fileoutputcommitter.task.cleanup.enabled</name>\n    <value>true</value>\n    <description>\n      Whether tasks should delete their task temporary directories. This is\n      purely an optimization for filesystems without O(1) recursive delete,\n      as commitJob will recursively delete the entire job temporary\n      directory. HDFS has O(1) recursive delete, so this parameter is left\n      false by default. Users of object stores, for example, may want to set\n      this to true.        
Note: this is only used if\n      mapreduce.fileoutputcommitter.algorithm.version=2       See\n      https://issues.apache.org/jira/browse/MAPREDUCE-7029 for details.\n    </description>\n  </property>\n  <property>\n    <name>yarn.app.mapreduce.am.resource.mb</name>\n    <value>1024</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>mapreduce.map.java.opts</name>\n    <value>-Xmx819m</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>mapreduce.jobhistory.recovery.store.fs.uri</name>\n    <value>${hadoop.tmp.dir}/mapred/history/recoverystore</value>\n    <description>URI where history server state is stored.</description>\n  </property>\n  <property>\n    <name>mapreduce.job.working.dir</name>\n    <value>/user/${user.name}</value>\n    <description>\n      The FileSystem working directory to use for relative paths.\n    </description>\n  </property>\n  <property>\n    <name>mapred.local.dir</name>\n    <value>/hadoop/mapred/local</value>\n    <description>\n      Directories on the local machine in which to store mapreduce temp files.\n    </description>\n  </property>\n  <property>\n    <name>mapreduce.fileoutputcommitter.failures.attempts</name>\n    <value>4</value>\n    <description>\n      Number of attempts when failure happens in commit job.\n    </description>\n  </property>\n  <property>\n    <name>mapreduce.reduce.java.opts</name>\n    <value>-Xmx1638m</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>mapreduce.map.memory.mb</name>\n    <value>1024</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>mapreduce.reduce.memory.mb</name>\n    <value>2048</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    
<name>mapreduce.jobhistory.recovery.enable</name>\n    <value>true</value>\n    <description>\n      Enable history server to recover server state on startup.\n    </description>\n  </property>\n  <property>\n    <name>mapreduce.tasktracker.map.tasks.maximum</name>\n    <value>1</value>\n    <description>\n      Property from MapReduce version 1 still used for TeraGen sharding.\n    </description>\n  </property>\n  <property>\n    <name>mapreduce.input.fileinputformat.list-status.num-threads</name>\n    <value>20</value>\n    <description>\n      The number of threads to use to list and fetch block locations for the\n      specified input paths. Note: multiple threads should not be used if a\n      custom non thread-safe path filter is used. Setting a larger value than\n      the default of 1 can significantly improve job startup overhead,\n      especially if using GCS as input with multi-level directories, such\n      as in partitioned Hive tables.\n    </description>\n  </property>\n  <property>\n    <name>mapreduce.reduce.cpu.vcores</name>\n    <value>1</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>mapreduce.map.cpu.vcores</name>\n    <value>1</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>mapreduce.jobhistory.recovery.store.class</name>\n    <value>org.apache.hadoop.mapreduce.v2.hs.HistoryServerFileSystemStateStoreService</value>\n    <description>\n      Class used to store history server state for recovery.\n    </description>\n  </property>\n  <property>\n    <name>mapreduce.job.maps</name>\n    <value>15</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>mapreduce.jobhistory.webapp.address</name>\n    <value>cluster-ec0a-m-0:19888</value>\n    <description>MapReduce JobHistory Server Web UI host:port</description>\n  </property>\n  
<property>\n    <name>yarn.app.mapreduce.am.command-opts</name>\n    <value>-Xmx819m</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>yarn.app.mapreduce.am.resource.cpu-vcores</name>\n    <value>1</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>mapreduce.jobhistory.always-scan-user-dir</name>\n    <value>true</value>\n    <description>Enable history server to always scan user dir.</description>\n  </property>\n  <property>\n    <name>mapreduce.fileoutputcommitter.algorithm.version</name>\n    <value>2</value>\n    <description>\n      Updated file output committer algorithm in Hadoop 2.7+. Significantly\n      improves commitJob times when using the Google Cloud Storage connector.\n      See https://issues.apache.org/jira/browse/MAPREDUCE-4815 for more details.\n    </description>\n  </property>\n  <property>\n    <name>mapreduce.application.classpath</name>\n    <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,\n      $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*,\n      /usr/local/share/google/dataproc/lib/*</value>\n  </property>\n  <property>\n    <name>mapred.bq.project.id</name>\n    <value>hitachi3-281807</value>\n    <description>\n      Google Cloud Project ID to use for BigQuery operations.\n    </description>\n  </property>\n  <property>\n    <name>mapred.bq.output.buffer.size</name>\n    <value>67108864</value>\n    <description>\n      The size in bytes of the output buffer to use when writing to BigQuery.\n    </description>\n  </property>\n  <property>\n    <name>mapred.bq.gcs.bucket</name>\n    <value>dataproc-staging-us-central1-888792280192-7eit8tmn</value>\n    <description>\n      The GCS bucket holding temporary BigQuery data for the input connector.\n    </description>\n  </property>\n  <property>\n    <name>mapreduce.job.reduce.slowstart.completedmaps</name>\n    <value>0.95</value>\n    
<final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>mapreduce.task.io.sort.mb</name>\n    <value>256</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/dataproc/yarn-site.xml",
    "content": "<?xml version=\"1.0\" ?>\n<!--\n  Licensed under the Apache License, Version 2.0 (the \"License\");\n  you may not use this file except in compliance with the License.\n  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing, software\n  distributed under the License is distributed on an \"AS IS\" BASIS,\n  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n  See the License for the specific language governing permissions and\n  limitations under the License. See accompanying LICENSE file.\n-->\n<configuration>\n  <!-- Site specific YARN configuration properties -->\n  <property>\n    <name>yarn.resourcemanager.webapp.methods-allowed</name>\n    <value>GET,HEAD</value>\n    <description>\n      The HTTP methods allowed by the YARN Resource Manager web UI and REST API.\n    </description>\n  </property>\n  <property>\n    <description>\n      Name of the cluster. 
In a HA setting, this is used to ensure the RM\n      participates in leader election for this cluster and ensures it does\n      not affect other clusters.\n    </description>\n    <name>yarn.resourcemanager.cluster-id</name>\n    <value>cluster-ec0a</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.store.class</name>\n    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>\n  </property>\n  <property>\n    <name>yarn.nodemanager.local-dirs</name>\n    <value>/hadoop/yarn/nm-local-dir</value>\n    <description>\n      Directories on the local machine in which to store application temp\n      files.\n    </description>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.hostname</name>\n    <value>cluster-ec0a-m-0</value>\n  </property>\n  <property>\n    <name>yarn.nodemanager.resource.memory-mb</name>\n    <value>3072</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>yarn.scheduler.minimum-allocation-mb</name>\n    <value>256</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.nodes.include-path</name>\n    <value>/etc/hadoop/conf/nodes_include</value>\n  </property>\n  <property>\n    <name>yarn.nodemanager.container-executor.os.sched.priority.adjustment</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>yarn.nodemanager.resource.cpu-vcores</name>\n    <value>1</value>\n    <description>\n      Number of vcores that can be allocated for containers. This is used by\n      the RM scheduler when allocating resources for containers. 
This is not\n      used to limit the number of physical cores used by YARN containers.\n    </description>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name>\n    <value>/hadoop/yarn-leader-election</value>\n    <description>\n      The base znode path to use for storing leader information, when\n      using ZooKeeper based leader election.\n    </description>\n  </property>\n  <property>\n    <name>yarn.scheduler.maximum-allocation-mb</name>\n    <value>3072</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.ha.enabled</name>\n    <value>true</value>\n    <description>\n      Enable RM high-availability. When enabled, (1) The RM starts in the\n      Standby mode by default, and transitions to the Active mode when\n      prompted to. (2) The nodes in the RM ensemble are listed in\n      yarn.resourcemanager.ha.rm-ids. (3) The id of each RM either comes\n      from yarn.resourcemanager.ha.id if yarn.resourcemanager.ha.id is\n      explicitly specified, or can be figured out by matching\n      yarn.resourcemanager.address.{id} with the local address. (4) The actual\n      physical addresses come from the configs of the pattern\n      {rpc-config}.{id}.\n    </description>\n  </property>\n  <property>\n    <name>yarn.client.failover-sleep-max-ms</name>\n    <value>15000</value>\n    <description>\n      When HA is enabled, the maximum sleep time (in milliseconds) between\n      failovers. When set, this overrides the yarn.resourcemanager.connect.*\n      settings. 
When not set, yarn.resourcemanager.connect.retry-interval.ms is\n      used instead.\n    </description>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.zk-state-store.parent-path</name>\n    <value>/hadoop/rmstore</value>\n    <description>\n      Full path of the ZooKeeper znode where RM state will be     stored. This\n      must be supplied when using\n      org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore\n      as the value for yarn.resourcemanager.store.class\n    </description>\n  </property>\n  <property>\n    <name>yarn.log-aggregation-enable</name>\n    <value>false</value>\n    <description>Enable remote logs aggregation to the default FS.</description>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.hostname.rm0</name>\n    <value>cluster-ec0a-m-0</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.hostname.rm1</name>\n    <value>cluster-ec0a-m-1</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.hostname.rm2</name>\n    <value>cluster-ec0a-m-2</value>\n  </property>\n  <property>\n    <name>yarn.client.failover-max-attempts</name>\n    <value>15</value>\n    <description>\n      When HA is enabled, the max number of times FailoverProxyProvider should\n      attempt failover. When set, this overrides the\n      yarn.resourcemanager.connect.max-wait.ms. When not set, this is inferred\n      from yarn.resourcemanager.connect.max-wait.ms.\n    </description>\n  </property>\n  <property>\n    <name>yarn.nodemanager.aux-services</name>\n    <value>mapreduce_shuffle,spark_shuffle</value>\n  </property>\n  <property>\n    <name>yarn.nodemanager.vmem-check-enabled</name>\n    <value>false</value>\n  </property>\n  <property>\n    <description>\n      The maximum allocation for every container request at the RM,       in\n      terms of virtual CPU cores. 
Requests higher than this won't take\n      effect, and will get capped to this value.\n    </description>\n    <name>yarn.scheduler.maximum-allocation-vcores</name>\n    <value>32000</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.webapp.address.rm0</name>\n    <value>cluster-ec0a-m-0:8088</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.webapp.address.rm1</name>\n    <value>cluster-ec0a-m-1:8088</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.webapp.address.rm2</name>\n    <value>cluster-ec0a-m-2:8088</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.nodes.exclude-path</name>\n    <value>/etc/hadoop/conf/nodes_exclude</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.zk-timeout-ms</name>\n    <value>60000</value>\n    <description>\n      ZooKeeper session timeout in milliseconds. Session expiration is managed\n      by the ZooKeeper cluster itself, not by the client. This value is used by\n      the cluster to determine when the client's session expires. Expirations\n      happens when the cluster does not hear from the client within the\n      specified session timeout period (i.e. no heartbeat).\n    </description>\n  </property>\n  <property>\n    <name>yarn.client.failover-sleep-base-ms</name>\n    <value>500</value>\n    <description>\n      When HA is enabled, the sleep base (in milliseconds) to be used for\n      calculating the exponential delay between failovers. When set, this\n      overrides the yarn.resourcemanager.connect.* settings. 
When not set,\n      yarn.resourcemanager.connect.retry-interval.ms is used instead.\n    </description>\n  </property>\n  <property>\n    <name>yarn.nodemanager.remote-app-log-dir</name>\n    <value>/yarn-logs/</value>\n    <description>\n      The remote path, on the default FS, to store logs.\n    </description>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.recovery.enabled</name>\n    <value>true</value>\n    <description>Enable RM to recover state after starting.</description>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.ha.rm-ids</name>\n    <value>rm0,rm1,rm2</value>\n    <description>\n      The list of RM nodes in the cluster when HA is enabled. See\n      description of yarn.resourcemanager.ha.enabled for full details on\n      how this is used.\n    </description>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.bind-host</name>\n    <value>0.0.0.0</value>\n  </property>\n  <property>\n    <name>yarn.application.classpath</name>\n    <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,\n      $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_MAPRED_HOME/*,\n      $HADOOP_MAPRED_HOME/lib/*,$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*,\n      /usr/local/share/google/dataproc/lib/*</value>\n  </property>\n  <property>\n    <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>\n    <value>org.apache.spark.network.yarn.YarnShuffleService</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.webapp.cross-origin.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>yarn.timeline-service.http-cross-origin.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>yarn.timeline-service.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>yarn.timeline-service.hostname</name>\n    <value>cluster-ec0a-m-0</value>\n  </property>\n  <property>\n    <name>yarn.timeline-service.bind-host</name>\n  
  <value>0.0.0.0</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.system-metrics-publisher.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>yarn.timeline-service.generic-application-history.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>yarn.timeline-service.ui-names</name>\n    <value>tez</value>\n  </property>\n  <property>\n    <name>yarn.timeline-service.ui-on-disk-path.tez</name>\n    <value>/usr/lib/tez/tez-ui-0.9.2.war</value>\n  </property>\n  <property>\n    <name>yarn.timeline-service.ui-web-path.tez</name>\n    <value>/tez-ui</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.nodemanager-graceful-decommission-timeout-secs</name>\n    <value>86400</value>\n    <final>false</final>\n    <source>Dataproc Cluster Properties</source>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/driver-source/driver.kar",
    "content": ""
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/keytab/test.keytab",
    "content": ""
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/missing-info/core-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--Autogenerated by Cloudera Manager-->\n<configuration>\n  <property>\n    <!--Value commented out to test that value is not set in Named Cluster as \"localhost\"-->\n    <!--name>fs.defaultFS</name-->\n    <!--value>hdfs://cdh62n1.pentaho.net</value-->\n  </property>\n  <property>\n    <name>fs.trash.interval</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>io.compression.codecs</name>\n    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,org.apache.hadoop.io.compress.SnappyCodec,org.apache.hadoop.io.compress.Lz4Codec</value>\n  </property>\n  <property>\n    <name>hadoop.security.authentication</name>\n    <value>simple</value>\n  </property>\n  <property>\n    <name>hadoop.security.authorization</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hadoop.rpc.protection</name>\n    <value>authentication</value>\n  </property>\n  <property>\n    <name>hadoop.security.auth_to_local</name>\n    <value>DEFAULT</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.oozie.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.oozie.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.flume.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.flume.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.HTTP.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.HTTP.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hive.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hive.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    
<name>hadoop.proxyuser.hue.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hue.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.httpfs.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.httpfs.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hdfs.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hdfs.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.yarn.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.yarn.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.security.group.mapping</name>\n    <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>\n  </property>\n  <property>\n    <name>hadoop.security.instrumentation.requires.admin</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>net.topology.script.file.name</name>\n    <value>/etc/hadoop/conf.cloudera.yarn/topology.py</value>\n  </property>\n  <property>\n    <name>io.file.buffer.size</name>\n    <value>65536</value>\n  </property>\n  <property>\n    <name>hadoop.ssl.enabled</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hadoop.ssl.require.client.cert</name>\n    <value>false</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.keystores.factory.class</name>\n    <value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.server.conf</name>\n    <value>ssl-server.xml</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.client.conf</name>\n    <value>ssl-client.xml</value>\n    <final>true</final>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/missing-info/hive-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--Autogenerated by Cloudera Manager-->\n<configuration>\n  <property>\n    <name>hive.metastore.uris</name>\n    <value>thrift://svqxbdcn6cdh514un4.pentahoqa.com:9083</value>\n  </property>\n  <property>\n    <name>hive.metastore.client.socket.timeout</name>\n    <value>300</value>\n  </property>\n  <property>\n    <name>hive.metastore.warehouse.dir</name>\n    <value>/user/hive/warehouse</value>\n  </property>\n  <property>\n    <name>hive.warehouse.subdir.inherit.perms</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.auto.convert.join</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.auto.convert.join.noconditionaltask.size</name>\n    <value>20971520</value>\n  </property>\n  <property>\n    <name>hive.optimize.bucketmapjoin.sortedmerge</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.smbjoin.cache.rows</name>\n    <value>10000</value>\n  </property>\n  <property>\n    <name>hive.server2.logging.operation.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.server2.logging.operation.log.location</name>\n    <value>/var/log/hive/operation_logs</value>\n  </property>\n  <property>\n    <name>mapred.reduce.tasks</name>\n    <value>-1</value>\n  </property>\n  <property>\n    <name>hive.exec.reducers.bytes.per.reducer</name>\n    <value>67108864</value>\n  </property>\n  <property>\n    <name>hive.exec.copyfile.maxsize</name>\n    <value>104857600</value>\n  </property>\n  <property>\n    <name>hive.exec.reducers.max</name>\n    <value>1099</value>\n  </property>\n  <property>\n    <name>hive.vectorized.groupby.checkinterval</name>\n    <value>4096</value>\n  </property>\n  <property>\n    <name>hive.vectorized.groupby.flush.percent</name>\n    <value>0.1</value>\n  </property>\n  <property>\n    <name>hive.compute.query.using.stats</name>\n    <value>false</value>\n  </property>\n  
<property>\n    <name>hive.vectorized.execution.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.vectorized.execution.reduce.enabled</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.merge.mapfiles</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.merge.mapredfiles</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.cbo.enable</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.fetch.task.conversion</name>\n    <value>minimal</value>\n  </property>\n  <property>\n    <name>hive.fetch.task.conversion.threshold</name>\n    <value>268435456</value>\n  </property>\n  <property>\n    <name>hive.limit.pushdown.memory.usage</name>\n    <value>0.1</value>\n  </property>\n  <property>\n    <name>hive.merge.sparkfiles</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.merge.smallfiles.avgsize</name>\n    <value>16777216</value>\n  </property>\n  <property>\n    <name>hive.merge.size.per.task</name>\n    <value>268435456</value>\n  </property>\n  <property>\n    <name>hive.optimize.reducededuplication</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.optimize.reducededuplication.min.reducer</name>\n    <value>4</value>\n  </property>\n  <property>\n    <name>hive.map.aggr</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.map.aggr.hash.percentmemory</name>\n    <value>0.5</value>\n  </property>\n  <property>\n    <name>hive.optimize.sort.dynamic.partition</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.execution.engine</name>\n    <value>mr</value>\n  </property>\n  <property>\n    <name>spark.executor.memory</name>\n    <value>1828926259</value>\n  </property>\n  <property>\n    <name>spark.driver.memory</name>\n    <value>966367641</value>\n  </property>\n  <property>\n    <name>spark.executor.cores</name>\n    
<value>4</value>\n  </property>\n  <property>\n    <name>spark.yarn.driver.memoryOverhead</name>\n    <value>102</value>\n  </property>\n  <property>\n    <name>spark.yarn.executor.memoryOverhead</name>\n    <value>307</value>\n  </property>\n  <property>\n    <name>spark.dynamicAllocation.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>spark.dynamicAllocation.initialExecutors</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>spark.dynamicAllocation.minExecutors</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>spark.dynamicAllocation.maxExecutors</name>\n    <value>2147483647</value>\n  </property>\n  <property>\n    <name>hive.metastore.execute.setugi</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.support.concurrency</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.zookeeper.quorum</name>\n    <value>svqxbdcn6cdh514un1.pentahoqa.com,svqxbdcn6cdh514un5.pentahoqa.com,svqxbdcn6cdh514un4.pentahoqa.com,svqxbdcn6cdh514un2.pentahoqa.com,svqxbdcn6cdh514un3.pentahoqa.com</value>\n  </property>\n  <property>\n    <name>hive.zookeeper.client.port</name>\n    <value>2181</value>\n  </property>\n  <property>\n    <name>hive.zookeeper.namespace</name>\n    <value>hive_zookeeper_namespace_hive</value>\n  </property>\n  <property>\n    <name>hbase.zookeeper.quorum</name>\n    <value>svqxbdcn6cdh514un1.pentahoqa.com,svqxbdcn6cdh514un5.pentahoqa.com,svqxbdcn6cdh514un4.pentahoqa.com,svqxbdcn6cdh514un2.pentahoqa.com,svqxbdcn6cdh514un3.pentahoqa.com</value>\n  </property>\n  <property>\n    <name>hbase.zookeeper.property.clientPort</name>\n    <value>2181</value>\n  </property>\n  <property>\n    <name>hive.cluster.delegation.token.store.class</name>\n    <value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value>\n  </property>\n  <property>\n    <name>hive.server2.enable.doAs</name>\n    <value>true</value>\n  </property>\n  <property>\n    
<name>hive.server2.use.SSL</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>spark.shuffle.service.enabled</name>\n    <value>true</value>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/missing-info/oozie-default.xml",
    "content": "<?xml version=\"1.0\"?>\n<?xml-stylesheet type=\"text/xsl\" href=\"configuration.xsl\"?>\n<!--\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing, software\n  distributed under the License is distributed on an \"AS IS\" BASIS,\n  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n  See the License for the specific language governing permissions and\n  limitations under the License.\n-->\n<configuration>\n\n    <!-- ************************** VERY IMPORTANT  ************************** -->\n    <!-- This file is in the Oozie configuration directory only for reference. -->\n    <!-- It is not loaded by Oozie, Oozie uses its own private copy.           
-->\n    <!-- ************************** VERY IMPORTANT  ************************** -->\n\n    <property>\n        <name>oozie.output.compression.codec</name>\n        <value>gz</value>\n        <description>\n            The name of the compression codec to use.\n            The implementation class for the codec needs to be specified through another property oozie.compression.codecs.\n            You can specify a comma separated list of 'Codec_name'='Codec_class' for oozie.compression.codecs\n            where codec class implements the interface org.apache.oozie.compression.CompressionCodec.\n            If oozie.compression.codecs is not specified, gz codec implementation is used by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.external_monitoring.enable</name>\n        <value>false</value>\n        <description>\n            If the oozie functional metrics needs to be exposed to the metrics-server backend, set it to true\n            If set to true, the following properties has to be specified : oozie.metrics.server.name,\n            oozie.metrics.host, oozie.metrics.prefix, oozie.metrics.report.interval.sec, oozie.metrics.port\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.external_monitoring.type</name>\n        <value>graphite</value>\n        <description>\n            The name of the server to which we want to send the metrics, would be graphite or ganglia.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.external_monitoring.address</name>\n        <value>http://localhost:2020</value>\n    </property>\n\n    <property>\n        <name>oozie.external_monitoring.metricPrefix</name>\n        <value>oozie</value>\n    </property>\n\n    <property>\n        <name>oozie.external_monitoring.reporterIntervalSecs</name>\n        <value>60</value>\n    </property>\n\n    <property>\n        <name>oozie.jmx_monitoring.enable</name>\n        
<value>false</value>\n        <description>\n            If the oozie functional metrics need to be exposed via the JMX interface, set it to true.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.uber.jar.enable</name>\n        <value>false</value>\n        <description>\n            If true, enables the oozie.mapreduce.uber.jar mapreduce workflow configuration property, which is used to specify an\n            uber jar in HDFS.  Submitting a workflow with an uber jar requires at least Hadoop 2.2.0 or 1.2.0.  If false, workflows\n            which specify the oozie.mapreduce.uber.jar configuration property will fail.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.dependency.deduplicate</name>\n        <value>false</value>\n        <description>\n            If true, then Oozie will remove all the duplicates from the list of dependencies when they are passed to\n            the jobtracker. Higher priority dependencies will remain as the following:\n            Original list: \"/a/a.jar#a.jar,/a/b.jar#b.jar,/b/a.jar,/b/b.jar,/c/d.jar\"\n            Deduplicated list: \"/a/a.jar#a.jar,/a/b.jar#b.jar,/c/d.jar\"\n            In other words, the priority order is: action jar > user workflow libs > action libs > system lib,\n            where the dependency with higher priority is used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.processing.timezone</name>\n        <value>UTC</value>\n        <description>\n            Oozie server timezone. Valid values are UTC and GMT(+/-)####, for example 'GMT+0530' would be India\n            timezone. All dates parsed and generated by Oozie Coordinator/Bundle will be in the specified\n            timezone. The default value of 'UTC' should not be changed under normal circumstances. 
If for any reason\n            it is changed, note that GMT(+/-)#### timezones do not observe DST changes.\n        </description>\n    </property>\n\n    <!-- Base Oozie URL: <SCHEME>://<HOST>:<PORT>/<CONTEXT> -->\n\n    <property>\n        <name>oozie.base.url</name>\n        <value>http://localhost:8080/oozie</value>\n        <description>\n            Base Oozie URL.\n        </description>\n    </property>\n\n    <!-- Services -->\n\n    <property>\n        <name>oozie.system.id</name>\n        <value>oozie-${user.name}</value>\n        <description>\n            The Oozie system ID.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.systemmode</name>\n        <value>NORMAL</value>\n        <description>\n            System mode for Oozie at startup.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.delete.runtime.dir.on.shutdown</name>\n        <value>true</value>\n        <description>\n            Whether the runtime directory should be deleted when Oozie shuts down.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.services</name>\n        <value>\n            org.apache.oozie.service.SchedulerService,\n            org.apache.oozie.service.MetricsInstrumentationService,\n            org.apache.oozie.service.MemoryLocksService,\n            org.apache.oozie.service.UUIDService,\n            org.apache.oozie.service.ELService,\n            org.apache.oozie.service.AuthorizationService,\n            org.apache.oozie.service.UserGroupInformationService,\n            org.apache.oozie.service.HadoopAccessorService,\n            org.apache.oozie.service.JobsConcurrencyService,\n            org.apache.oozie.service.URIHandlerService,\n            org.apache.oozie.service.DagXLogInfoService,\n            org.apache.oozie.service.SchemaService,\n            org.apache.oozie.service.LiteWorkflowAppService,\n            org.apache.oozie.service.JPAService,\n            
org.apache.oozie.service.StoreService,\n            org.apache.oozie.service.DBLiteWorkflowStoreService,\n            org.apache.oozie.service.CallbackService,\n            org.apache.oozie.service.ActionService,\n            org.apache.oozie.service.ShareLibService,\n            org.apache.oozie.service.CallableQueueService,\n            org.apache.oozie.service.ActionCheckerService,\n            org.apache.oozie.service.RecoveryService,\n            org.apache.oozie.service.PurgeService,\n            org.apache.oozie.service.CoordinatorEngineService,\n            org.apache.oozie.service.BundleEngineService,\n            org.apache.oozie.service.DagEngineService,\n            org.apache.oozie.service.CoordMaterializeTriggerService,\n            org.apache.oozie.service.StatusTransitService,\n            org.apache.oozie.service.PauseTransitService,\n            org.apache.oozie.service.GroupsService,\n            org.apache.oozie.service.ProxyUserService,\n            org.apache.oozie.service.XLogStreamingService,\n            org.apache.oozie.service.JvmPauseMonitorService,\n            org.apache.oozie.service.SparkConfigurationService,\n            org.apache.oozie.service.SchemaCheckerService\n        </value>\n        <description>\n            All services to be created and managed by Oozie Services singleton.\n            Class names must be separated by commas.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.services.ext</name>\n        <value> </value>\n        <description>\n            To add/replace services defined in 'oozie.services' with custom implementations.\n            Class names must be separated by commas.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.XLogStreamingService.buffer.len</name>\n        <value>4096</value>\n        <description>4K buffer for streaming the logs progressively\n        </description>\n    </property>\n    <property>\n        
<name>oozie.service.XLogStreamingService.error.buffer.len</name>\n        <value>2048</value>\n        <description>2K buffer for streaming the error logs\n            progressively\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.XLogStreamingService.audit.buffer.len</name>\n        <value>3</value>\n        <description>Number of lines for streaming the audit logs\n            progressively\n        </description>\n    </property>\n\n    <!-- HCatAccessorService -->\n    <property>\n        <name>oozie.service.HCatAccessorService.jmsconnections</name>\n        <value>\n            default=java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://localhost:61616;connectionFactoryNames#ConnectionFactory\n        </value>\n        <description>\n            Specify the map  of endpoints to JMS configuration properties. In general, endpoint\n            identifies the HCatalog server URL. \"default\" is used if no endpoint is mentioned\n            in the query. If some JMS property is not defined, the system will use the property\n            defined jndi.properties. jndi.properties files is retrieved from the application classpath.\n            Mapping rules can also be provided for mapping Hcatalog servers to corresponding JMS providers.\n            hcat://${1}.${2}.server.com:8020=java.naming.factory.initial#Dummy.Factory;java.naming.provider.url#tcp://broker.${2}:61616\n        </description>\n    </property>\n\n    <!-- HCatAccessorService -->\n    <property>\n        <name>oozie.service.HCatAccessorService.jms.use.canonical.hostname</name>\n        <value>false</value>\n        <description>The JMS messages published from a HCat server usually contains the canonical hostname of the HCat server\n            in standalone mode or the canonical name of the VIP in a case of multiple nodes in a HA setup. 
This setting is used\n            to translate the HCat server hostname or its aliases specified by the user in the HCat URIs of the coordinator dependencies\n            to its canonical name so that they can be exactly matched with the JMS dependency availability notifications.\n        </description>\n    </property>\n\n    <!-- TopicService -->\n\n    <property>\n        <name>oozie.service.JMSTopicService.topic.name</name>\n        <value>\n            default=${username}\n        </value>\n        <description>\n            Topic options are ${username} or ${jobId} or a fixed string which can be specified as default or for a\n            particular job type.\n            For e.g To have a fixed string topic for workflows, coordinators and bundles,\n            specify in the following comma-separated format: {jobtype1}={some_string1}, {jobtype2}={some_string2}\n            where job type can be WORKFLOW, COORDINATOR or BUNDLE.\n            e.g. Following defines topic for workflow job, workflow action, coordinator job, coordinator action,\n            bundle job and bundle action\n            WORKFLOW=workflow,\n            COORDINATOR=coordinator,\n            BUNDLE=bundle\n            For jobs with no defined topic, default topic will be ${username}\n        </description>\n    </property>\n\n    <!-- JMS Producer connection -->\n    <property>\n        <name>oozie.jms.producer.connection.properties</name>\n        <value>java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://localhost:61616;connectionFactoryNames#ConnectionFactory</value>\n    </property>\n\n    <!-- JMSAccessorService -->\n    <property>\n        <name>oozie.service.JMSAccessorService.connectioncontext.impl</name>\n        <value>\n            org.apache.oozie.jms.DefaultConnectionContext\n        </value>\n        <description>\n            Specifies the Connection Context implementation\n        </description>\n    
</property>\n\n\n    <!-- ConfigurationService -->\n\n    <property>\n        <name>oozie.service.ConfigurationService.ignore.system.properties</name>\n        <value>\n            oozie.service.AuthorizationService.security.enabled\n        </value>\n        <description>\n            Specifies \"oozie.*\" properties that cannot be overridden via Java system properties.\n            Property names must be separated by commas.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ConfigurationService.verify.available.properties</name>\n        <value>true</value>\n        <description>\n            Specifies whether the available configurations check is enabled or not.\n        </description>\n    </property>\n\n    <!-- SchedulerService -->\n\n    <property>\n        <name>oozie.service.SchedulerService.threads</name>\n        <value>10</value>\n        <description>\n            The number of threads to be used by the SchedulerService to run daemon tasks.\n            If maxed out, scheduled daemon tasks will be queued up and delayed until threads become available.\n        </description>\n    </property>\n\n    <!--  AuthorizationService -->\n\n    <property>\n        <name>oozie.service.AuthorizationService.authorization.enabled</name>\n        <value>false</value>\n        <description>\n            Specifies whether security (user name/admin role) is enabled or not.\n            If disabled any user can manage Oozie system and manage any job.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AuthorizationService.default.group.as.acl</name>\n        <value>false</value>\n        <description>\n            Enables old behavior where the User's default group is the job's ACL.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.serviceAuthorizationService.admin.users</name>\n        <value></value>\n        <description>\n            Comma separated list of users with 
admin access for the Authorization service.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AuthorizationService.system.info.authorized.users</name>\n        <value></value>\n        <description>\n            Comma separated list of users authorized for web service calls to get system configuration.\n        </description>\n    </property>\n\n    <!-- InstrumentationService -->\n\n    <property>\n        <name>oozie.service.InstrumentationService.logging.interval</name>\n        <value>60</value>\n        <description>\n            Interval, in seconds, at which instrumentation should be logged by the InstrumentationService.\n            If set to 0 it will not log instrumentation data.\n        </description>\n    </property>\n\n    <!-- PurgeService -->\n    <property>\n        <name>oozie.service.PurgeService.older.than</name>\n        <value>30</value>\n        <description>\n            Completed workflow jobs older than this value, in days, will be purged by the PurgeService.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.PurgeService.coord.older.than</name>\n        <value>7</value>\n        <description>\n            Completed coordinator jobs older than this value, in days, will be purged by the PurgeService.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.PurgeService.bundle.older.than</name>\n        <value>7</value>\n        <description>\n            Completed bundle jobs older than this value, in days, will be purged by the PurgeService.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.PurgeService.purge.old.coord.action</name>\n        <value>false</value>\n        <description>\n            Whether to purge completed workflows and their corresponding coordinator actions\n            of long running coordinator jobs if the completed workflow jobs are older than the value\n            
specified in oozie.service.PurgeService.older.than.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.PurgeService.purge.limit</name>\n        <value>100</value>\n        <description>\n            Batch size of individual DB operations used for building the list of items\n            to be purged and performing actual purge.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.PurgeService.purge.interval</name>\n        <value>3600</value>\n        <description>\n            Interval at which the purge service will run, in seconds.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.PurgeService.enable.command.line</name>\n        <value>true</value>\n        <description>\n            Enable/Disable oozie admin purge command. By default, it is enabled.\n        </description>\n    </property>\n\n    <!-- RecoveryService -->\n\n    <property>\n        <name>oozie.service.RecoveryService.wf.actions.older.than</name>\n        <value>120</value>\n        <description>\n            Age of the actions which are eligible to be queued for recovery, in seconds.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.RecoveryService.wf.actions.created.time.interval</name>\n        <value>7</value>\n        <description>\n            Created time period of the actions which are eligible to be queued for recovery in days.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.RecoveryService.callable.batch.size</name>\n        <value>10</value>\n        <description>\n            This value determines the number of callable which will be batched together\n            to be executed by a single thread.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.RecoveryService.push.dependency.interval</name>\n        <value>200</value>\n        <description>\n            This 
value determines the delay for push missing dependency command queueing\n            in Recovery Service\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.RecoveryService.interval</name>\n        <value>60</value>\n        <description>\n            Interval at which the RecoveryService will run, in seconds.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.RecoveryService.coord.older.than</name>\n        <value>600</value>\n        <description>\n            Age of the Coordinator jobs or actions which are eligible to be queued for recovery, in seconds.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.RecoveryService.bundle.older.than</name>\n        <value>600</value>\n        <description>\n            Age of the Bundle jobs which are eligible to be queued for recovery, in seconds.\n        </description>\n    </property>\n\n    <!-- CallableQueueService -->\n\n    <property>\n        <name>oozie.service.CallableQueueService.queue.size</name>\n        <value>10000</value>\n        <description>Max callable queue size</description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.threads</name>\n        <value>10</value>\n        <description>Number of threads used for executing callables</description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.delayedcallable.threads</name>\n        <value>1</value>\n        <description>\n            The number of threads where delayed tasks are executed. Upon expiration, the tasks are immediately\n            inserted into the main queue to properly handle priorities. 
This means that no actual business logic\n            is executed in this thread pool, so under normal circumstances, this value can be set to a low number.\n\n            Note that this property is completely unrelated to oozie.service.SchedulerService.threads which\n            tells how many scheduled background tasks can run in parallel at the same time (like PurgeService,\n            StatusTransitService, etc).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.queue.newImpl</name>\n        <value>true</value>\n        <description>\n            If set to true, then CallableQueueService will use a faster, less CPU-intensive queuing mechanism to execute\n            asynchronous tasks internally.\n            The old implementation generates noticeable CPU load even if Oozie is completely idle, especially when\n            oozie.service.CallableQueueService.threads is set to a large number. The previous queuing mechanism is kept as a\n            fallback option.\n            This is an experimental feature in Oozie 5.1.0 that needs to be re-evaluated upon an upcoming minor release,\n            meaning the old implementation and this feature flag will also be removed.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.queue.awaitTermination.timeout.seconds</name>\n        <value>30</value>\n        <description>\n            Number of seconds while awaiting termination of ThreadPoolExecutor instances when CallableQueueService#destroy()\n            is called, in seconds.\n            The more elements you tend to have in your callable queue, the more you want CallableQueueService to wait\n            before shutting down its thread pools.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.callable.concurrency</name>\n        <value>3</value>\n        <description>\n            Maximum 
concurrency for a given callable type.\n            Each command is a callable type (submit, start, run, signal, job, jobs, suspend,resume, etc).\n            Each action type is a callable type (Map-Reduce, Pig, SSH, FS, sub-workflow, etc).\n            All commands that use action executors (action-start, action-end, action-kill and action-check) use\n            the action type as the callable type.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.callable.next.eligible</name>\n        <value>true</value>\n        <description>\n            If true, when a callable in the queue has already reached max concurrency,\n            Oozie continuously finds the next one which has not yet reached max concurrency.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.InterruptMapMaxSize</name>\n        <value>500</value>\n        <description>\n            Maximum size of the Interrupt Map; the interrupt element will not be inserted in the map if the size is exceeded.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.InterruptTypes</name>\n        <value>kill,resume,suspend,bundle_kill,bundle_resume,bundle_suspend,coord_kill,coord_change,coord_resume,coord_suspend</value>\n        <description>\n            The types of XCommands that are considered to be of Interrupt type\n        </description>\n    </property>\n\n    <!--  CoordMaterializeTriggerService -->\n\n    <property>\n        <name>oozie.service.CoordMaterializeTriggerService.lookup.interval\n        </name>\n        <value>300</value>\n        <description> Coordinator Job Lookup interval.(in seconds).\n        </description>\n    </property>\n\n    <!-- Enable this if you want different scheduling interval for CoordMaterializeTriggerService.\n    By default it will use lookup interval as scheduling interval\n    <property>\n        
<name>oozie.service.CoordMaterializeTriggerService.scheduling.interval\n        </name>\n        <value>300</value>\n        <description> The frequency at which the CoordMaterializeTriggerService will run.</description>\n    </property>\n    -->\n\n    <property>\n        <name>oozie.service.CoordMaterializeTriggerService.materialization.window\n        </name>\n        <value>3600</value>\n        <description> Coordinator Job Lookup command materialized each\n            job for this next \"window\" duration\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CoordMaterializeTriggerService.callable.batch.size</name>\n        <value>10</value>\n        <description>\n            This value determines the number of callable which will be batched together\n            to be executed by a single thread.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CoordMaterializeTriggerService.materialization.system.limit</name>\n        <value>50</value>\n        <description>\n            This value determines the number of coordinator jobs to be materialized at a given time.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.normal.default.timeout\n        </name>\n        <value>120</value>\n        <description>Default timeout for a coordinator action input check (in minutes) for normal job.\n            -1 means infinite timeout</description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.default.max.timeout\n        </name>\n        <value>86400</value>\n        <description>Default maximum timeout for a coordinator action input check (in minutes). 
86400= 60days\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.input.check.requeue.interval\n        </name>\n        <value>60000</value>\n        <description>Command re-queue interval for coordinator data input check (in millisecond).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.input.check.requeue.interval.additional.delay</name>\n        <value>0</value>\n        <description>This value (in seconds) will be added into oozie.service.coord.input.check.requeue.interval and resulting value\n            will be the requeue interval for the actions which are waiting for a long time without any input.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.push.check.requeue.interval\n        </name>\n        <value>600000</value>\n        <description>Command re-queue interval for push dependencies (in millisecond).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.default.concurrency\n        </name>\n        <value>1</value>\n        <description>Default concurrency for a coordinator job to determine how many maximum action should\n            be executed at the same time. -1 means infinite concurrency.</description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.default.throttle\n        </name>\n        <value>12</value>\n        <description>Default throttle for a coordinator job to determine how many maximum action should\n            be in WAITING state at the same time.</description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.materialization.throttling.factor\n        </name>\n        <value>0.05</value>\n        <description>Determine how many maximum actions should be in WAITING state for a single job at any time. 
The value is calculated by\n            this factor X the total queue size.</description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.check.maximum.frequency</name>\n        <value>true</value>\n        <description>\n            When true, Oozie will reject any coordinators with a frequency faster than 5 minutes.  It is not recommended to disable\n            this check or submit coordinators with frequencies faster than 5 minutes: doing so can cause unintended behavior and\n            additional system stress.\n        </description>\n    </property>\n\n    <!-- ELService -->\n    <!--  List of supported groups for ELService -->\n    <property>\n        <name>oozie.service.ELService.groups</name>\n        <value>job-submit,workflow,wf-sla-submit,coord-job-submit-freq,coord-job-submit-nofuncs,coord-job-submit-data,coord-job-submit-instances,coord-sla-submit,coord-action-create,coord-action-create-inst,coord-sla-create,coord-action-start,coord-job-wait-timeout,bundle-submit,coord-job-submit-initial-instance</value>\n        <description>List of groups for different ELServices</description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.constants.job-submit</name>\n        <value>\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.job-submit</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.job-submit</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience 
property to add extensions without having to include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.job-submit</name>\n        <value> </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions without having to include all the built in ones.\n        </description>\n    </property>\n\n    <!-- Workflow specifics -->\n    <property>\n        <name>oozie.service.ELService.constants.workflow</name>\n        <value>\n            KB=org.apache.oozie.util.ELConstantsFunctions#KB,\n            MB=org.apache.oozie.util.ELConstantsFunctions#MB,\n            GB=org.apache.oozie.util.ELConstantsFunctions#GB,\n            TB=org.apache.oozie.util.ELConstantsFunctions#TB,\n            PB=org.apache.oozie.util.ELConstantsFunctions#PB,\n            RECORDS=org.apache.oozie.action.hadoop.HadoopELFunctions#RECORDS,\n            MAP_IN=org.apache.oozie.action.hadoop.HadoopELFunctions#MAP_IN,\n            MAP_OUT=org.apache.oozie.action.hadoop.HadoopELFunctions#MAP_OUT,\n            REDUCE_IN=org.apache.oozie.action.hadoop.HadoopELFunctions#REDUCE_IN,\n            REDUCE_OUT=org.apache.oozie.action.hadoop.HadoopELFunctions#REDUCE_OUT,\n            GROUPS=org.apache.oozie.action.hadoop.HadoopELFunctions#GROUPS\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.workflow</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all 
the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.workflow</name>\n        <value>\n            firstNotNull=org.apache.oozie.util.ELConstantsFunctions#firstNotNull,\n            concat=org.apache.oozie.util.ELConstantsFunctions#concat,\n            replaceAll=org.apache.oozie.util.ELConstantsFunctions#replaceAll,\n            appendAll=org.apache.oozie.util.ELConstantsFunctions#appendAll,\n            trim=org.apache.oozie.util.ELConstantsFunctions#trim,\n            timestamp=org.apache.oozie.util.ELConstantsFunctions#timestamp,\n            urlEncode=org.apache.oozie.util.ELConstantsFunctions#urlEncode,\n            toJsonStr=org.apache.oozie.util.ELConstantsFunctions#toJsonStr,\n            toPropertiesStr=org.apache.oozie.util.ELConstantsFunctions#toPropertiesStr,\n            toConfigurationStr=org.apache.oozie.util.ELConstantsFunctions#toConfigurationStr,\n            wf:id=org.apache.oozie.DagELFunctions#wf_id,\n            wf:name=org.apache.oozie.DagELFunctions#wf_name,\n            wf:appPath=org.apache.oozie.DagELFunctions#wf_appPath,\n            wf:conf=org.apache.oozie.DagELFunctions#wf_conf,\n            wf:user=org.apache.oozie.DagELFunctions#wf_user,\n            wf:group=org.apache.oozie.DagELFunctions#wf_group,\n            wf:callback=org.apache.oozie.DagELFunctions#wf_callback,\n            wf:transition=org.apache.oozie.DagELFunctions#wf_transition,\n            wf:lastErrorNode=org.apache.oozie.DagELFunctions#wf_lastErrorNode,\n            wf:errorCode=org.apache.oozie.DagELFunctions#wf_errorCode,\n            wf:errorMessage=org.apache.oozie.DagELFunctions#wf_errorMessage,\n            wf:run=org.apache.oozie.DagELFunctions#wf_run,\n            wf:actionData=org.apache.oozie.DagELFunctions#wf_actionData,\n            wf:actionExternalId=org.apache.oozie.DagELFunctions#wf_actionExternalId,\n            
wf:actionTrackerUri=org.apache.oozie.DagELFunctions#wf_actionTrackerUri,\n            wf:actionExternalStatus=org.apache.oozie.DagELFunctions#wf_actionExternalStatus,\n            hadoop:counters=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_counters,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf,\n            fs:exists=org.apache.oozie.action.hadoop.FsELFunctions#fs_exists,\n            fs:isDir=org.apache.oozie.action.hadoop.FsELFunctions#fs_isDir,\n            fs:dirSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_dirSize,\n            fs:fileSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_fileSize,\n            fs:blockSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_blockSize,\n            hcat:exists=org.apache.oozie.coord.HCatELFunctions#hcat_exists\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.WorkflowAppService.WorkflowDefinitionMaxLength</name>\n        <value>100000</value>\n        <description>\n            The maximum length of the workflow definition in bytes.\n            An error will be reported if the length exceeds the given maximum.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.workflow</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!-- Resolve SLA information during Workflow job submission -->\n    <property>\n        <name>oozie.service.ELService.constants.wf-sla-submit</name>\n        <value>\n            
MINUTES=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_MINUTES,\n            HOURS=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_HOURS,\n            DAYS=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_DAYS\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.wf-sla-submit</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.wf-sla-submit</name>\n        <value> </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n    <property>\n        <name>oozie.service.ELService.ext.functions.wf-sla-submit</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!-- Coordinator specifics -->\n    <!-- Phase 1 resolution during job submission -->\n    <!-- EL Evaluator setup to resolve mainly frequency tags -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-job-submit-freq</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    
</property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-job-submit-freq</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-job-submit-freq</name>\n        <value>\n            coord:days=org.apache.oozie.coord.CoordELFunctions#ph1_coord_days,\n            coord:months=org.apache.oozie.coord.CoordELFunctions#ph1_coord_months,\n            coord:hours=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hours,\n            coord:minutes=org.apache.oozie.coord.CoordELFunctions#ph1_coord_minutes,\n            coord:endOfDays=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfDays,\n            coord:endOfMonths=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfMonths,\n            coord:endOfWeeks=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfWeeks,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-job-submit-initial-instance</name>\n        <value>\n            ${oozie.service.ELService.functions.coord-job-submit-nofuncs},\n            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset\n        
</value>\n        <description>\n            EL functions for coord job submit initial instance, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-job-submit-freq</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.constants.coord-job-wait-timeout</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-job-wait-timeout</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions without having to include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-job-wait-timeout</name>\n        <value>\n            coord:days=org.apache.oozie.coord.CoordELFunctions#ph1_coord_days,\n            coord:months=org.apache.oozie.coord.CoordELFunctions#ph1_coord_months,\n            coord:hours=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hours,\n            coord:minutes=org.apache.oozie.coord.CoordELFunctions#ph1_coord_minutes,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is 
[PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-job-wait-timeout</name>\n        <value> </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions without having to include all the built in ones.\n        </description>\n    </property>\n\n    <!-- EL Evaluator setup to resolve mainly all constants/variables - no EL functions are resolved -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-job-submit-nofuncs</name>\n        <value>\n            MINUTE=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTE,\n            HOUR=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOUR,\n            DAY=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAY,\n            MONTH=org.apache.oozie.coord.CoordELConstants#SUBMIT_MONTH,\n            YEAR=org.apache.oozie.coord.CoordELConstants#SUBMIT_YEAR\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-job-submit-nofuncs</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-job-submit-nofuncs</name>\n        <value>\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            
hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-job-submit-nofuncs</name>\n        <value> </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!-- EL Evaluator setup to **check** whether instances/start-instance/end-instances are valid\n     no EL functions will be resolved -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-job-submit-instances</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-job-submit-instances</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-job-submit-instances</name>\n        <value>\n            coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hoursInDay_echo,\n            coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph1_coord_daysInMonth_echo,\n            
coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_tzOffset_echo,\n            coord:current=org.apache.oozie.coord.CoordELFunctions#ph1_coord_current_echo,\n            coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_currentRange_echo,\n            coord:offset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_offset_echo,\n            coord:latest=org.apache.oozie.coord.CoordELFunctions#ph1_coord_latest_echo,\n            coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_latestRange_echo,\n            coord:future=org.apache.oozie.coord.CoordELFunctions#ph1_coord_future_echo,\n            coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_futureRange_echo,\n            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,\n            coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph1_coord_absolute_echo,\n            coord:endOfMonths=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfMonths_echo,\n            coord:endOfWeeks=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfWeeks_echo,\n            coord:endOfDays=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfDays_echo,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf,\n            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        
<name>oozie.service.ELService.ext.functions.coord-job-submit-instances</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!-- EL Evaluator setup to **check** whether dataIn and dataOut are valid\n     no EL functions will be resolved -->\n\n    <property>\n        <name>oozie.service.ELService.constants.coord-job-submit-data</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-job-submit-data</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-job-submit-data</name>\n        <value>\n            coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataIn_echo,\n            coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo,\n            coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_wrap,\n            coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap,\n            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo,\n        
    coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,\n            coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo,\n            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo,\n            coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseIn_echo,\n            coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo,\n            coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableIn_echo,\n            coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo,\n            coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionFilter_echo,\n            coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMin_echo,\n            coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMax_echo,\n            coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitions_echo,\n            coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitions_echo,\n            coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitionValue_echo,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-job-submit-data</name>\n        <value>\n        </value>\n        <description>\n       
     EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!-- Resolve SLA information during Coordinator job submission -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-sla-submit</name>\n        <value>\n            MINUTES=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTES,\n            HOURS=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOURS,\n            DAYS=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAYS\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-sla-submit</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.bundle-submit</name>\n        <value>bundle:conf=org.apache.oozie.bundle.BundleELFunctions#bundle_conf</value>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-sla-submit</name>\n        <value>\n            coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo,\n            coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_fixed,\n            coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap,\n            
coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo,\n            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,\n            coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo,\n            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo,\n            coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo,\n            coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo,\n            coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitions_echo,\n            coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitionValue_echo,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-sla-submit</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!--  Action creation for coordinator -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-action-create</name>\n        
<value>\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-action-create</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-action-create</name>\n        <value>\n            coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph2_coord_hoursInDay,\n            coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph2_coord_daysInMonth,\n            coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_tzOffset,\n            coord:current=org.apache.oozie.coord.CoordELFunctions#ph2_coord_current,\n            coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_currentRange,\n            coord:offset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_offset,\n            coord:latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo,\n            coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latestRange_echo,\n            coord:future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo,\n            coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_futureRange_echo,\n            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actionId,\n            coord:name=org.apache.oozie.coord.CoordELFunctions#ph2_coord_name,\n            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime,\n            
coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_epochTime,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_echo,\n            coord:endOfMonths=org.apache.oozie.coord.CoordELFunctions#ph2_coord_endOfMonths_echo,\n            coord:endOfWeeks=org.apache.oozie.coord.CoordELFunctions#ph2_coord_endOfWeeks_echo,\n            coord:endOfDays=org.apache.oozie.coord.CoordELFunctions#ph2_coord_endOfDays_echo,\n            coord:absoluteRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_range,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-action-create</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n\n    <!--  Action creation for coordinator used to only evaluate instance number like ${current (daysInMonth())}. 
current will be echo-ed -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-action-create-inst</name>\n        <value>\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-action-create-inst</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-action-create-inst</name>\n        <value>\n            coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph2_coord_hoursInDay,\n            coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph2_coord_daysInMonth,\n            coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_tzOffset,\n            coord:current=org.apache.oozie.coord.CoordELFunctions#ph2_coord_current_echo,\n            coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_currentRange_echo,\n            coord:offset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_offset_echo,\n            coord:latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo,\n            coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latestRange_echo,\n            coord:future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo,\n            coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_futureRange_echo,\n            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime,\n            
coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_epochTime,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_echo,\n            coord:absoluteRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_range,\n            coord:endOfMonths=org.apache.oozie.coord.CoordELFunctions#ph2_coord_endOfMonths_echo,\n            coord:endOfWeeks=org.apache.oozie.coord.CoordELFunctions#ph2_coord_endOfWeeks_echo,\n            coord:endOfDays=org.apache.oozie.coord.CoordELFunctions#ph2_coord_endOfDays_echo,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf,\n            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-action-create-inst</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!-- Resolve SLA information during Action creation/materialization -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-sla-create</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    
<property>\n        <name>oozie.service.ELService.ext.constants.coord-sla-create</name>\n        <value>\n            MINUTES=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTES,\n            HOURS=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOURS,\n            DAYS=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAYS</value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-sla-create</name>\n        <value>\n            coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut,\n            coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_nominalTime,\n            coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actualTime,\n            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset,\n            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime,\n            coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_epochTime,\n            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actionId,\n            coord:name=org.apache.oozie.coord.CoordELFunctions#ph2_coord_name,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseOut,\n            coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableOut,\n            
coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitions,\n            coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitionValue,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-sla-create</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!--  Action start for coordinator -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-action-start</name>\n        <value>\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-action-start</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-action-start</name>\n        <value>\n            coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph3_coord_hoursInDay,\n            
coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph3_coord_daysInMonth,\n            coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_tzOffset,\n            coord:latest=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latest,\n            coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latestRange,\n            coord:future=org.apache.oozie.coord.CoordELFunctions#ph3_coord_future,\n            coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph3_coord_futureRange,\n            coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataIn,\n            coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut,\n            coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_nominalTime,\n            coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_actualTime,\n            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateOffset,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateTzOffset,\n            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_formatTime,\n            coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_epochTime,\n            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph3_coord_actionId,\n            coord:name=org.apache.oozie.coord.CoordELFunctions#ph3_coord_name,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseIn,\n            coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseOut,\n            coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableIn,\n            coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableOut,\n            
coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionFilter,\n            coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionMin,\n            coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionMax,\n            coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitions,\n            coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitions,\n            coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitionValue,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-action-start</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.latest-el.use-current-time</name>\n        <value>false</value>\n        <description>\n            Determines whether the latest dependency is resolved using the current time or the action creation time.\n            This is for backward compatibility with older Oozie behaviour.\n        </description>\n    </property>\n\n    <!-- UUIDService -->\n\n    <property>\n        <name>oozie.service.UUIDService.generator</name>\n        <value>counter</value>\n        <description>\n            random : generated UUIDs will be random strings.\n            counter: generated UUIDs 
will be a counter suffixed with the system startup time.\n        </description>\n    </property>\n\n    <!-- DBLiteWorkflowStoreService -->\n\n    <property>\n        <name>oozie.service.DBLiteWorkflowStoreService.status.metrics.collection.interval</name>\n        <value>5</value>\n        <description> Workflow Status metrics collection interval in minutes.</description>\n    </property>\n\n    <property>\n        <name>oozie.service.DBLiteWorkflowStoreService.status.metrics.window</name>\n        <value>3600</value>\n        <description>\n            Workflow Status metrics collection window in seconds. Workflow status will be instrumented for the window.\n        </description>\n    </property>\n\n    <!-- DB Schema Info, used by DBLiteWorkflowStoreService -->\n\n    <property>\n        <name>oozie.db.schema.name</name>\n        <value>oozie</value>\n        <description>\n            Oozie database name\n        </description>\n    </property>\n\n    <!-- Database import CLI: batch size -->\n\n    <property>\n        <name>oozie.db.import.batch.size</name>\n        <value>1000</value>\n        <description>\n            How many entities are imported in a single transaction by the Oozie DB import CLI tool to avoid OutOfMemoryErrors.\n        </description>\n    </property>\n\n    <!-- StoreService -->\n\n    <property>\n        <name>oozie.service.JPAService.create.db.schema</name>\n        <value>false</value>\n        <description>\n            Creates Oozie DB.\n\n            If set to true, it creates the DB schema if it does not exist. If the DB schema exists, this is a NOP.\n            If set to false, it does not create the DB schema. 
If the DB schema does not exist, startup fails.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.validate.db.connection</name>\n        <value>true</value>\n        <description>\n            Validates DB connections from the DB connection pool.\n            If the 'oozie.service.JPAService.create.db.schema' property is set to true, this property is ignored.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.validate.db.connection.eviction.interval</name>\n        <value>300000</value>\n        <description>\n            Validates DB connections from the DB connection pool.\n            When the connection validation setting 'TestWhileIdle' is true, the number of milliseconds to sleep\n            between runs of the idle object evictor thread.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.validate.db.connection.eviction.num</name>\n        <value>10</value>\n        <description>\n            Validates DB connections from the DB connection pool.\n            When the connection validation setting 'TestWhileIdle' is true, the number of objects to examine during\n            each run of the idle object evictor thread.\n        </description>\n    </property>\n\n\n    <property>\n        <name>oozie.service.JPAService.connection.data.source</name>\n        <value>org.apache.oozie.util.db.BasicDataSourceWrapper</value>\n        <description>\n            DataSource to be used for connection pooling. 
If you want the property\n            openJpa.connectionProperties=\"DriverClassName=...\" to have a real effect, set this to\n            org.apache.oozie.util.db.BasicDataSourceWrapper.\n            Otherwise, a DBCP bug (https://issues.apache.org/jira/browse/DBCP-333) prevents the JDBC driver\n            setting from taking effect when a custom class loader is used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.connection.properties</name>\n        <value> </value>\n        <description>\n            DataSource connection properties.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.jdbc.driver</name>\n        <value>org.apache.derby.jdbc.EmbeddedDriver</value>\n        <description>\n            JDBC driver class.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.jdbc.url</name>\n        <value>jdbc:derby:${oozie.data.dir}/${oozie.db.schema.name}-db;create=true</value>\n        <description>\n            JDBC URL.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.jdbc.username</name>\n        <value>sa</value>\n        <description>\n            DB user name.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.jdbc.password</name>\n        <value> </value>\n        <description>\n            DB user password.\n\n            IMPORTANT: if the password is empty, leave a 1-space string; the service trims the value,\n            and if it is empty, Configuration assumes it is NULL.\n\n            IMPORTANT: if the StoreServicePasswordService is active, it will reset this value with the value given in\n            the console.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.pool.max.active.conn</name>\n        <value>10</value>\n        <description>\n            Max number of 
connections.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.openjpa.BrokerImpl</name>\n        <value>non-finalizing</value>\n        <description>\n            The default OpenJPAEntityManager implementation automatically closes itself during instance finalization.\n            This guards against accidental resource leaks that may occur if a developer fails to explicitly close\n            EntityManagers when finished with them, but it also incurs a scalability bottleneck, since the JVM must\n            perform synchronization during instance creation, and since the finalizer thread will have more instances to monitor.\n            To avoid this overhead, set the openjpa.BrokerImpl configuration property to non-finalizing.\n            To use the default implementation, set it to an empty space.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.retry.initial-wait-time.ms</name>\n        <value>100</value>\n        <description>\n            Initial wait time in milliseconds between the first failed database operation and the re-attempted operation. 
The wait\n            time is doubled at each retry.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.retry.maximum-wait-time.ms</name>\n        <value>30000</value>\n        <description>\n            Maximum wait time between database retry attempts.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.retry.max-retries</name>\n        <value>10</value>\n        <description>\n            Maximum number of retries for a failed database operation.\n        </description>\n    </property>\n\n    <!-- SchemaService -->\n\n    <property>\n        <name>oozie.service.SchemaService.wf.schemas</name>\n        <value>\n            oozie-common-1.0.xsd,\n            oozie-workflow-0.1.xsd,oozie-workflow-0.2.xsd,oozie-workflow-0.2.5.xsd,oozie-workflow-0.3.xsd,oozie-workflow-0.4.xsd,\n            oozie-workflow-0.4.5.xsd,oozie-workflow-0.5.xsd,oozie-workflow-1.0.xsd,\n            shell-action-0.1.xsd,shell-action-0.2.xsd,shell-action-0.3.xsd,shell-action-1.0.xsd,\n            email-action-0.1.xsd,email-action-0.2.xsd,\n            hive-action-0.2.xsd,hive-action-0.3.xsd,hive-action-0.4.xsd,hive-action-0.5.xsd,hive-action-0.6.xsd,hive-action-1.0.xsd,\n            sqoop-action-0.2.xsd,sqoop-action-0.3.xsd,sqoop-action-0.4.xsd,sqoop-action-1.0.xsd,\n            ssh-action-0.1.xsd,ssh-action-0.2.xsd,\n            distcp-action-0.1.xsd,distcp-action-0.2.xsd,distcp-action-1.0.xsd,\n            oozie-sla-0.1.xsd,oozie-sla-0.2.xsd,\n            hive2-action-0.1.xsd,hive2-action-0.2.xsd,hive2-action-1.0.xsd,\n            spark-action-0.1.xsd,spark-action-0.2.xsd,spark-action-1.0.xsd,\n            git-action-1.0.xsd\n        </value>\n        <description>\n            List of schemas for workflows (separated by commas).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaService.wf.ext.schemas</name>\n        <value> </value>\n        
<description>\n            List of additional schemas for workflows (separated by commas).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaService.coord.schemas</name>\n        <value>\n            oozie-coordinator-0.1.xsd,oozie-coordinator-0.2.xsd,oozie-coordinator-0.3.xsd,oozie-coordinator-0.4.xsd,\n            oozie-coordinator-0.5.xsd,oozie-sla-0.1.xsd,oozie-sla-0.2.xsd\n        </value>\n        <description>\n            List of schemas for coordinators (separated by commas).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaService.coord.ext.schemas</name>\n        <value> </value>\n        <description>\n            List of additional schemas for coordinators (separated by commas).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaService.bundle.schemas</name>\n        <value>\n            oozie-bundle-0.1.xsd,oozie-bundle-0.2.xsd\n        </value>\n        <description>\n            List of schemas for bundles (separated by commas).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaService.bundle.ext.schemas</name>\n        <value> </value>\n        <description>\n            List of additional schemas for bundles (separated by commas).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaService.sla.schemas</name>\n        <value>\n            gms-oozie-sla-0.1.xsd,oozie-sla-0.2.xsd\n        </value>\n        <description>\n            List of schemas for semantic validation for GMS SLA (separated by commas).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaService.sla.ext.schemas</name>\n        <value> </value>\n        <description>\n            List of additional schemas for semantic validation for GMS SLA (separated by commas).\n        </description>\n    </property>\n\n    <!-- 
CallbackService -->\n\n    <property>\n        <name>oozie.service.CallbackService.base.url</name>\n        <value>${oozie.base.url}/callback</value>\n        <description>\n            Base callback URL used by ActionExecutors.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallbackService.early.requeue.max.retries</name>\n        <value>5</value>\n        <description>\n            If Oozie receives a callback too early (while the action is in PREP state), it will requeue the command this many times\n            to give the action time to transition to RUNNING.\n        </description>\n    </property>\n\n    <!-- CallbackServlet -->\n\n    <property>\n        <name>oozie.servlet.CallbackServlet.max.data.len</name>\n        <value>2048</value>\n        <description>\n            Max size in characters for the action completion data output.\n        </description>\n    </property>\n\n    <!-- External stats-->\n\n    <property>\n        <name>oozie.external.stats.max.size</name>\n        <value>-1</value>\n        <description>\n            Max size in bytes for action stats. 
-1 means infinite value.\n        </description>\n    </property>\n\n    <!-- JobCommand -->\n\n    <property>\n        <name>oozie.JobCommand.job.console.url</name>\n        <value>${oozie.base.url}?job=</value>\n        <description>\n            Base console URL for a workflow job.\n        </description>\n    </property>\n\n\n    <!-- ActionService -->\n\n    <property>\n        <name>oozie.service.ActionService.executor.classes</name>\n        <value>\n            org.apache.oozie.action.decision.DecisionActionExecutor,\n            org.apache.oozie.action.hadoop.JavaActionExecutor,\n            org.apache.oozie.action.hadoop.FsActionExecutor,\n            org.apache.oozie.action.hadoop.MapReduceActionExecutor,\n            org.apache.oozie.action.hadoop.PigActionExecutor,\n            org.apache.oozie.action.hadoop.HiveActionExecutor,\n            org.apache.oozie.action.hadoop.ShellActionExecutor,\n            org.apache.oozie.action.hadoop.SqoopActionExecutor,\n            org.apache.oozie.action.hadoop.DistcpActionExecutor,\n            org.apache.oozie.action.hadoop.Hive2ActionExecutor,\n            org.apache.oozie.action.ssh.SshActionExecutor,\n            org.apache.oozie.action.oozie.SubWorkflowActionExecutor,\n            org.apache.oozie.action.email.EmailActionExecutor,\n            org.apache.oozie.action.hadoop.SparkActionExecutor,\n            org.apache.oozie.action.hadoop.GitActionExecutor\n        </value>\n        <description>\n            List of ActionExecutors classes (separated by commas).\n            Only action types with associated executors can be used in workflows.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ActionService.executor.ext.classes</name>\n        <value> </value>\n        <description>\n            List of ActionExecutors extension classes (separated by commas). Only action types with associated\n            executors can be used in workflows. 
This property is a convenience property to add extensions to the built\n            in executors without having to include all the built in ones.\n        </description>\n    </property>\n\n    <!-- ActionCheckerService -->\n\n    <property>\n        <name>oozie.service.ActionCheckerService.action.check.interval</name>\n        <value>60</value>\n        <description>\n            The frequency at which the ActionCheckerService will run.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ActionCheckerService.action.check.delay</name>\n        <value>600</value>\n        <description>\n            The time, in seconds, between an ActionCheck for the same action.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ActionCheckerService.callable.batch.size</name>\n        <value>10</value>\n        <description>\n            This value determines the number of actions which will be batched together\n            to be executed by a single thread.\n        </description>\n    </property>\n\n    <!-- StatusTransitService -->\n    <property>\n        <name>oozie.service.StatusTransitService.statusTransit.interval</name>\n        <value>60</value>\n        <description>\n            The frequency in seconds at which the StatusTransitService will run.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.StatusTransitService.backward.support.for.coord.status</name>\n        <value>false</value>\n        <description>\n            true, if a coordinator job is submitted using 'uri:oozie:coordinator:0.1' and should keep the Oozie 2.x status transitions.\n            If set to true,\n            1. SUCCEEDED state in coordinator job means materialization done.\n            2. No DONEWITHERROR state in coordinator job\n            3. No PAUSED or PREPPAUSED state in coordinator job\n            4. 
PREPSUSPENDED becomes SUSPENDED in coordinator job\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.StatusTransitService.backward.support.for.states.without.error</name>\n        <value>true</value>\n        <description>\n            true, if you want to keep the Oozie 3.2 status transitions.\n            Change it to false for Oozie 4.x releases.\n            If set to true,\n            there are no states like RUNNINGWITHERROR, SUSPENDEDWITHERROR and PAUSEDWITHERROR\n            for coordinator and bundle jobs.\n        </description>\n    </property>\n\n    <!-- PauseTransitService -->\n    <property>\n        <name>oozie.service.PauseTransitService.PauseTransit.interval</name>\n        <value>60</value>\n        <description>\n            The frequency in seconds at which the PauseTransitService will run.\n        </description>\n    </property>\n\n    <!-- LauncherAMUtils -->\n    <property>\n        <name>oozie.action.max.output.data</name>\n        <value>2048</value>\n        <description>\n            Max size in characters for output data.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.fs.glob.max</name>\n        <value>50000</value>\n        <description>\n            Maximum number of globbed files.\n        </description>\n    </property>\n\n    <!-- JavaActionExecutor -->\n    <!-- This is common to the subclasses of action executors for Java (e.g. map-reduce, pig, hive, java, etc) -->\n\n    <property>\n        <name>oozie.action.launcher.am.restart.kill.childjobs</name>\n        <value>true</value>\n        <description>\n            Multiple instances of launcher jobs can happen due to RM non-work preserving recovery on RM restart, AM recovery\n            due to crashes or AM network connectivity loss. This could also lead to orphaned child jobs of the old AM attempts\n            leading to conflicting runs. 
This kills child jobs of previous attempts using YARN application tags.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.spark.setup.hadoop.conf.dir</name>\n        <value>false</value>\n        <description>\n            Oozie action.xml (oozie.action.conf.xml) contains all the Hadoop configuration and user-provided configurations.\n            This property allows users to copy the Oozie action.xml as Hadoop *-site configuration files. The advantage is that\n            users need not manage these files in the Spark sharelib. Users who want to manage the Hadoop configurations\n            themselves should disable it.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.shell.setup.hadoop.conf.dir</name>\n        <value>false</value>\n        <description>\n            The Shell action is commonly used to run programs that rely on HADOOP_CONF_DIR (e.g. hive, beeline, sqoop, etc).  With\n            YARN, HADOOP_CONF_DIR is set to the NodeManager's copies of Hadoop's *-site.xml files, which can be problematic because\n            (a) they are meant for the NM, not necessarily clients, and (b) they won't have any of the configs that Oozie, or\n            the user through Oozie, sets.  When this property is set to true, the Shell action will prepare the *-site.xml files\n            based on the correct config and set HADOOP_CONF_DIR to point to it.  Setting it to false will make Oozie leave\n            HADOOP_CONF_DIR alone.  This can also be set at the Action level by putting it in the Shell Action's configuration\n            section, which also takes priority.  
That all said, it's recommended to use the appropriate action type when possible.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.shell.setup.hadoop.conf.dir.write.log4j.properties</name>\n        <value>true</value>\n        <description>\n            Toggle to control if a log4j.properties file should be written into the configuration directory prepared when\n            oozie.action.shell.setup.hadoop.conf.dir is enabled. This is used to control logging behavior of log4j using commands\n            run within the shell action script, and to ensure logging does not impact output data capture if leaked to stdout.\n            Content of the written file is determined by the value of oozie.action.shell.setup.hadoop.conf.dir.log4j.content.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.shell.setup.hadoop.conf.dir.log4j.content</name>\n        <value>\n            log4j.rootLogger=INFO,console\n            log4j.appender.console=org.apache.log4j.ConsoleAppender\n            log4j.appender.console.target=System.err\n            log4j.appender.console.layout=org.apache.log4j.PatternLayout\n            log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n\n        </value>\n        <description>\n            The value to write into a log4j.properties file under the config directory created when\n            oozie.action.shell.setup.hadoop.conf.dir and oozie.action.shell.setup.hadoop.conf.dir.write.log4j.properties\n            properties are both enabled. 
The values must be properly newline separated and in the format expected by Log4j.\n            Leading and trailing whitespace will be trimmed when reading this property.\n            This is used to control logging behavior of log4j using commands run within the shell action script.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.shell.max-print-size-kb</name>\n        <value>128</value>\n        <description>\n            When an Oozie shell action starts, the shell script will be printed. Scripts larger than the size configured here\n            (in KiB) will not be printed. If this value is less than or equal to zero, the script will not be printed.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.launcher.yarn.timeline-service.enabled</name>\n        <value>false</value>\n        <description>\n            Enables/disables getting delegation tokens for ATS for the launcher job in\n            YARN/Hadoop 2.6 (no effect in Hadoop 1) for all action types by default if tez-site.xml is present in\n            distributed cache.\n            This can be overridden on a per-action basis by setting\n            oozie.launcher.yarn.timeline-service.enabled in an action's configuration section in a workflow.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.pig.log.expandedscript</name>\n        <value>true</value>\n        <description>\n            Logs the expanded Pig script in the launcher stdout log.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.rootlogger.log.level</name>\n        <value>INFO</value>\n        <description>\n            Logging level for the root logger.\n        </description>\n    </property>\n\n    <!-- HadoopActionExecutor -->\n    <!-- This is common to the subclass action executors for map-reduce and pig -->\n\n    <property>\n        <name>oozie.action.retries.max</name>\n        
<value>3</value>\n        <description>\n            The number of retries for executing an action in case of failure.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.retry.interval</name>\n        <value>10</value>\n        <description>\n            The interval between retries of an action in case of failure.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.retry.policy</name>\n        <value>periodic</value>\n        <description>\n            Retry policy of an action in case of failure. Possible values are periodic/exponential.\n        </description>\n    </property>\n\n    <!-- SshActionExecutor -->\n\n    <property>\n        <name>oozie.action.ssh.delete.remote.tmp.dir</name>\n        <value>true</value>\n        <description>\n            If set to true, the temporary directory will be deleted at the end of the ssh action's execution.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.ssh.http.command</name>\n        <value>curl</value>\n        <description>\n            Command to use for the callback to Oozie, normally 'curl' or 'wget'.\n            The command must be available in the PATH environment variable of the USER@HOST box shell.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.ssh.http.command.post.options</name>\n        <value>--data-binary @#stdout --request POST --header \"content-type:text/plain\"</value>\n        <description>\n            The callback command POST options.\n            Used when the output of the ssh action is captured.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.ssh.allow.user.at.host</name>\n        <value>true</value>\n        <description>\n            Specifies whether the user specified by the ssh action is allowed or is to be replaced\n            by the job user.\n        </description>\n    </property>\n\n    <property>\n        
<name>oozie.action.ssh.check.retries.max</name>\n        <value>3</value>\n        <description>\n            Maximum retry count for the ssh action status check.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.ssh.check.initial.retry.wait.time</name>\n        <value>3000</value>\n        <description>\n            Initial wait time before the first status check retry.\n        </description>\n    </property>\n\n    <!-- SubworkflowActionExecutor -->\n\n    <property>\n        <name>oozie.action.subworkflow.max.depth</name>\n        <value>50</value>\n        <description>\n            The maximum depth for subworkflows.  For example, if set to 3, then a workflow can start subwf1, which can start subwf2,\n            which can start subwf3; but if subwf3 tries to start subwf4, then the action will fail.  This is helpful in preventing\n            errant workflows from starting infinitely recursive subworkflows.\n        </description>\n    </property>\n\n    <!-- HadoopAccessorService -->\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.kerberos.enabled</name>\n        <value>false</value>\n        <description>\n            Indicates if Oozie is configured to use Kerberos.\n        </description>\n    </property>\n\n    <property>\n        <name>local.realm</name>\n        <value>LOCALHOST</value>\n        <description>\n            Kerberos Realm used by Oozie and Hadoop. 
The name 'local.realm' is used to be aligned with the Hadoop configuration.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.keytab.file</name>\n        <value>${user.home}/oozie.keytab</value>\n        <description>\n            Location of the Oozie user keytab file.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.kerberos.principal</name>\n        <value>${user.name}/localhost@${local.realm}</value>\n        <description>\n            Kerberos principal for Oozie service.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.jobTracker.whitelist</name>\n        <value> </value>\n        <description>\n            Whitelisted job tracker for Oozie service.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.nameNode.whitelist</name>\n        <value> </value>\n        <description>\n            Whitelisted NameNode for Oozie service.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>\n        <value>*=hadoop-conf</value>\n        <description>\n            Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of\n            the Hadoop service (JobTracker, YARN, HDFS). The wildcard '*' configuration is\n            used when there is no exact match for an authority. The HADOOP_CONF_DIR contains\n            the relevant Hadoop *-site.xml files. If the path is relative, it is looked up within\n            the Oozie configuration directory; the path can also be absolute (i.e. 
to point\n            to Hadoop client conf/ directories in the local filesystem).\n        </description>\n    </property>\n\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.action.configurations</name>\n        <value>*=action-conf</value>\n        <description>\n            Comma separated AUTHORITY=ACTION_CONF_DIR, where AUTHORITY is the HOST:PORT of\n            the Hadoop MapReduce service (JobTracker, YARN). The wildcard '*' configuration is\n            used when there is no exact match for an authority. The ACTION_CONF_DIR may contain\n            ACTION.xml files where ACTION is the action type ('java', 'map-reduce', 'pig',\n            'hive', 'sqoop', etc.). If the ACTION.xml file exists, its properties will be used\n            as default properties for the action. If the path is relative, it is looked up within\n            the Oozie configuration directory; the path can also be absolute (i.e. to point\n            to Hadoop client conf/ directories in the local filesystem).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.action.configurations.load.default.resources</name>\n        <value>true</value>\n        <description>\n            true means that default and site xml files of hadoop (core-default, core-site,\n            hdfs-default, hdfs-site, mapred-default, mapred-site, yarn-default, yarn-site)\n            are parsed into actionConf on the Oozie server. false means that site xml files are\n            not loaded on the server but are instead loaded on the launcher node.\n            This is only done for pig and hive actions, which handle loading those files\n            automatically from the classpath on the launcher task. 
It defaults to true.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.fs.s3a</name>\n        <value> </value>\n        <description>\n            You can configure custom s3a file system properties globally.\n            The value shall be a comma separated list of key=value pairs. For example:\n            fs.s3a.fast.upload.buffer=bytebuffer,fs.s3a.impl.disable.cache=true\n            Limitation: the custom file system properties cannot contain a comma\n            in either the key or the value.\n        </description>\n    </property>\n\n    <!-- Credentials -->\n    <property>\n        <name>oozie.credentials.credentialclasses</name>\n        <value> </value>\n        <description>\n            A list of credential class mappings for CredentialsProvider.\n        </description>\n    </property>\n    <property>\n        <name>oozie.credentials.skip</name>\n        <value>false</value>\n        <description>\n            This determines if Oozie should skip getting credentials from the credential providers.  
This can be overwritten at the\n            job or action level.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.actions.main.classnames</name>\n        <value>distcp=org.apache.hadoop.tools.DistCp</value>\n        <description>\n            A list of class name mappings for Action classes.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.WorkflowAppService.system.libpath</name>\n        <value>/user/${user.name}/share/lib</value>\n        <description>\n            System library path to use for workflow applications.\n            This path is added to a workflow application if its job properties set\n            the property 'oozie.use.system.libpath' to true.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.command.default.lock.timeout</name>\n        <value>5000</value>\n        <description>\n            Default timeout (in milliseconds) for commands for acquiring an exclusive lock on an entity.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.command.default.requeue.delay</name>\n        <value>10000</value>\n        <description>\n            Default time (in milliseconds) for commands that are requeued for delayed execution.\n        </description>\n    </property>\n\n    <!-- LiteWorkflowStoreService, Workflow Action Automatic Retry -->\n\n    <property>\n        <name>oozie.service.LiteWorkflowStoreService.user.retry.max</name>\n        <value>3</value>\n        <description>\n            The default maximum automatic retry count for a workflow action is 3.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.LiteWorkflowStoreService.user.retry.inteval</name>\n        <value>10</value>\n        <description>\n            The automatic retry interval for a workflow action is in minutes; the default value is 10 minutes.\n        </description>\n    </property>\n\n    <property>\n        
<name>oozie.service.LiteWorkflowStoreService.user.retry.policy</name>\n        <value>periodic</value>\n        <description>\n            Automatic retry policy for workflow action. Possible values are periodic or exponential, periodic being the default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.LiteWorkflowStoreService.user.retry.error.code</name>\n        <value>JA008,JA009,JA017,JA018,JA019,FS009,FS008,FS014</value>\n        <description>\n            Automatic retry for a workflow action is performed for these specified error codes:\n            FS008 and FS009 are file-exists errors when using chmod in the fs action.\n            FS014 is a permission error in the fs action.\n            JA018 is an output-directory-exists error in the workflow map-reduce action.\n            JA019 is an error while executing a distcp action.\n            JA017 is a job-does-not-exist error in the action executor.\n            JA008 is a FileNotFoundException in the action executor.\n            JA009 is an IOException in the action executor.\n            ALL matches any kind of error in the action executor.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.LiteWorkflowStoreService.user.retry.error.code.ext</name>\n        <value> </value>\n        <description>\n            Automatic retry for a workflow action is also performed for these specified extra error codes:\n            ALL matches any kind of error in the action executor.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.LiteWorkflowStoreService.node.def.version</name>\n        <value>_oozie_inst_v_2</value>\n        <description>\n            NodeDef default version, _oozie_inst_v_0, _oozie_inst_v_1 or _oozie_inst_v_2.\n        </description>\n    </property>\n\n    <!-- Oozie Authentication -->\n\n    <property>\n        <name>oozie.authentication.type</name>\n        <value>simple</value>\n        <description>\n            Defines the authentication 
used for the Oozie HTTP endpoint.\n            Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#\n        </description>\n    </property>\n    <property>\n        <name>oozie.server.authentication.type</name>\n        <value>${oozie.authentication.type}</value>\n        <description>\n            Defines the authentication used for an Oozie server communicating to another Oozie server over HTTP(S).\n            Supported values are: simple | kerberos | #AUTHENTICATOR_CLASSNAME#\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.server.connection.timeout.seconds</name>\n        <value>180</value>\n        <description>\n            Defines the connection timeout used for an Oozie server communicating to another Oozie server over HTTP(S). Default is 3 min.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.authentication.token.validity</name>\n        <value>36000</value>\n        <description>\n            Indicates how long (in seconds) an authentication token is valid before it has\n            to be renewed.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.authentication.cookie.domain</name>\n        <value></value>\n        <description>\n            The domain to use for the HTTP cookie that stores the authentication token.\n            In order for authentication to work correctly across multiple hosts\n            the domain must be set correctly.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.authentication.simple.anonymous.allowed</name>\n        <value>true</value>\n        <description>\n            Indicates if anonymous requests are allowed when using 'simple' authentication.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.authentication.kerberos.principal</name>\n        <value>HTTP/localhost@${local.realm}</value>\n        <description>\n            Indicates the Kerberos principal to be used for the 
HTTP endpoint.\n            The principal MUST start with 'HTTP/' as per the Kerberos HTTP SPNEGO specification.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.authentication.kerberos.keytab</name>\n        <value>${oozie.service.HadoopAccessorService.keytab.file}</value>\n        <description>\n            Location of the keytab file with the credentials for the principal.\n            Referring to the same keytab file Oozie uses for its Kerberos credentials for Hadoop.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.authentication.kerberos.name.rules</name>\n        <value>DEFAULT</value>\n        <description>\n            The Kerberos name rules used to resolve Kerberos principal names; refer to Hadoop's\n            KerberosName for more details.\n        </description>\n    </property>\n\n    <!-- Coordinator \"NONE\" execution order default time tolerance -->\n    <property>\n        <name>oozie.coord.execution.none.tolerance</name>\n        <value>1</value>\n        <description>\n            Default time tolerance in minutes after the action nominal time for an action to be skipped\n            when the execution order is \"NONE\"\n        </description>\n    </property>\n\n    <!-- Coordinator Actions default length -->\n    <property>\n        <name>oozie.coord.actions.default.length</name>\n        <value>1000</value>\n        <description>\n            Default number of coordinator actions to be retrieved by the info command\n        </description>\n    </property>\n\n    <!-- ForkJoin validation -->\n    <property>\n        <name>oozie.validate.ForkJoin</name>\n        <value>true</value>\n        <description>\n            If true, fork and join should be validated at wf submission time.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.workflow.parallel.fork.action.start</name>\n        <value>true</value>\n        <description>\n            Determines how Oozie 
processes starting of forked actions. If true, forked actions and their job submissions\n            are done in parallel, which is best for performance. If false, they are submitted sequentially.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.coord.action.get.all.attributes</name>\n        <value>false</value>\n        <description>\n            Setting this to true is not recommended as coord job/action info will bring all columns of the action into memory.\n            Set it to true only if backward compatibility for action/job info is required.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.supported.filesystems</name>\n        <value>hdfs,hftp,webhdfs</value>\n        <description>\n            Lists the different filesystems supported for federation. If the wildcard \"*\" is specified,\n            then ALL file schemes will be allowed.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.URIHandlerService.uri.handlers</name>\n        <value>org.apache.oozie.dependency.FSURIHandler</value>\n        <description>\n            Lists the different URI handlers supported for data availability checks.\n        </description>\n    </property>\n    <!-- Oozie HTTP Notifications -->\n\n    <property>\n        <name>oozie.notification.url.connection.timeout</name>\n        <value>10000</value>\n        <description>\n            Defines the timeout, in milliseconds, for Oozie HTTP notification callbacks. Oozie does\n            HTTP notifications for workflow jobs which set the 'oozie.wf.action.notification.url',\n            'oozie.wf.workflow.notification.url' and/or 'oozie.coord.action.notification.url'\n            properties in their job.properties. 
Refer to section '5 Oozie Notifications' in the\n            Workflow specification for details.\n        </description>\n    </property>\n\n\n    <!-- Enable Distributed Cache workaround for Hadoop 2.0.2-alpha (MAPREDUCE-4820) -->\n    <property>\n        <name>oozie.hadoop-2.0.2-alpha.workaround.for.distributed.cache</name>\n        <value>false</value>\n        <description>\n            Due to a bug in Hadoop 2.0.2-alpha, MAPREDUCE-4820, launcher jobs fail to set\n            the distributed cache for the action job because the local JARs are implicitly\n            included, triggering a duplicate check.\n            This flag removes the distributed cache files for the action as they'll be\n            included from the local JARs of the JobClient (MRApps) submitting the action\n            job from the launcher.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.EventHandlerService.filter.app.types</name>\n        <value>workflow_job, coordinator_action</value>\n        <description>\n            The app-types among workflow/coordinator/bundle job/action for which\n            the events system is enabled.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.EventHandlerService.event.queue</name>\n        <value>org.apache.oozie.event.MemoryEventQueue</value>\n        <description>\n            The implementation for EventQueue in use by the EventHandlerService.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.EventHandlerService.event.listeners</name>\n        <value>org.apache.oozie.jms.JMSJobEventListener</value>\n    </property>\n\n    <property>\n        <name>oozie.service.EventHandlerService.queue.size</name>\n        <value>10000</value>\n        <description>\n            Maximum number of events to be contained in the event queue.\n        </description>\n    </property>\n\n    <property>\n        
<name>oozie.service.EventHandlerService.worker.interval</name>\n        <value>30</value>\n        <description>\n            The default interval (seconds) at which the worker threads will be scheduled to run\n            and process events.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.EventHandlerService.batch.size</name>\n        <value>10</value>\n        <description>\n            The batch size for batched draining per thread from the event queue.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.EventHandlerService.worker.threads</name>\n        <value>3</value>\n        <description>\n            Number of worker threads to be scheduled to run and process events.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.sla.service.SLAService.capacity</name>\n        <value>5000</value>\n        <description>\n            Maximum number of sla records to be contained in the memory structure.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.sla.service.SLAService.alert.events</name>\n        <value>END_MISS</value>\n        <description>\n            Default types of SLA events to be alerted about.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.sla.service.SLAService.calculator.impl</name>\n        <value>org.apache.oozie.sla.SLACalculatorMemory</value>\n        <description>\n            The implementation for SLACalculator in use by the SLAService.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.sla.service.SLAService.job.event.latency</name>\n        <value>90000</value>\n        <description>\n            Time in milliseconds to account for the latency of getting the job status event,\n            used to compare against and decide SLA miss/met.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.sla.service.SLAService.check.interval</name>\n 
       <value>30</value>\n        <description>\n            Time interval, in seconds, at which the SLA Worker will be scheduled to run\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.sla.disable.alerts.older.than</name>\n        <value>48</value>\n        <description>\n            Time threshold, in HOURS, for disabling SLA alerting for jobs whose\n            nominal time is older than this.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.sla.service.SLAService.maximum.retry.count</name>\n        <value>3</value>\n        <description>\n            Number of times Oozie will retry updating an SLA calculator status when a database-related error occurs.\n            It's possible that multiple WorkflowJobBean / CoordActionBean instances being inserted won't have SLACalcStatus entries\n            inside SLACalculatorMemory#slaMap by the time they are written to the database, and thus no SLA will be tracked.\n            In those rare cases, the preconfigured maximum retry count can be extended.\n        </description>\n    </property>\n\n    <!-- ZooKeeper configuration -->\n    <property>\n        <name>oozie.zookeeper.connection.string</name>\n        <value>localhost:2181</value>\n        <description>\n            Comma-separated values of host:port pairs of the ZooKeeper servers.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.zookeeper.namespace</name>\n        <value>oozie</value>\n        <description>\n            The namespace to use.  
All of the Oozie Servers that are planning on talking to each other should have the same\n            namespace.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.zookeeper.connection.timeout</name>\n        <value>180</value>\n        <description>\n            Default ZK connection timeout (in sec).\n        </description>\n    </property>\n    <property>\n        <name>oozie.zookeeper.session.timeout</name>\n        <value>300</value>\n        <description>\n            Default ZK session timeout (in sec). If the connection is lost even after retry, the Oozie server will shut down\n            itself if oozie.zookeeper.server.shutdown.ontimeout is true.\n        </description>\n    </property>\n    <property>\n        <name>oozie.zookeeper.max.retries</name>\n        <value>10</value>\n        <description>\n            Maximum number of times to retry.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.zookeeper.server.shutdown.ontimeout</name>\n        <value>true</value>\n        <description>\n            If true, the Oozie server will shut down itself on ZK\n            connection timeout.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ZKLocksService.lock.release.retry.time.limit.minutes</name>\n        <value>30</value>\n        <description>\n            On an exception while releasing a lock, Oozie will retry with exponential backoff for up to the specified number of minutes before giving up.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.http.hostname</name>\n        <value>0.0.0.0</value>\n        <description>\n            Oozie server host name. 
The network interface Oozie server binds to as an IP address or a hostname.\n            Most users won't need to change this setting from the default value.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.http.port</name>\n        <value>11000</value>\n        <description>\n            Oozie server port.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.http.request.header.size</name>\n        <value>65536</value>\n        <description>\n            Oozie HTTP request header size.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.http.response.header.size</name>\n        <value>65536</value>\n        <description>\n            Oozie HTTP response header size.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.port</name>\n        <value>11443</value>\n        <description>\n            Oozie ssl server port.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.enabled</name>\n        <value>false</value>\n        <description>\n            Controls whether SSL encryption is enabled.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.truststore.file</name>\n        <value></value>\n        <description>\n            Path to a TrustStore file.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.keystore.file</name>\n        <value></value>\n        <description>\n            Path to a KeyStore file.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.keystore.pass</name>\n        <value></value>\n        <description>\n            Password to the KeyStore.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.include.protocols</name>\n        <value>TLSv1.1,TLSv1.2,TLSv1.3</value>\n        <description>\n            Enabled TLS protocols.\n        </description>\n    
</property>\n\n    <property>\n        <name>oozie.https.exclude.protocols</name>\n        <value></value>\n        <description>\n            Disabled TLS protocols.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.include.cipher.suites</name>\n        <value></value>\n        <description>\n            List of Cipher suites to include.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.exclude.cipher.suites</name>\n        <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,SSL_RSA_WITH_RC4_128_MD5</value>\n        <description>\n            List of weak Cipher suites to exclude.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.hsts.max.age.seconds</name>\n        <value>31536000</value>\n        <description>\n            Strict Transport Security max age in seconds if SSL is enabled. 
Ideally it is set to one year (31536000 sec).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.jsp.tmp.dir</name>\n        <value>/tmp</value>\n        <description>\n            Temporary directory for compiling JSP pages.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.server.threadpool.max.threads</name>\n        <value>150</value>\n        <description>\n            Controls the threadpool size for the Oozie Server (if using embedded Jetty).\n        </description>\n    </property>\n\n    <!-- Sharelib Configuration -->\n    <property>\n        <name>oozie.service.ShareLibService.mapping.file</name>\n        <value> </value>\n        <description>\n            The sharelib mapping file contains a list of key=value entries,\n            where the key is the sharelib name for the action and the value is a comma separated list of\n            DFS or local filesystem directories or jar files.\n            Example:\n            oozie.pig_10=hdfs:///share/lib/pig/pig-0.10.1/lib/\n            oozie.pig=hdfs:///share/lib/pig/pig-0.11.1/lib/\n            oozie.distcp=hdfs:///share/lib/hadoop-2.2.0/share/hadoop/tools/lib/hadoop-distcp-2.2.0.jar\n            oozie.hive=file:///usr/local/oozie/share/lib/hive/\n        </description>\n\n    </property>\n    <property>\n        <name>oozie.service.ShareLibService.fail.fast.on.startup</name>\n        <value>false</value>\n        <description>\n            Fails server startup if sharelib initialization fails.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ShareLibService.purge.interval</name>\n        <value>1</value>\n        <description>\n            How often, in days, Oozie should check for old ShareLibs and LauncherLibs to purge from HDFS.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ShareLibService.temp.sharelib.retention.days</name>\n        <value>7</value>\n        <description>\n          
  ShareLib retention time in days.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.ship.launcher.jar</name>\n        <value>false</value>\n        <description>\n            Specifies whether the launcher jar is shipped or not.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.jobinfo.enable</name>\n        <value>false</value>\n        <description>\n            JobInfo will contain information about the bundle, coordinator, workflow and actions. If enabled, the hadoop job will have\n            a property (oozie.job.info) whose value is multiple key/value pairs separated by \",\". This information can be used for\n            analytics such as how many oozie jobs were submitted in a particular period or the total number of failed pig jobs,\n            derived from mapreduce job history logs and configuration.\n            Users can also add custom workflow properties to jobinfo by adding properties prefixed with \"oozie.job.info.\"\n            E.g.\n            oozie.job.info=\"bundle.id=,bundle.name=,coord.name=,coord.nominal.time=,coord.name=,wf.id=,\n            wf.name=,action.name=,action.type=,launcher=true\"\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.XLogStreamingService.max.log.scan.duration</name>\n        <value>-1</value>\n        <description>\n            Max log scan duration in hours. If log scan request end_date - start_date > value,\n            then an exception is thrown to reduce the scan duration. 
-1 indicates no limit.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.XLogStreamingService.actionlist.max.log.scan.duration</name>\n        <value>-1</value>\n        <description>\n            Max log scan duration in hours for a coordinator job when a list of actions is specified.\n            If log streaming request end_date - start_date > value, then an exception is thrown to reduce the scan duration.\n            -1 indicates no limit.\n            This setting is separate from max.log.scan.duration as we want to allow higher durations when actions are specified.\n        </description>\n    </property>\n\n    <!-- JvmPauseMonitorService Configuration -->\n    <property>\n        <name>oozie.service.JvmPauseMonitorService.warn-threshold.ms</name>\n        <value>10000</value>\n        <description>\n            The JvmPauseMonitorService runs a thread that repeatedly tries to detect when the JVM pauses, which could indicate\n            that the JVM or host machine is overloaded or has other problems.  This thread sleeps for 500ms; if it sleeps for\n            significantly longer, then there is likely a problem.  This property specifies the threshold for when Oozie should log\n            a WARN level message; there is also a counter named \"jvm.pause.warn-threshold\".\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JvmPauseMonitorService.info-threshold.ms</name>\n        <value>1000</value>\n        <description>\n            The JvmPauseMonitorService runs a thread that repeatedly tries to detect when the JVM pauses, which could indicate\n            that the JVM or host machine is overloaded or has other problems.  This thread sleeps for 500ms; if it sleeps for\n            significantly longer, then there is likely a problem.  
This property specifies the threshold for when Oozie should log\n            an INFO level message; there is also a counter named \"jvm.pause.info-threshold\".\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ZKLocksService.locks.reaper.threshold</name>\n        <value>300</value>\n        <description>\n            The frequency at which the ChildReaper will run.\n            The duration should be in seconds. Default is 5 min.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ZKLocksService.locks.reaper.threads</name>\n        <value>2</value>\n        <description>\n            Number of fixed threads used by the ChildReaper to\n            delete empty locks.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AbandonedCoordCheckerService.check.interval\n        </name>\n        <value>1440</value>\n        <description>\n            Interval, in minutes, at which AbandonedCoordCheckerService should run.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AbandonedCoordCheckerService.check.delay\n        </name>\n        <value>60</value>\n        <description>\n            Delay, in minutes, after which AbandonedCoordCheckerService should run.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AbandonedCoordCheckerService.failure.limit\n        </name>\n        <value>25</value>\n        <description>\n            Failure limit. 
A job is considered to be abandoned/faulty if the total number of actions in\n            failed/timedout/suspended states >= \"Failure limit\" and there is no succeeded action.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AbandonedCoordCheckerService.kill.jobs\n        </name>\n        <value>false</value>\n        <description>\n            If true, AbandonedCoordCheckerService will kill abandoned coords.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AbandonedCoordCheckerService.job.older.than</name>\n        <value>2880</value>\n        <description>\n            A job will be considered abandoned/faulty if it is older than this value, in minutes.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.notification.proxy</name>\n        <value></value>\n        <description>\n            System level proxy setting for job notifications.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.wf.rerun.disablechild</name>\n        <value>false</value>\n        <description>\n            If this option is set, workflow rerun will be disabled when a parent workflow or coordinator exists, and\n            the rerun can only be done through the parent.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.use.system.libpath</name>\n        <value>false</value>\n        <description>\n            Default value of oozie.use.system.libpath. 
If the user hasn't specified =oozie.use.system.libpath=\n            in the job.properties and this value is true, Oozie will include the sharelib jars for the workflow.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.PauseTransitService.callable.batch.size\n        </name>\n        <value>10</value>\n        <description>\n            This value determines the number of callables which will be batched together\n            to be executed by a single thread.\n        </description>\n    </property>\n\n    <!-- XConfiguration -->\n    <property>\n        <name>oozie.configuration.substitute.depth</name>\n        <value>20</value>\n        <description>\n            This value determines the depth of substitution in configurations.\n            If set to -1, there is no limit on substitution.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SparkConfigurationService.spark.configurations</name>\n        <value>*=spark-conf</value>\n        <description>\n            Comma separated AUTHORITY=SPARK_CONF_DIR, where AUTHORITY is the HOST:PORT of\n            the ResourceManager of a YARN cluster. The wildcard '*' configuration is\n            used when there is no exact match for an authority. The SPARK_CONF_DIR contains\n            the relevant spark-defaults.conf properties file. If the path is relative, it is looked up within\n            the Oozie configuration directory; the path can also be absolute.  
This is only used\n            when the Spark master is set to either \"yarn-client\" or \"yarn-cluster\".\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SparkConfigurationService.spark.configurations.blacklist</name>\n        <value>spark.yarn.jar,spark.yarn.jars</value>\n        <description>\n            Comma separated list of properties to ignore from any Spark configurations specified in\n            oozie.service.SparkConfigurationService.spark.configurations property.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SparkConfigurationService.spark.configurations.ignore.spark.yarn.jar</name>\n        <value>true</value>\n        <description>\n            Deprecated. Use oozie.service.SparkConfigurationService.spark.configurations.blacklist instead.\n            If true, Oozie will ignore the \"spark.yarn.jar\" property from any Spark configurations specified in\n            oozie.service.SparkConfigurationService.spark.configurations.  If false, Oozie will not ignore it.  
It is recommended\n            to leave this as true because the \"spark.yarn.jar\" property can interfere with the jars in the Spark sharelib.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.attachment.enabled</name>\n        <value>true</value>\n        <description>\n            This value determines whether to support email attachment of a file on HDFS.\n            Set it to false if there is any security concern.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.smtp.host</name>\n        <value>localhost</value>\n        <description>\n            The host where the email action may find the SMTP server.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.smtp.port</name>\n        <value>25</value>\n        <description>\n            The port to connect to for the SMTP server, for email actions.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.smtp.auth</name>\n        <value>false</value>\n        <description>\n            Boolean property that toggles whether authentication is to be done when using email actions.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.smtp.starttls.enable</name>\n        <value>false</value>\n        <description>\n            Boolean property that toggles whether to use TLS in communication.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.smtp.username</name>\n        <value></value>\n        <description>\n            If authentication is enabled for email actions, the username to login as (to the SMTP server).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.smtp.password</name>\n        <value></value>\n        <description>\n            If authentication is enabled for email actions, the password to login with (to the SMTP server).\n        </description>\n    </property>\n\n    <property>\n        
<name>oozie.email.from.address</name>\n        <value>oozie@localhost</value>\n        <description>\n            The from address to be used for all emails sent via the email action.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.smtp.socket.timeout.ms</name>\n        <value>10000</value>\n        <description>\n            The timeout to apply over all SMTP server socket operations done during the email action.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.actions.default.name-node</name>\n        <value> </value>\n        <description>\n            The default value to use for the &lt;name-node&gt; element in applicable action types.  This value will be used when\n            neither the action itself nor the global section specifies a &lt;name-node&gt;.  As expected, it should be of the form\n            \"hdfs://HOST:PORT\".\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.actions.default.job-tracker</name>\n        <value> </value>\n        <description>\n            The default value to use for the &lt;job-tracker&gt; element in applicable action types.  This value will be used when\n            neither the action itself nor the global section specifies a &lt;job-tracker&gt;.  As expected, it should be of the form\n            \"HOST:PORT\".\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.actions.default.resource-manager</name>\n        <value> </value>\n        <description>\n            The default value to use for the &lt;resource-manager&gt; element in applicable action types.  This value will be used\n            when neither the action itself nor the global section specifies a &lt;resource-manager&gt;.  As expected, it should\n            be of the form \"HOST:PORT\". 
If both oozie.actions.default.job-tracker and oozie.actions.default.resource-manager are\n            specified, oozie.actions.default.resource-manager takes precedence.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaCheckerService.check.interval</name>\n        <value>168</value>\n        <description>\n            This is the interval at which Oozie will check the database schema, in hours.\n            A zero or negative value will disable the checker.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaCheckerService.ignore.extras</name>\n        <value>false</value>\n        <description>\n            When set to false, the schema checker will consider extra (unused) tables, columns, and indexes to be incorrect.  When\n            set to true, these will be ignored.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.hcat.uri.regex.pattern</name>\n        <value>([a-z]+://[\\w\\.\\-]+:\\d+[,]*)+/\\w+/\\w+/?[\\w+=;\\-]*</value>\n        <description>Regex pattern for HCat URIs. The regex can be modified by users as per requirement\n            for parsing/splitting the HCat URIs.</description>\n    </property>\n\n    <property>\n        <name>oozie.action.null.args.allowed</name>\n        <value>true</value>\n        <description>\n            When set to true, empty arguments (like &lt;arg&gt;&lt;/arg&gt;) will be passed as \"null\" to the main method of a\n            given action. That is, the args[] array will contain \"null\" elements. When set to false, then \"nulls\" are removed.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.javax.xml.parsers.DocumentBuilderFactory</name>\n        <value>org.apache.xerces.jaxp.DocumentBuilderFactoryImpl</value>\n        <description>\n            Oozie will set the javax.xml.parsers.DocumentBuilderFactory Java System Property to this value.  
This helps speed up\n            XML handling because the JVM doesn't have to search for the proper class every time.  An empty or whitespace value\n            skips setting the System Property.  The default implementation that Oozie uses is Xerces.\n            Most users should not have to change this.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.graphviz.timeout.seconds</name>\n        <value>60</value>\n        <description>\n            The default number of seconds after which Graphviz graph generation will time out.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.default.vcores</name>\n        <value>1</value>\n        <description>\n            The default number of vcores that are allocated for the Launcher AMs\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.default.memory.mb</name>\n        <value>2048</value>\n        <description>\n            The default amount of memory in MBs that is allocated for the Launcher AMs\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.default.priority</name>\n        <value>0</value>\n        <description>\n            The default YARN priority of the Launcher AM\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.default.queue</name>\n        <value>default</value>\n        <description>\n            The default YARN queue where the Launcher AM is placed\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.default.max.attempts</name>\n        <value>2</value>\n        <description>\n            The default YARN maximum attempt count of the Launcher AM\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override</name>\n        <value>true</value>\n        <description>\n            Whether oozie.launcher.override.* and oozie.launcher.prepend.* parameters have to be 
considered when submitting a YARN\n            LauncherAM. That is, existing MapReduce v1, MapReduce v2, or YARN parameters used in the action configuration should be\n            populated to the Application Master launcher configuration, or not. Generally, first &lt;launcher/&gt; tag specific user\n            settings, then YARN configuration settings, then MapReduce v2, and finally, MapReduce v1 properties are copied to\n            launcher configuration.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.max.attempts</name>\n        <value>mapreduce.map.maxattempts,mapred.map.max.attempts</value>\n        <description>\n            A comma separated list of MapReduce v1 and MapReduce v2 properties to override the max attempts of the MapReduce\n            Application Master. The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.memory.mb</name>\n        <value>yarn.app.mapreduce.am.resource.mb,mapreduce.map.memory.mb,mapred.job.map.memory.mb</value>\n        <description>\n            A comma separated list of MapReduce v1, MapReduce v2, and YARN properties to override the memory amount in MB of the\n            MapReduce Application Master. The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.vcores</name>\n        <value>yarn.app.mapreduce.am.resource.cpu-vcores,mapreduce.map.cpu.vcores</value>\n        <description>\n            A comma separated list of MapReduce v1, MapReduce v2, and YARN properties to override the CPU vcore count of the\n            MapReduce Application Master. 
The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.log.level</name>\n        <value>mapreduce.map.log.level,mapred.map.child.log.level</value>\n        <description>\n            A comma separated list of MapReduce v1, MapReduce v2, and YARN properties to override the logging level of the MapReduce\n            Application Master. The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.javaopts</name>\n        <value>yarn.app.mapreduce.am.command-opts,mapreduce.map.java.opts,mapred.child.java.opts</value>\n        <description>\n            A comma separated list of MapReduce v1, MapReduce v2, and YARN properties to override MapReduce Application Master JVM\n            options. The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.prepend.javaopts</name>\n        <value>yarn.app.mapreduce.am.admin-command-opts</value>\n        <description>\n            A comma separated list of YARN properties to prepend to MapReduce Application Master JVM options. The first one that is\n            found will be prepended to the list of JVM options.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.env</name>\n        <value>yarn.app.mapreduce.am.env,mapreduce.map.env,mapred.child.env</value>\n        <description>\n            A comma separated list of MapReduce v1, MapReduce v2, and YARN properties to override MapReduce Application Master\n            environment variable settings. 
The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.prepend.env</name>\n        <value>yarn.app.mapreduce.am.admin.user.env</value>\n        <description>\n            A comma separated list of YARN properties to prepend to MapReduce Application Master environment settings. The first one\n            that is found will be prepended to the list of environment settings.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.priority</name>\n        <value>mapreduce.job.priority,mapred.job.priority</value>\n        <description>\n            A comma separated list of MapReduce v1 and MapReduce v2 properties to override MapReduce Application Master job priority. The first\n            one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.queue</name>\n        <value>mapreduce.job.queuename,mapred.job.queue.name</value>\n        <description>\n            A comma separated list of MapReduce v1 and MapReduce v2 properties to override MapReduce Application Master job queue\n            name. 
The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.view.acl</name>\n        <value>mapreduce.job.acl-view-job</value>\n        <description>\n            A comma separated list of MapReduce v1 and MapReduce v2 properties to override MapReduce View ACL settings.\n            The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.modify.acl</name>\n        <value>mapreduce.job.acl-modify-job</value>\n        <description>\n            A comma separated list of MapReduce v1 and MapReduce v2 properties to override MapReduce Modify ACL settings.\n            The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.needed.for.distcp</name>\n        <value>true</value>\n        <description>\n            Whether to add MapReduce jars to the DistCp action's classpath by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.needed.for.hive</name>\n        <value>true</value>\n        <description>\n            Whether to add MapReduce jars to the Hive action's classpath by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.needed.for.hive2</name>\n        <value>true</value>\n        <description>\n            Whether to add MapReduce jars to the Hive2 action's classpath by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.needed.for.java</name>\n        <value>true</value>\n        <description>\n            Whether to add MapReduce jars to the Java action's classpath by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.needed.for.map-reduce</name>\n        <value>true</value>\n        
<description>\n            Whether to add MapReduce jars to the Map-Reduce action's classpath by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.needed.for.pig</name>\n        <value>true</value>\n        <description>\n            Whether to add MapReduce jars to the Pig action's classpath by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.needed.for.sqoop</name>\n        <value>true</value>\n        <description>\n            Whether to add MapReduce jars to the Sqoop action's classpath by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.sqoop.shellsplitter</name>\n        <value>false</value>\n        <description>\n            Whether to use the shell splitter instead of the space-based tokenizer during Sqoop command splitting.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.fluent-job-api.generated.path</name>\n        <value>/user/${user.name}/oozie-fluent-job-api-generated</value>\n        <description>\n            HDFS path to store workflow / coordinator / bundle definitions generated by the fluent-job-api artifact.\n            The XML files are first generated out of the fluent-job-api JARs submitted by the user at the command line, then stored\n            under this HDFS folder structure for later retrieval / resubmit / check.\n            Note that the submitting user needs r/w permissions under this HDFS folder.\n            Note further that this folder structure, when it does not exist, will be created.\n        </description>\n    </property>\n\n</configuration>"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/missing-info/yarn-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--Autogenerated by Cloudera Manager-->\n<configuration>\n  <property>\n    <name>yarn.acl.enable</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>yarn.admin.acl</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.address</name>\n    <!--Port not provided to test that value is not set in Named Cluster as -1-->\n    <value>cdh62n2.pentaho</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.admin.address</name>\n    <value>cdh62n2.pentaho.net:8033</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.scheduler.address</name>\n    <value>cdh62n2.pentaho.net:8030</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.resource-tracker.address</name>\n    <value>cdh62n2.pentaho.net:8031</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.webapp.address</name>\n    <value>cdh62n2.pentaho.net:8088</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.webapp.https.address</name>\n    <value>cdh62n2.pentaho.net:8090</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.client.thread-count</name>\n    <value>50</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.scheduler.client.thread-count</name>\n    <value>50</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.admin.client.thread-count</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.minimum-allocation-mb</name>\n    <value>1024</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.increment-allocation-mb</name>\n    <value>512</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.maximum-allocation-mb</name>\n    <value>2273</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.minimum-allocation-vcores</name>\n    <value>1</value>\n  </property>\n  <property>\n    
<name>yarn.scheduler.increment-allocation-vcores</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.maximum-allocation-vcores</name>\n    <value>8</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.amliveliness-monitor.interval-ms</name>\n    <value>1000</value>\n  </property>\n  <property>\n    <name>yarn.am.liveness-monitor.expiry-interval-ms</name>\n    <value>600000</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.am.max-attempts</name>\n    <value>2</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.container.liveness-monitor.interval-ms</name>\n    <value>600000</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.nm.liveness-monitor.interval-ms</name>\n    <value>1000</value>\n  </property>\n  <property>\n    <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>\n    <value>600000</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.resource-tracker.client.thread-count</name>\n    <value>50</value>\n  </property>\n  <property>\n    <name>yarn.application.classpath</name>\n    <value>$HADOOP_CLIENT_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.scheduler.class</name>\n    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.capacity.resource-calculator</name>\n    <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.max-completed-applications</name>\n    <value>10000</value>\n  </property>\n  <property>\n    <name>yarn.nodemanager.remote-app-log-dir</name>\n    <value>/tmp/logs</value>\n  </property>\n  <property>\n    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>\n    
<value>logs</value>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/secured/core-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--Autogenerated by Cloudera Manager-->\n<configuration>\n  <property>\n    <name>fs.defaultFS</name>\n    <value>hdfs://CDH62Secure</value>\n  </property>\n  <property>\n    <name>fs.trash.interval</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>io.compression.codecs</name>\n    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,org.apache.hadoop.io.compress.SnappyCodec,org.apache.hadoop.io.compress.Lz4Codec</value>\n  </property>\n  <property>\n    <name>hadoop.security.authentication</name>\n    <value>kerberos</value>\n  </property>\n  <property>\n    <name>hadoop.security.authorization</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hadoop.rpc.protection</name>\n    <value>privacy</value>\n  </property>\n  <property>\n    <name>hadoop.security.key.provider.path</name>\n    <value>kms://https@cdh62secn1.pentaho.net:16000/kms</value>\n  </property>\n  <property>\n    <name>hadoop.security.auth_to_local</name>\n    <value>DEFAULT</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.oozie.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.oozie.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.flume.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.flume.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.HTTP.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.HTTP.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hive.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hive.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n 
   <name>hadoop.proxyuser.hue.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hue.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.httpfs.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.httpfs.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hdfs.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hdfs.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.yarn.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.yarn.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.security.group.mapping</name>\n    <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>\n  </property>\n  <property>\n    <name>hadoop.security.instrumentation.requires.admin</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>net.topology.script.file.name</name>\n    <value>/etc/hadoop/conf.cloudera.yarn/topology.py</value>\n  </property>\n  <property>\n    <name>io.file.buffer.size</name>\n    <value>65536</value>\n  </property>\n  <property>\n    <name>hadoop.ssl.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hadoop.ssl.require.client.cert</name>\n    <value>false</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.keystores.factory.class</name>\n    <value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.server.conf</name>\n    <value>ssl-server.xml</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.client.conf</name>\n    <value>ssl-client.xml</value>\n    <final>true</final>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/secured/hive-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--Autogenerated by Cloudera Manager-->\n<configuration>\n  <property>\n    <name>hive.metastore.uris</name>\n    <value>thrift://cdh62secn3.pentaho.net:9083</value>\n  </property>\n  <property>\n    <name>hive.metastore.client.socket.timeout</name>\n    <value>300</value>\n  </property>\n  <property>\n    <name>hive.metastore.warehouse.dir</name>\n    <value>/user/hive/warehouse</value>\n  </property>\n  <property>\n    <name>hive.warehouse.subdir.inherit.perms</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.auto.convert.join</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.auto.convert.join.noconditionaltask.size</name>\n    <value>20971520</value>\n  </property>\n  <property>\n    <name>hive.optimize.bucketmapjoin.sortedmerge</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.smbjoin.cache.rows</name>\n    <value>10000</value>\n  </property>\n  <property>\n    <name>hive.server2.logging.operation.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.server2.logging.operation.log.location</name>\n    <value>/var/log/hive/operation_logs</value>\n  </property>\n  <property>\n    <name>mapred.reduce.tasks</name>\n    <value>-1</value>\n  </property>\n  <property>\n    <name>hive.exec.reducers.bytes.per.reducer</name>\n    <value>67108864</value>\n  </property>\n  <property>\n    <name>hive.exec.copyfile.maxsize</name>\n    <value>104857600</value>\n  </property>\n  <property>\n    <name>hive.exec.reducers.max</name>\n    <value>1099</value>\n  </property>\n  <property>\n    <name>hive.vectorized.groupby.checkinterval</name>\n    <value>4096</value>\n  </property>\n  <property>\n    <name>hive.vectorized.groupby.flush.percent</name>\n    <value>0.1</value>\n  </property>\n  <property>\n    <name>hive.compute.query.using.stats</name>\n    <value>false</value>\n  </property>\n  <property>\n    
<name>hive.vectorized.execution.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.vectorized.execution.reduce.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.vectorized.use.vectorized.input.format</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.vectorized.use.checked.expressions</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.vectorized.use.vector.serde.deserialize</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.vectorized.adaptor.usage.mode</name>\n    <value>chosen</value>\n  </property>\n  <property>\n    <name>hive.vectorized.input.format.excludes</name>\n    <value>org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat</value>\n  </property>\n  <property>\n    <name>hive.merge.mapfiles</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.merge.mapredfiles</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.cbo.enable</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.fetch.task.conversion</name>\n    <value>minimal</value>\n  </property>\n  <property>\n    <name>hive.fetch.task.conversion.threshold</name>\n    <value>268435456</value>\n  </property>\n  <property>\n    <name>hive.limit.pushdown.memory.usage</name>\n    <value>0.1</value>\n  </property>\n  <property>\n    <name>hive.merge.sparkfiles</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.merge.smallfiles.avgsize</name>\n    <value>16777216</value>\n  </property>\n  <property>\n    <name>hive.merge.size.per.task</name>\n    <value>268435456</value>\n  </property>\n  <property>\n    <name>hive.optimize.reducededuplication</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.optimize.reducededuplication.min.reducer</name>\n    <value>4</value>\n  </property>\n  <property>\n    <name>hive.map.aggr</name>\n    
<value>true</value>\n  </property>\n  <property>\n    <name>hive.map.aggr.hash.percentmemory</name>\n    <value>0.5</value>\n  </property>\n  <property>\n    <name>hive.optimize.sort.dynamic.partition</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.execution.engine</name>\n    <value>mr</value>\n  </property>\n  <property>\n    <name>spark.executor.memory</name>\n    <value>743781171b</value>\n  </property>\n  <property>\n    <name>spark.driver.memory</name>\n    <value>966367641b</value>\n  </property>\n  <property>\n    <name>spark.executor.cores</name>\n    <value>4</value>\n  </property>\n  <property>\n    <name>spark.yarn.driver.memoryOverhead</name>\n    <value>102m</value>\n  </property>\n  <property>\n    <name>spark.yarn.executor.memoryOverhead</name>\n    <value>125m</value>\n  </property>\n  <property>\n    <name>spark.dynamicAllocation.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>spark.dynamicAllocation.initialExecutors</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>spark.dynamicAllocation.minExecutors</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>spark.dynamicAllocation.maxExecutors</name>\n    <value>2147483647</value>\n  </property>\n  <property>\n    <name>hive.metastore.execute.setugi</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.support.concurrency</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.zookeeper.quorum</name>\n    <value>cdh62secn2.pentaho.net,cdh62secn1.pentaho.net,cdh62secn3.pentaho.net</value>\n  </property>\n  <property>\n    <name>hive.zookeeper.client.port</name>\n    <value>2181</value>\n  </property>\n  <property>\n    <name>hive.zookeeper.namespace</name>\n    <value>hive_zookeeper_namespace_hive</value>\n  </property>\n  <property>\n    <name>hive.cluster.delegation.token.store.class</name>\n    <value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value>\n  
</property>\n  <property>\n    <name>hive.server2.enable.doAs</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.metastore.sasl.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.server2.authentication</name>\n    <value>kerberos</value>\n  </property>\n  <property>\n    <name>hive.metastore.kerberos.principal</name>\n    <value>hive/_HOST@PENTAHO.NET</value>\n  </property>\n  <property>\n    <name>hive.server2.authentication.kerberos.principal</name>\n    <value>hive/_HOST@PENTAHO.NET</value>\n  </property>\n  <property>\n    <name>hive.server2.use.SSL</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>spark.shuffle.service.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.strict.checks.orderby.no.limit</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.strict.checks.no.partition.filter</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.strict.checks.type.safety</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.strict.checks.cartesian.product</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.strict.checks.bucketing</name>\n    <value>true</value>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/secured/yarn-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--Autogenerated by Cloudera Manager-->\n<configuration>\n  <property>\n    <name>yarn.acl.enable</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>yarn.admin.acl</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.ha.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.recovery.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.zk-address</name>\n    <value>cdh62secn2.pentaho.net:2181,cdh62secn1.pentaho.net:2181,cdh62secn3.pentaho.net:2181</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.store.class</name>\n    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>\n  </property>\n  <property>\n    <name>yarn.client.failover-sleep-base-ms</name>\n    <value>100</value>\n  </property>\n  <property>\n    <name>yarn.client.failover-sleep-max-ms</name>\n    <value>2000</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.cluster-id</name>\n    <value>yarnRM</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.address.rm218</name>\n    <value>cdh62secn1.pentaho.net:8032</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.scheduler.address.rm218</name>\n    <value>cdh62secn1.pentaho.net:8030</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.resource-tracker.address.rm218</name>\n    <value>cdh62secn1.pentaho.net:8031</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.admin.address.rm218</name>\n    <value>cdh62secn1.pentaho.net:8033</value>\n  </property>\n  <property>\n    
<name>yarn.resourcemanager.webapp.address.rm218</name>\n    <value>cdh62secn1.pentaho.net:8088</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.webapp.https.address.rm218</name>\n    <value>cdh62secn1.pentaho.net:8090</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.address.rm104</name>\n    <value>cdh62secn2.pentaho.net:8032</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.scheduler.address.rm104</name>\n    <value>cdh62secn2.pentaho.net:8030</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.resource-tracker.address.rm104</name>\n    <value>cdh62secn2.pentaho.net:8031</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.admin.address.rm104</name>\n    <value>cdh62secn2.pentaho.net:8033</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.webapp.address.rm104</name>\n    <value>cdh62secn2.pentaho.net:8088</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.webapp.https.address.rm104</name>\n    <value>cdh62secn2.pentaho.net:8090</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.ha.rm-ids</name>\n    <value>rm218,rm104</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.client.thread-count</name>\n    <value>50</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.scheduler.client.thread-count</name>\n    <value>50</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.admin.client.thread-count</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.minimum-allocation-mb</name>\n    <value>1024</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.increment-allocation-mb</name>\n    <value>512</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.maximum-allocation-mb</name>\n    <value>16384</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.minimum-allocation-vcores</name>\n    <value>1</value>\n  </property>\n  
<property>\n    <name>yarn.scheduler.increment-allocation-vcores</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.maximum-allocation-vcores</name>\n    <value>8</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.amliveliness-monitor.interval-ms</name>\n    <value>1000</value>\n  </property>\n  <property>\n    <name>yarn.am.liveness-monitor.expiry-interval-ms</name>\n    <value>600000</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.am.max-attempts</name>\n    <value>2</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.container.liveness-monitor.interval-ms</name>\n    <value>600000</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.nm.liveness-monitor.interval-ms</name>\n    <value>1000</value>\n  </property>\n  <property>\n    <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>\n    <value>600000</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.resource-tracker.client.thread-count</name>\n    <value>50</value>\n  </property>\n  <property>\n    <name>yarn.application.classpath</name>\n    <value>$HADOOP_CLIENT_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.scheduler.class</name>\n    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.capacity.resource-calculator</name>\n    <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.max-completed-applications</name>\n    <value>10000</value>\n  </property>\n  <property>\n    <name>yarn.nodemanager.remote-app-log-dir</name>\n    <value>/tmp/logs</value>\n  </property>\n  <property>\n    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>\n    
<value>logs</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.principal</name>\n    <value>yarn/_HOST@PENTAHO.NET</value>\n  </property>\n  <property>\n    <name>yarn.http.policy</name>\n    <value>HTTPS_ONLY</value>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/unsecured/core-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--Autogenerated by Cloudera Manager-->\n<configuration>\n  <property>\n    <name>fs.defaultFS</name>\n    <value>hdfs://svqxbdcn6cdh514un4.pentahoqa.com:8020</value>\n  </property>\n  <property>\n    <name>fs.trash.interval</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>io.compression.codecs</name>\n    <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,org.apache.hadoop.io.compress.SnappyCodec,org.apache.hadoop.io.compress.Lz4Codec</value>\n  </property>\n  <property>\n    <name>hadoop.security.authentication</name>\n    <value>simple</value>\n  </property>\n  <property>\n    <name>hadoop.security.authorization</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hadoop.rpc.protection</name>\n    <value>authentication</value>\n  </property>\n  <property>\n    <name>hadoop.security.auth_to_local</name>\n    <value>DEFAULT</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.oozie.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.oozie.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.mapred.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.mapred.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.flume.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.flume.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.HTTP.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.HTTP.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hive.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    
<name>hadoop.proxyuser.hive.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hue.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hue.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.httpfs.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.httpfs.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hdfs.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.hdfs.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.yarn.hosts</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.proxyuser.yarn.groups</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>hadoop.security.group.mapping</name>\n    <value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value>\n  </property>\n  <property>\n    <name>hadoop.security.instrumentation.requires.admin</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>net.topology.script.file.name</name>\n    <value>/etc/hadoop/conf.cloudera.yarn/topology.py</value>\n  </property>\n  <property>\n    <name>io.file.buffer.size</name>\n    <value>65536</value>\n  </property>\n  <property>\n    <name>hadoop.ssl.enabled</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hadoop.ssl.require.client.cert</name>\n    <value>false</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.keystores.factory.class</name>\n    <value>org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.server.conf</name>\n    <value>ssl-server.xml</value>\n    <final>true</final>\n  </property>\n  <property>\n    <name>hadoop.ssl.client.conf</name>\n    <value>ssl-client.xml</value>\n    
<final>true</final>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/unsecured/hive-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--Autogenerated by Cloudera Manager-->\n<configuration>\n  <property>\n    <name>hive.metastore.uris</name>\n    <value>thrift://svqxbdcn6cdh514un4.pentahoqa.com:9083</value>\n  </property>\n  <property>\n    <name>hive.metastore.client.socket.timeout</name>\n    <value>300</value>\n  </property>\n  <property>\n    <name>hive.metastore.warehouse.dir</name>\n    <value>/user/hive/warehouse</value>\n  </property>\n  <property>\n    <name>hive.warehouse.subdir.inherit.perms</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.auto.convert.join</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.auto.convert.join.noconditionaltask.size</name>\n    <value>20971520</value>\n  </property>\n  <property>\n    <name>hive.optimize.bucketmapjoin.sortedmerge</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.smbjoin.cache.rows</name>\n    <value>10000</value>\n  </property>\n  <property>\n    <name>hive.server2.logging.operation.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.server2.logging.operation.log.location</name>\n    <value>/var/log/hive/operation_logs</value>\n  </property>\n  <property>\n    <name>mapred.reduce.tasks</name>\n    <value>-1</value>\n  </property>\n  <property>\n    <name>hive.exec.reducers.bytes.per.reducer</name>\n    <value>67108864</value>\n  </property>\n  <property>\n    <name>hive.exec.copyfile.maxsize</name>\n    <value>104857600</value>\n  </property>\n  <property>\n    <name>hive.exec.reducers.max</name>\n    <value>1099</value>\n  </property>\n  <property>\n    <name>hive.vectorized.groupby.checkinterval</name>\n    <value>4096</value>\n  </property>\n  <property>\n    <name>hive.vectorized.groupby.flush.percent</name>\n    <value>0.1</value>\n  </property>\n  <property>\n    <name>hive.compute.query.using.stats</name>\n    <value>false</value>\n  </property>\n  
<property>\n    <name>hive.vectorized.execution.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.vectorized.execution.reduce.enabled</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.merge.mapfiles</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.merge.mapredfiles</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.cbo.enable</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.fetch.task.conversion</name>\n    <value>minimal</value>\n  </property>\n  <property>\n    <name>hive.fetch.task.conversion.threshold</name>\n    <value>268435456</value>\n  </property>\n  <property>\n    <name>hive.limit.pushdown.memory.usage</name>\n    <value>0.1</value>\n  </property>\n  <property>\n    <name>hive.merge.sparkfiles</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.merge.smallfiles.avgsize</name>\n    <value>16777216</value>\n  </property>\n  <property>\n    <name>hive.merge.size.per.task</name>\n    <value>268435456</value>\n  </property>\n  <property>\n    <name>hive.optimize.reducededuplication</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.optimize.reducededuplication.min.reducer</name>\n    <value>4</value>\n  </property>\n  <property>\n    <name>hive.map.aggr</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.map.aggr.hash.percentmemory</name>\n    <value>0.5</value>\n  </property>\n  <property>\n    <name>hive.optimize.sort.dynamic.partition</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>hive.execution.engine</name>\n    <value>mr</value>\n  </property>\n  <property>\n    <name>spark.executor.memory</name>\n    <value>1828926259</value>\n  </property>\n  <property>\n    <name>spark.driver.memory</name>\n    <value>966367641</value>\n  </property>\n  <property>\n    <name>spark.executor.cores</name>\n    
<value>4</value>\n  </property>\n  <property>\n    <name>spark.yarn.driver.memoryOverhead</name>\n    <value>102</value>\n  </property>\n  <property>\n    <name>spark.yarn.executor.memoryOverhead</name>\n    <value>307</value>\n  </property>\n  <property>\n    <name>spark.dynamicAllocation.enabled</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>spark.dynamicAllocation.initialExecutors</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>spark.dynamicAllocation.minExecutors</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>spark.dynamicAllocation.maxExecutors</name>\n    <value>2147483647</value>\n  </property>\n  <property>\n    <name>hive.metastore.execute.setugi</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.support.concurrency</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>hive.zookeeper.quorum</name>\n    <value>svqxbdcn6cdh514un1.pentahoqa.com,svqxbdcn6cdh514un5.pentahoqa.com,svqxbdcn6cdh514un4.pentahoqa.com,svqxbdcn6cdh514un2.pentahoqa.com,svqxbdcn6cdh514un3.pentahoqa.com</value>\n  </property>\n  <property>\n    <name>hive.zookeeper.client.port</name>\n    <value>2181</value>\n  </property>\n  <property>\n    <name>hive.zookeeper.namespace</name>\n    <value>hive_zookeeper_namespace_hive</value>\n  </property>\n  <property>\n    <name>hbase.zookeeper.quorum</name>\n    <value>svqxbdcn6cdh514un1.pentahoqa.com,svqxbdcn6cdh514un5.pentahoqa.com,svqxbdcn6cdh514un4.pentahoqa.com,svqxbdcn6cdh514un2.pentahoqa.com,svqxbdcn6cdh514un3.pentahoqa.com</value>\n  </property>\n  <property>\n    <name>hbase.zookeeper.property.clientPort</name>\n    <value>2181</value>\n  </property>\n  <property>\n    <name>hive.cluster.delegation.token.store.class</name>\n    <value>org.apache.hadoop.hive.thrift.MemoryTokenStore</value>\n  </property>\n  <property>\n    <name>hive.server2.enable.doAs</name>\n    <value>true</value>\n  </property>\n  <property>\n    
<name>hive.server2.use.SSL</name>\n    <value>false</value>\n  </property>\n  <property>\n    <name>spark.shuffle.service.enabled</name>\n    <value>true</value>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/unsecured/oozie-default.xml",
    "content": "<?xml version=\"1.0\"?>\n<?xml-stylesheet type=\"text/xsl\" href=\"configuration.xsl\"?>\n<!--\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing, software\n  distributed under the License is distributed on an \"AS IS\" BASIS,\n  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n  See the License for the specific language governing permissions and\n  limitations under the License.\n-->\n<configuration>\n\n    <!-- ************************** VERY IMPORTANT  ************************** -->\n    <!-- This file is in the Oozie configuration directory only for reference. -->\n    <!-- It is not loaded by Oozie, Oozie uses its own privatecopy.            
-->\n    <!-- ************************** VERY IMPORTANT  ************************** -->\n\n    <property>\n        <name>oozie.output.compression.codec</name>\n        <value>gz</value>\n        <description>\n            The name of the compression codec to use.\n            The implementation class for the codec needs to be specified through another property oozie.compression.codecs.\n            You can specify a comma separated list of 'Codec_name'='Codec_class' for oozie.compression.codecs\n            where codec class implements the interface org.apache.oozie.compression.CompressionCodec.\n            If oozie.compression.codecs is not specified, gz codec implementation is used by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.external_monitoring.enable</name>\n        <value>false</value>\n        <description>\n            If the oozie functional metrics needs to be exposed to the metrics-server backend, set it to true\n            If set to true, the following properties has to be specified : oozie.metrics.server.name,\n            oozie.metrics.host, oozie.metrics.prefix, oozie.metrics.report.interval.sec, oozie.metrics.port\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.external_monitoring.type</name>\n        <value>graphite</value>\n        <description>\n            The name of the server to which we want to send the metrics, would be graphite or ganglia.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.external_monitoring.address</name>\n        <value>http://localhost:2020</value>\n    </property>\n\n    <property>\n        <name>oozie.external_monitoring.metricPrefix</name>\n        <value>oozie</value>\n    </property>\n\n    <property>\n        <name>oozie.external_monitoring.reporterIntervalSecs</name>\n        <value>60</value>\n    </property>\n\n    <property>\n        <name>oozie.jmx_monitoring.enable</name>\n        
<value>false</value>\n        <description>\n            If the oozie functional metrics needs to be exposed via JMX interface, set it to true.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.uber.jar.enable</name>\n        <value>false</value>\n        <description>\n            If true, enables the oozie.mapreduce.uber.jar mapreduce workflow configuration property, which is used to specify an\n            uber jar in HDFS.  Submitting a workflow with an uber jar requires at least Hadoop 2.2.0 or 1.2.0.  If false, workflows\n            which specify the oozie.mapreduce.uber.jar configuration property will fail.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.dependency.deduplicate</name>\n        <value>false</value>\n        <description>\n            If true, then Oozie will remove all the duplicates from the list of dependencies when they are passed to\n            the jobtracker. Higher priority dependencies will remain as the following:\n            Original list: \"/a/a.jar#a.jar,/a/b.jar#b.jar,/b/a.jar,/b/b.jar,/c/d.jar\"\n            Deduplicated list: \"/a/a.jar#a.jar,/a/b.jar#b.jar,/c/d.jar\"\n            With other words, priority order is: action jar > user workflow libs > action libs > system lib,\n            where dependency with greater prio is used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.processing.timezone</name>\n        <value>UTC</value>\n        <description>\n            Oozie server timezone. Valid values are UTC and GMT(+/-)####, for example 'GMT+0530' would be India\n            timezone. All dates parsed and genered dates by Oozie Coordinator/Bundle will be done in the specified\n            timezone. The default value of 'UTC' should not be changed under normal circumtances. 
If for any reason\n            is changed, note that GMT(+/-)#### timezones do not observe DST changes.\n        </description>\n    </property>\n\n    <!-- Base Oozie URL: <SCHEME>://<HOST>:<PORT>/<CONTEXT> -->\n\n    <property>\n        <name>oozie.base.url</name>\n        <value>http://localhost:8080/oozie</value>\n        <description>\n            Base Oozie URL.\n        </description>\n    </property>\n\n    <!-- Services -->\n\n    <property>\n        <name>oozie.system.id</name>\n        <value>oozie-${user.name}</value>\n        <description>\n            The Oozie system ID.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.systemmode</name>\n        <value>NORMAL</value>\n        <description>\n            System mode for  Oozie at startup.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.delete.runtime.dir.on.shutdown</name>\n        <value>true</value>\n        <description>\n            If the runtime directory should be kept after Oozie shutdowns down.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.services</name>\n        <value>\n            org.apache.oozie.service.SchedulerService,\n            org.apache.oozie.service.MetricsInstrumentationService,\n            org.apache.oozie.service.MemoryLocksService,\n            org.apache.oozie.service.UUIDService,\n            org.apache.oozie.service.ELService,\n            org.apache.oozie.service.AuthorizationService,\n            org.apache.oozie.service.UserGroupInformationService,\n            org.apache.oozie.service.HadoopAccessorService,\n            org.apache.oozie.service.JobsConcurrencyService,\n            org.apache.oozie.service.URIHandlerService,\n            org.apache.oozie.service.DagXLogInfoService,\n            org.apache.oozie.service.SchemaService,\n            org.apache.oozie.service.LiteWorkflowAppService,\n            org.apache.oozie.service.JPAService,\n            
org.apache.oozie.service.StoreService,\n            org.apache.oozie.service.DBLiteWorkflowStoreService,\n            org.apache.oozie.service.CallbackService,\n            org.apache.oozie.service.ActionService,\n            org.apache.oozie.service.ShareLibService,\n            org.apache.oozie.service.CallableQueueService,\n            org.apache.oozie.service.ActionCheckerService,\n            org.apache.oozie.service.RecoveryService,\n            org.apache.oozie.service.PurgeService,\n            org.apache.oozie.service.CoordinatorEngineService,\n            org.apache.oozie.service.BundleEngineService,\n            org.apache.oozie.service.DagEngineService,\n            org.apache.oozie.service.CoordMaterializeTriggerService,\n            org.apache.oozie.service.StatusTransitService,\n            org.apache.oozie.service.PauseTransitService,\n            org.apache.oozie.service.GroupsService,\n            org.apache.oozie.service.ProxyUserService,\n            org.apache.oozie.service.XLogStreamingService,\n            org.apache.oozie.service.JvmPauseMonitorService,\n            org.apache.oozie.service.SparkConfigurationService,\n            org.apache.oozie.service.SchemaCheckerService\n        </value>\n        <description>\n            All services to be created and managed by Oozie Services singleton.\n            Class names must be separated by commas.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.services.ext</name>\n        <value> </value>\n        <description>\n            To add/replace services defined in 'oozie.services' with custom implementations.\n            Class names must be separated by commas.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.XLogStreamingService.buffer.len</name>\n        <value>4096</value>\n        <description>4K buffer for streaming the logs progressively\n        </description>\n    </property>\n    <property>\n        
<name>oozie.service.XLogStreamingService.error.buffer.len</name>\n        <value>2048</value>\n        <description>2K buffer for streaming the error logs\n            progressively\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.XLogStreamingService.audit.buffer.len</name>\n        <value>3</value>\n        <description>Number of lines for streaming the audit logs\n            progressively\n        </description>\n    </property>\n\n    <!-- HCatAccessorService -->\n    <property>\n        <name>oozie.service.HCatAccessorService.jmsconnections</name>\n        <value>\n            default=java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://localhost:61616;connectionFactoryNames#ConnectionFactory\n        </value>\n        <description>\n            Specify the map  of endpoints to JMS configuration properties. In general, endpoint\n            identifies the HCatalog server URL. \"default\" is used if no endpoint is mentioned\n            in the query. If some JMS property is not defined, the system will use the property\n            defined jndi.properties. jndi.properties files is retrieved from the application classpath.\n            Mapping rules can also be provided for mapping Hcatalog servers to corresponding JMS providers.\n            hcat://${1}.${2}.server.com:8020=java.naming.factory.initial#Dummy.Factory;java.naming.provider.url#tcp://broker.${2}:61616\n        </description>\n    </property>\n\n    <!-- HCatAccessorService -->\n    <property>\n        <name>oozie.service.HCatAccessorService.jms.use.canonical.hostname</name>\n        <value>false</value>\n        <description>The JMS messages published from a HCat server usually contains the canonical hostname of the HCat server\n            in standalone mode or the canonical name of the VIP in a case of multiple nodes in a HA setup. 
This setting is used\n            to translate the HCat server hostname or its aliases specified by the user in the HCat URIs of the coordinator dependencies\n            to its canonical name so that they can be exactly matched with the JMS dependency availability notifications.\n        </description>\n    </property>\n\n    <!-- TopicService -->\n\n    <property>\n        <name>oozie.service.JMSTopicService.topic.name</name>\n        <value>\n            default=${username}\n        </value>\n        <description>\n            Topic options are ${username} or ${jobId} or a fixed string which can be specified as default or for a\n            particular job type.\n            For e.g To have a fixed string topic for workflows, coordinators and bundles,\n            specify in the following comma-separated format: {jobtype1}={some_string1}, {jobtype2}={some_string2}\n            where job type can be WORKFLOW, COORDINATOR or BUNDLE.\n            e.g. Following defines topic for workflow job, workflow action, coordinator job, coordinator action,\n            bundle job and bundle action\n            WORKFLOW=workflow,\n            COORDINATOR=coordinator,\n            BUNDLE=bundle\n            For jobs with no defined topic, default topic will be ${username}\n        </description>\n    </property>\n\n    <!-- JMS Producer connection -->\n    <property>\n        <name>oozie.jms.producer.connection.properties</name>\n        <value>java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://localhost:61616;connectionFactoryNames#ConnectionFactory</value>\n    </property>\n\n    <!-- JMSAccessorService -->\n    <property>\n        <name>oozie.service.JMSAccessorService.connectioncontext.impl</name>\n        <value>\n            org.apache.oozie.jms.DefaultConnectionContext\n        </value>\n        <description>\n            Specifies the Connection Context implementation\n        </description>\n    
</property>\n\n\n    <!-- ConfigurationService -->\n\n    <property>\n        <name>oozie.service.ConfigurationService.ignore.system.properties</name>\n        <value>\n            oozie.service.AuthorizationService.security.enabled\n        </value>\n        <description>\n            Specifies \"oozie.*\" properties that cannot be overridden via Java system properties.\n            Property names must be separated by commas.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ConfigurationService.verify.available.properties</name>\n        <value>true</value>\n        <description>\n            Specifies whether the available configurations check is enabled or not.\n        </description>\n    </property>\n\n    <!-- SchedulerService -->\n\n    <property>\n        <name>oozie.service.SchedulerService.threads</name>\n        <value>10</value>\n        <description>\n            The number of threads to be used by the SchedulerService to run daemon tasks.\n            If maxed out, scheduled daemon tasks will be queued up and delayed until threads become available.\n        </description>\n    </property>\n\n    <!--  AuthorizationService -->\n\n    <property>\n        <name>oozie.service.AuthorizationService.authorization.enabled</name>\n        <value>false</value>\n        <description>\n            Specifies whether security (user name/admin role) is enabled or not.\n            If disabled, any user can manage the Oozie system and manage any job.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AuthorizationService.default.group.as.acl</name>\n        <value>false</value>\n        <description>\n            Enables old behavior where the User's default group is the job's ACL.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AuthorizationService.admin.users</name>\n        <value></value>\n        <description>\n            Comma separated list of users with 
admin access for the Authorization service.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AuthorizationService.system.info.authorized.users</name>\n        <value></value>\n        <description>\n            Comma separated list of users authorized for web service calls to get system configuration.\n        </description>\n    </property>\n\n    <!-- InstrumentationService -->\n\n    <property>\n        <name>oozie.service.InstrumentationService.logging.interval</name>\n        <value>60</value>\n        <description>\n            Interval, in seconds, at which instrumentation should be logged by the InstrumentationService.\n            If set to 0 it will not log instrumentation data.\n        </description>\n    </property>\n\n    <!-- PurgeService -->\n    <property>\n        <name>oozie.service.PurgeService.older.than</name>\n        <value>30</value>\n        <description>\n            Completed workflow jobs older than this value, in days, will be purged by the PurgeService.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.PurgeService.coord.older.than</name>\n        <value>7</value>\n        <description>\n            Completed coordinator jobs older than this value, in days, will be purged by the PurgeService.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.PurgeService.bundle.older.than</name>\n        <value>7</value>\n        <description>\n            Completed bundle jobs older than this value, in days, will be purged by the PurgeService.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.PurgeService.purge.old.coord.action</name>\n        <value>false</value>\n        <description>\n            Whether to purge completed workflows and their corresponding coordinator actions\n            of long running coordinator jobs if the completed workflow jobs are older than the value\n            
specified in oozie.service.PurgeService.older.than.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.PurgeService.purge.limit</name>\n        <value>100</value>\n        <description>\n            Batch size of individual DB operations used for building the list of items\n            to be purged and performing actual purge.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.PurgeService.purge.interval</name>\n        <value>3600</value>\n        <description>\n            Interval at which the purge service will run, in seconds.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.PurgeService.enable.command.line</name>\n        <value>true</value>\n        <description>\n            Enable/Disable oozie admin purge command. By default, it is enabled.\n        </description>\n    </property>\n\n    <!-- RecoveryService -->\n\n    <property>\n        <name>oozie.service.RecoveryService.wf.actions.older.than</name>\n        <value>120</value>\n        <description>\n            Age of the actions which are eligible to be queued for recovery, in seconds.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.RecoveryService.wf.actions.created.time.interval</name>\n        <value>7</value>\n        <description>\n            Created time period of the actions which are eligible to be queued for recovery in days.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.RecoveryService.callable.batch.size</name>\n        <value>10</value>\n        <description>\n            This value determines the number of callable which will be batched together\n            to be executed by a single thread.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.RecoveryService.push.dependency.interval</name>\n        <value>200</value>\n        <description>\n            This 
value determines the delay for push missing dependency command queueing\n            in the RecoveryService.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.RecoveryService.interval</name>\n        <value>60</value>\n        <description>\n            Interval at which the RecoveryService will run, in seconds.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.RecoveryService.coord.older.than</name>\n        <value>600</value>\n        <description>\n            Age of the Coordinator jobs or actions which are eligible to be queued for recovery, in seconds.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.RecoveryService.bundle.older.than</name>\n        <value>600</value>\n        <description>\n            Age of the Bundle jobs which are eligible to be queued for recovery, in seconds.\n        </description>\n    </property>\n\n    <!-- CallableQueueService -->\n\n    <property>\n        <name>oozie.service.CallableQueueService.queue.size</name>\n        <value>10000</value>\n        <description>Max callable queue size</description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.threads</name>\n        <value>10</value>\n        <description>Number of threads used for executing callables</description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.delayedcallable.threads</name>\n        <value>1</value>\n        <description>\n            The number of threads where delayed tasks are executed. Upon expiration, the tasks are immediately\n            inserted into the main queue to properly handle priorities. 
This means that no actual business logic\n            is executed in this thread pool, so under normal circumstances, this value can be set to a low number.\n\n            Note that this property is completely unrelated to oozie.service.SchedulerService.threads, which\n            tells how many scheduled background tasks can run in parallel at the same time (like PurgeService,\n            StatusTransitService, etc).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.queue.newImpl</name>\n        <value>true</value>\n        <description>\n            If set to true, then CallableQueueService will use a faster, less CPU-intensive queuing mechanism to execute\n            asynchronous tasks internally.\n            The old implementation generates noticeable CPU load even if Oozie is completely idle, especially when\n            oozie.service.CallableQueueService.threads is set to a large number. The previous queuing mechanism is kept as a\n            fallback option.\n            This is an experimental feature in Oozie 5.1.0 that needs to be re-evaluated upon an upcoming minor release,\n            meaning the old implementation and this feature flag will also be removed.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.queue.awaitTermination.timeout.seconds</name>\n        <value>30</value>\n        <description>\n            Number of seconds to await termination of ThreadPoolExecutor instances when CallableQueueService#destroy()\n            is called.\n            The more elements you tend to have in your callable queue, the longer you want CallableQueueService to wait\n            before shutting down its thread pools.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.callable.concurrency</name>\n        <value>3</value>\n        <description>\n            Maximum 
concurrency for a given callable type.\n            Each command is a callable type (submit, start, run, signal, job, jobs, suspend, resume, etc).\n            Each action type is a callable type (Map-Reduce, Pig, SSH, FS, sub-workflow, etc).\n            All commands that use action executors (action-start, action-end, action-kill and action-check) use\n            the action type as the callable type.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.callable.next.eligible</name>\n        <value>true</value>\n        <description>\n            If true, when a callable in the queue has already reached max concurrency,\n            Oozie will continuously look for the next one which has not yet reached max concurrency.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.InterruptMapMaxSize</name>\n        <value>500</value>\n        <description>\n            Maximum size of the Interrupt Map; an interrupt element will not be inserted into the map if the size is exceeded.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallableQueueService.InterruptTypes</name>\n        <value>kill,resume,suspend,bundle_kill,bundle_resume,bundle_suspend,coord_kill,coord_change,coord_resume,coord_suspend</value>\n        <description>\n            The types of XCommands that are considered to be of Interrupt type.\n        </description>\n    </property>\n\n    <!--  CoordMaterializeTriggerService -->\n\n    <property>\n        <name>oozie.service.CoordMaterializeTriggerService.lookup.interval\n        </name>\n        <value>300</value>\n        <description> Coordinator Job Lookup interval (in seconds).\n        </description>\n    </property>\n\n    <!-- Enable this if you want a different scheduling interval for CoordMaterializeTriggerService.\n    By default it will use the lookup interval as the scheduling interval\n    <property>\n        
<name>oozie.service.CoordMaterializeTriggerService.scheduling.interval\n        </name>\n        <value>300</value>\n        <description> The frequency at which the CoordMaterializeTriggerService will run.</description>\n    </property>\n    -->\n\n    <property>\n        <name>oozie.service.CoordMaterializeTriggerService.materialization.window\n        </name>\n        <value>3600</value>\n        <description> The Coordinator Job Lookup command materializes each\n            job for the next \"window\" duration\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CoordMaterializeTriggerService.callable.batch.size</name>\n        <value>10</value>\n        <description>\n            This value determines the number of callables which will be batched together\n            to be executed by a single thread.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CoordMaterializeTriggerService.materialization.system.limit</name>\n        <value>50</value>\n        <description>\n            This value determines the number of coordinator jobs to be materialized at a given time.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.normal.default.timeout\n        </name>\n        <value>120</value>\n        <description>Default timeout for a coordinator action input check (in minutes) for a normal job.\n            -1 means infinite timeout</description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.default.max.timeout\n        </name>\n        <value>86400</value>\n        <description>Default maximum timeout for a coordinator action input check (in minutes). 
86400 minutes = 60 days\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.input.check.requeue.interval\n        </name>\n        <value>60000</value>\n        <description>Command re-queue interval for coordinator data input check (in milliseconds).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.input.check.requeue.interval.additional.delay</name>\n        <value>0</value>\n        <description>This value (in seconds) will be added to oozie.service.coord.input.check.requeue.interval and the resulting value\n            will be the requeue interval for the actions which are waiting for a long time without any input.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.push.check.requeue.interval\n        </name>\n        <value>600000</value>\n        <description>Command re-queue interval for push dependencies (in milliseconds).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.default.concurrency\n        </name>\n        <value>1</value>\n        <description>Default concurrency for a coordinator job, determining the maximum number of actions that should\n            be executed at the same time. -1 means infinite concurrency.</description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.default.throttle\n        </name>\n        <value>12</value>\n        <description>Default throttle for a coordinator job, determining the maximum number of actions that should\n            be in WAITING state at the same time.</description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.materialization.throttling.factor\n        </name>\n        <value>0.05</value>\n        <description>Determines the maximum number of actions that should be in WAITING state for a single job at any time. 
The value is calculated by\n            this factor X the total queue size.</description>\n    </property>\n\n    <property>\n        <name>oozie.service.coord.check.maximum.frequency</name>\n        <value>true</value>\n        <description>\n            When true, Oozie will reject any coordinators with a frequency faster than 5 minutes.  It is not recommended to disable\n            this check or submit coordinators with frequencies faster than 5 minutes: doing so can cause unintended behavior and\n            additional system stress.\n        </description>\n    </property>\n\n    <!-- ELService -->\n    <!--  List of supported groups for ELService -->\n    <property>\n        <name>oozie.service.ELService.groups</name>\n        <value>job-submit,workflow,wf-sla-submit,coord-job-submit-freq,coord-job-submit-nofuncs,coord-job-submit-data,coord-job-submit-instances,coord-sla-submit,coord-action-create,coord-action-create-inst,coord-sla-create,coord-action-start,coord-job-wait-timeout,bundle-submit,coord-job-submit-initial-instance</value>\n        <description>List of groups for different ELServices</description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.constants.job-submit</name>\n        <value>\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.job-submit</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.job-submit</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience 
property to add extensions without having to include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.job-submit</name>\n        <value> </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions without having to include all the built in ones.\n        </description>\n    </property>\n\n    <!-- Workflow specifics -->\n    <property>\n        <name>oozie.service.ELService.constants.workflow</name>\n        <value>\n            KB=org.apache.oozie.util.ELConstantsFunctions#KB,\n            MB=org.apache.oozie.util.ELConstantsFunctions#MB,\n            GB=org.apache.oozie.util.ELConstantsFunctions#GB,\n            TB=org.apache.oozie.util.ELConstantsFunctions#TB,\n            PB=org.apache.oozie.util.ELConstantsFunctions#PB,\n            RECORDS=org.apache.oozie.action.hadoop.HadoopELFunctions#RECORDS,\n            MAP_IN=org.apache.oozie.action.hadoop.HadoopELFunctions#MAP_IN,\n            MAP_OUT=org.apache.oozie.action.hadoop.HadoopELFunctions#MAP_OUT,\n            REDUCE_IN=org.apache.oozie.action.hadoop.HadoopELFunctions#REDUCE_IN,\n            REDUCE_OUT=org.apache.oozie.action.hadoop.HadoopELFunctions#REDUCE_OUT,\n            GROUPS=org.apache.oozie.action.hadoop.HadoopELFunctions#GROUPS\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.workflow</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all 
the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.workflow</name>\n        <value>\n            firstNotNull=org.apache.oozie.util.ELConstantsFunctions#firstNotNull,\n            concat=org.apache.oozie.util.ELConstantsFunctions#concat,\n            replaceAll=org.apache.oozie.util.ELConstantsFunctions#replaceAll,\n            appendAll=org.apache.oozie.util.ELConstantsFunctions#appendAll,\n            trim=org.apache.oozie.util.ELConstantsFunctions#trim,\n            timestamp=org.apache.oozie.util.ELConstantsFunctions#timestamp,\n            urlEncode=org.apache.oozie.util.ELConstantsFunctions#urlEncode,\n            toJsonStr=org.apache.oozie.util.ELConstantsFunctions#toJsonStr,\n            toPropertiesStr=org.apache.oozie.util.ELConstantsFunctions#toPropertiesStr,\n            toConfigurationStr=org.apache.oozie.util.ELConstantsFunctions#toConfigurationStr,\n            wf:id=org.apache.oozie.DagELFunctions#wf_id,\n            wf:name=org.apache.oozie.DagELFunctions#wf_name,\n            wf:appPath=org.apache.oozie.DagELFunctions#wf_appPath,\n            wf:conf=org.apache.oozie.DagELFunctions#wf_conf,\n            wf:user=org.apache.oozie.DagELFunctions#wf_user,\n            wf:group=org.apache.oozie.DagELFunctions#wf_group,\n            wf:callback=org.apache.oozie.DagELFunctions#wf_callback,\n            wf:transition=org.apache.oozie.DagELFunctions#wf_transition,\n            wf:lastErrorNode=org.apache.oozie.DagELFunctions#wf_lastErrorNode,\n            wf:errorCode=org.apache.oozie.DagELFunctions#wf_errorCode,\n            wf:errorMessage=org.apache.oozie.DagELFunctions#wf_errorMessage,\n            wf:run=org.apache.oozie.DagELFunctions#wf_run,\n            wf:actionData=org.apache.oozie.DagELFunctions#wf_actionData,\n            wf:actionExternalId=org.apache.oozie.DagELFunctions#wf_actionExternalId,\n            
wf:actionTrackerUri=org.apache.oozie.DagELFunctions#wf_actionTrackerUri,\n            wf:actionExternalStatus=org.apache.oozie.DagELFunctions#wf_actionExternalStatus,\n            hadoop:counters=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_counters,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf,\n            fs:exists=org.apache.oozie.action.hadoop.FsELFunctions#fs_exists,\n            fs:isDir=org.apache.oozie.action.hadoop.FsELFunctions#fs_isDir,\n            fs:dirSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_dirSize,\n            fs:fileSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_fileSize,\n            fs:blockSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_blockSize,\n            hcat:exists=org.apache.oozie.coord.HCatELFunctions#hcat_exists\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.WorkflowAppService.WorkflowDefinitionMaxLength</name>\n        <value>100000</value>\n        <description>\n            The maximum length of the workflow definition in bytes\n            An error will be reported if the length exceeds the given maximum\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.workflow</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!-- Resolve SLA information during Workflow job submission -->\n    <property>\n        <name>oozie.service.ELService.constants.wf-sla-submit</name>\n        <value>\n            
MINUTES=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_MINUTES,\n            HOURS=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_HOURS,\n            DAYS=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_DAYS\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.wf-sla-submit</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.wf-sla-submit</name>\n        <value> </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n    <property>\n        <name>oozie.service.ELService.ext.functions.wf-sla-submit</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!-- Coordinator specifics -->\n    <!-- Phase 1 resolution during job submission -->\n    <!-- EL Evaluator setup to resolve mainly frequency tags -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-job-submit-freq</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    
</property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-job-submit-freq</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-job-submit-freq</name>\n        <value>\n            coord:days=org.apache.oozie.coord.CoordELFunctions#ph1_coord_days,\n            coord:months=org.apache.oozie.coord.CoordELFunctions#ph1_coord_months,\n            coord:hours=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hours,\n            coord:minutes=org.apache.oozie.coord.CoordELFunctions#ph1_coord_minutes,\n            coord:endOfDays=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfDays,\n            coord:endOfMonths=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfMonths,\n            coord:endOfWeeks=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfWeeks,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-job-submit-initial-instance</name>\n        <value>\n            ${oozie.service.ELService.functions.coord-job-submit-nofuncs},\n            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset\n        
</value>\n        <description>\n            EL functions for coord job submit initial instance, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-job-submit-freq</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.constants.coord-job-wait-timeout</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-job-wait-timeout</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions without having to include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-job-wait-timeout</name>\n        <value>\n            coord:days=org.apache.oozie.coord.CoordELFunctions#ph1_coord_days,\n            coord:months=org.apache.oozie.coord.CoordELFunctions#ph1_coord_months,\n            coord:hours=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hours,\n            coord:minutes=org.apache.oozie.coord.CoordELFunctions#ph1_coord_minutes,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is 
[PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-job-wait-timeout</name>\n        <value> </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions without having to include all the built in ones.\n        </description>\n    </property>\n\n    <!-- EL Evaluator setup to resolve mainly all constants/variables - no EL functions are resolved -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-job-submit-nofuncs</name>\n        <value>\n            MINUTE=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTE,\n            HOUR=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOUR,\n            DAY=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAY,\n            MONTH=org.apache.oozie.coord.CoordELConstants#SUBMIT_MONTH,\n            YEAR=org.apache.oozie.coord.CoordELConstants#SUBMIT_YEAR\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-job-submit-nofuncs</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-job-submit-nofuncs</name>\n        <value>\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            
hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-job-submit-nofuncs</name>\n        <value> </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!-- EL Evaluator setup to **check** whether instances/start-instance/end-instances are valid\n     no EL functions will be resolved -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-job-submit-instances</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-job-submit-instances</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-job-submit-instances</name>\n        <value>\n            coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hoursInDay_echo,\n            coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph1_coord_daysInMonth_echo,\n            
coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_tzOffset_echo,\n            coord:current=org.apache.oozie.coord.CoordELFunctions#ph1_coord_current_echo,\n            coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_currentRange_echo,\n            coord:offset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_offset_echo,\n            coord:latest=org.apache.oozie.coord.CoordELFunctions#ph1_coord_latest_echo,\n            coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_latestRange_echo,\n            coord:future=org.apache.oozie.coord.CoordELFunctions#ph1_coord_future_echo,\n            coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_futureRange_echo,\n            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,\n            coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph1_coord_absolute_echo,\n            coord:endOfMonths=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfMonths_echo,\n            coord:endOfWeeks=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfWeeks_echo,\n            coord:endOfDays=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfDays_echo,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf,\n            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        
<name>oozie.service.ELService.ext.functions.coord-job-submit-instances</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!-- EL Evaluator setup to **check** whether dataIn and dataOut are valid\n     no EL functions will be resolved -->\n\n    <property>\n        <name>oozie.service.ELService.constants.coord-job-submit-data</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-job-submit-data</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-job-submit-data</name>\n        <value>\n            coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataIn_echo,\n            coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo,\n            coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_wrap,\n            coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap,\n            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo,\n        
    coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,\n            coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo,\n            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo,\n            coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseIn_echo,\n            coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo,\n            coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableIn_echo,\n            coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo,\n            coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionFilter_echo,\n            coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMin_echo,\n            coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMax_echo,\n            coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitions_echo,\n            coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitions_echo,\n            coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitionValue_echo,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-job-submit-data</name>\n        <value>\n        </value>\n        <description>\n       
     EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!-- Resolve SLA information during Coordinator job submission -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-sla-submit</name>\n        <value>\n            MINUTES=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTES,\n            HOURS=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOURS,\n            DAYS=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAYS\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-sla-submit</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.bundle-submit</name>\n        <value>bundle:conf=org.apache.oozie.bundle.BundleELFunctions#bundle_conf</value>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-sla-submit</name>\n        <value>\n            coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo,\n            coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_fixed,\n            coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap,\n            
coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo,\n            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,\n            coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_epochTime_echo,\n            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo,\n            coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo,\n            coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo,\n            coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitions_echo,\n            coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitionValue_echo,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-sla-submit</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!--  Action creation for coordinator -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-action-create</name>\n        
<value>\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-action-create</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-action-create</name>\n        <value>\n            coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph2_coord_hoursInDay,\n            coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph2_coord_daysInMonth,\n            coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_tzOffset,\n            coord:current=org.apache.oozie.coord.CoordELFunctions#ph2_coord_current,\n            coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_currentRange,\n            coord:offset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_offset,\n            coord:latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo,\n            coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latestRange_echo,\n            coord:future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo,\n            coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_futureRange_echo,\n            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actionId,\n            coord:name=org.apache.oozie.coord.CoordELFunctions#ph2_coord_name,\n            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime,\n            
coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_epochTime,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_echo,\n            coord:endOfMonths=org.apache.oozie.coord.CoordELFunctions#ph2_coord_endOfMonths_echo,\n            coord:endOfWeeks=org.apache.oozie.coord.CoordELFunctions#ph2_coord_endOfWeeks_echo,\n            coord:endOfDays=org.apache.oozie.coord.CoordELFunctions#ph2_coord_endOfDays_echo,\n            coord:absoluteRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_range,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-action-create</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n\n    <!--  Action creation for coordinator used to only evaluate instance number like ${current (daysInMonth())}. 
current will be echo-ed -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-action-create-inst</name>\n        <value>\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-action-create-inst</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-action-create-inst</name>\n        <value>\n            coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph2_coord_hoursInDay,\n            coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph2_coord_daysInMonth,\n            coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_tzOffset,\n            coord:current=org.apache.oozie.coord.CoordELFunctions#ph2_coord_current_echo,\n            coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_currentRange_echo,\n            coord:offset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_offset_echo,\n            coord:latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo,\n            coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latestRange_echo,\n            coord:future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo,\n            coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_futureRange_echo,\n            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime,\n            
coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_epochTime,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_echo,\n            coord:absoluteRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_range,\n            coord:endOfMonths=org.apache.oozie.coord.CoordELFunctions#ph2_coord_endOfMonths_echo,\n            coord:endOfWeeks=org.apache.oozie.coord.CoordELFunctions#ph2_coord_endOfWeeks_echo,\n            coord:endOfDays=org.apache.oozie.coord.CoordELFunctions#ph2_coord_endOfDays_echo,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf,\n            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-action-create-inst</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!-- Resolve SLA information during Action creation/materialization -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-sla-create</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    
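<!-- Illustrative example (not part of the default configuration; the prefix, class and\n         constant names below are hypothetical): an extension EL constant is declared in the\n         [PREFIX:]NAME=CLASS#CONSTANT format described above, for instance:\n\n             <property>\n                 <name>oozie.service.ELService.ext.constants.coord-sla-submit</name>\n                 <value>myext:WEEKS=com.example.oozie.MyELConstants#SUBMIT_WEEKS</value>\n             </property>\n    -->\n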
<property>\n        <name>oozie.service.ELService.ext.constants.coord-sla-create</name>\n        <value>\n            MINUTES=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTES,\n            HOURS=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOURS,\n            DAYS=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAYS</value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-sla-create</name>\n        <value>\n            coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut,\n            coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_nominalTime,\n            coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actualTime,\n            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset,\n            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime,\n            coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_epochTime,\n            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actionId,\n            coord:name=org.apache.oozie.coord.CoordELFunctions#ph2_coord_name,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseOut,\n            coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableOut,\n            
coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitions,\n            coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitionValue,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-sla-create</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <!--  Action start for coordinator -->\n    <property>\n        <name>oozie.service.ELService.constants.coord-action-start</name>\n        <value>\n        </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.constants.coord-action-start</name>\n        <value> </value>\n        <description>\n            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.functions.coord-action-start</name>\n        <value>\n            coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph3_coord_hoursInDay,\n            
coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph3_coord_daysInMonth,\n            coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_tzOffset,\n            coord:latest=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latest,\n            coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latestRange,\n            coord:future=org.apache.oozie.coord.CoordELFunctions#ph3_coord_future,\n            coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph3_coord_futureRange,\n            coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataIn,\n            coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut,\n            coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_nominalTime,\n            coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_actualTime,\n            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateOffset,\n            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateTzOffset,\n            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_formatTime,\n            coord:epochTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_epochTime,\n            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph3_coord_actionId,\n            coord:name=org.apache.oozie.coord.CoordELFunctions#ph3_coord_name,\n            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,\n            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,\n            coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseIn,\n            coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseOut,\n            coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableIn,\n            coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableOut,\n            
coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionFilter,\n            coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionMin,\n            coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionMax,\n            coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitions,\n            coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitions,\n            coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitionValue,\n            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.ext.functions.coord-action-start</name>\n        <value>\n        </value>\n        <description>\n            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.\n            This property is a convenience property to add extensions to the built in executors without having to\n            include all the built in ones.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ELService.latest-el.use-current-time</name>\n        <value>false</value>\n        <description>\n            Determine whether to use the current time to determine the latest dependency or the action creation time.\n            This is for backward compatibility with older oozie behaviour.\n        </description>\n    </property>\n\n    <!-- UUIDService -->\n\n    <property>\n        <name>oozie.service.UUIDService.generator</name>\n        <value>counter</value>\n        <description>\n            random : generated UUIDs will be random strings.\n            counter: generated UUIDs 
will be a counter postfixed with the system startup time.\n        </description>\n    </property>\n\n    <!-- DBLiteWorkflowStoreService -->\n\n    <property>\n        <name>oozie.service.DBLiteWorkflowStoreService.status.metrics.collection.interval</name>\n        <value>5</value>\n        <description> Workflow Status metrics collection interval in minutes.</description>\n    </property>\n\n    <property>\n        <name>oozie.service.DBLiteWorkflowStoreService.status.metrics.window</name>\n        <value>3600</value>\n        <description>\n            Workflow Status metrics collection window in seconds. Workflow status will be instrumented for the window.\n        </description>\n    </property>\n\n    <!-- DB Schema Info, used by DBLiteWorkflowStoreService -->\n\n    <property>\n        <name>oozie.db.schema.name</name>\n        <value>oozie</value>\n        <description>\n            Oozie DataBase Name\n        </description>\n    </property>\n\n    <!-- Database import CLI: batch size -->\n\n    <property>\n        <name>oozie.db.import.batch.size</name>\n        <value>1000</value>\n        <description>\n            How many entities are imported in a single transaction by the Oozie DB import CLI tool to avoid OutOfMemoryErrors.\n        </description>\n    </property>\n\n    <!-- StoreService -->\n\n    <property>\n        <name>oozie.service.JPAService.create.db.schema</name>\n        <value>false</value>\n        <description>\n            Creates Oozie DB.\n\n            If set to true, it creates the DB schema if it does not exist. If the DB schema exists, it is a NOP.\n            If set to false, it does not create the DB schema. 
If the DB schema does not exist it fails start up.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.validate.db.connection</name>\n        <value>true</value>\n        <description>\n            Validates DB connections from the DB connection pool.\n            If the 'oozie.service.JPAService.create.db.schema' property is set to true, this property is ignored.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.validate.db.connection.eviction.interval</name>\n        <value>300000</value>\n        <description>\n            Validates DB connections from the DB connection pool.\n            When validate db connection 'TestWhileIdle' is true, the number of milliseconds to sleep\n            between runs of the idle object evictor thread.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.validate.db.connection.eviction.num</name>\n        <value>10</value>\n        <description>\n            Validates DB connections from the DB connection pool.\n            When validate db connection 'TestWhileIdle' is true, the number of objects to examine during\n            each run of the idle object evictor thread.\n        </description>\n    </property>\n\n\n    <property>\n        <name>oozie.service.JPAService.connection.data.source</name>\n        <value>org.apache.oozie.util.db.BasicDataSourceWrapper</value>\n        <description>\n            DataSource to be used for connection pooling. 
If you want the property\n            openJpa.connectionProperties=\"DriverClassName=...\" to have a real effect, set this to\n            org.apache.oozie.util.db.BasicDataSourceWrapper.\n            A DBCP bug (https://issues.apache.org/jira/browse/DBCP-333) otherwise prevents the JDBC driver\n            setting from having a real effect while using a custom class loader.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.connection.properties</name>\n        <value> </value>\n        <description>\n            DataSource connection properties.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.jdbc.driver</name>\n        <value>org.apache.derby.jdbc.EmbeddedDriver</value>\n        <description>\n            JDBC driver class.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.jdbc.url</name>\n        <value>jdbc:derby:${oozie.data.dir}/${oozie.db.schema.name}-db;create=true</value>\n        <description>\n            JDBC URL.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.jdbc.username</name>\n        <value>sa</value>\n        <description>\n            DB user name.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.jdbc.password</name>\n        <value> </value>\n        <description>\n            DB user password.\n\n            IMPORTANT: if password is empty leave a 1 space string, the service trims the value,\n            if empty Configuration assumes it is NULL.\n\n            IMPORTANT: if the StoreServicePasswordService is active, it will reset this value with the value given in\n            the console.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.pool.max.active.conn</name>\n        <value>10</value>\n        <description>\n            Max number of 
connections.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.openjpa.BrokerImpl</name>\n        <value>non-finalizing</value>\n        <description>\n            The default OpenJPAEntityManager implementation automatically closes itself during instance finalization.\n            This guards against accidental resource leaks that may occur if a developer fails to explicitly close\n            EntityManagers when finished with them, but it also incurs a scalability bottleneck, since the JVM must\n            perform synchronization during instance creation, and since the finalizer thread will have more instances to monitor.\n            To avoid this overhead, set the openjpa.BrokerImpl configuration property to non-finalizing.\n            To use default implementation set it to empty space.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.retry.initial-wait-time.ms</name>\n        <value>100</value>\n        <description>\n            Initial wait time in milliseconds between the first failed database operation and the re-attempted operation. 
The wait\n            time is doubled at each retry.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.retry.maximum-wait-time.ms</name>\n        <value>30000</value>\n        <description>\n            Maximum wait time between database retry attempts.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JPAService.retry.max-retries</name>\n        <value>10</value>\n        <description>\n            Maximum number of retries for a failed database operation.\n        </description>\n    </property>\n\n    <!-- SchemaService -->\n\n    <property>\n        <name>oozie.service.SchemaService.wf.schemas</name>\n        <value>\n            oozie-common-1.0.xsd,\n            oozie-workflow-0.1.xsd,oozie-workflow-0.2.xsd,oozie-workflow-0.2.5.xsd,oozie-workflow-0.3.xsd,oozie-workflow-0.4.xsd,\n            oozie-workflow-0.4.5.xsd,oozie-workflow-0.5.xsd,oozie-workflow-1.0.xsd,\n            shell-action-0.1.xsd,shell-action-0.2.xsd,shell-action-0.3.xsd,shell-action-1.0.xsd,\n            email-action-0.1.xsd,email-action-0.2.xsd,\n            hive-action-0.2.xsd,hive-action-0.3.xsd,hive-action-0.4.xsd,hive-action-0.5.xsd,hive-action-0.6.xsd,hive-action-1.0.xsd,\n            sqoop-action-0.2.xsd,sqoop-action-0.3.xsd,sqoop-action-0.4.xsd,sqoop-action-1.0.xsd,\n            ssh-action-0.1.xsd,ssh-action-0.2.xsd,\n            distcp-action-0.1.xsd,distcp-action-0.2.xsd,distcp-action-1.0.xsd,\n            oozie-sla-0.1.xsd,oozie-sla-0.2.xsd,\n            hive2-action-0.1.xsd,hive2-action-0.2.xsd,hive2-action-1.0.xsd,\n            spark-action-0.1.xsd,spark-action-0.2.xsd,spark-action-1.0.xsd,\n            git-action-1.0.xsd\n        </value>\n        <description>\n            List of schemas for workflows (separated by commas).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaService.wf.ext.schemas</name>\n        <value> </value>\n        
<description>\n            List of additional schemas for workflows (separated by commas).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaService.coord.schemas</name>\n        <value>\n            oozie-coordinator-0.1.xsd,oozie-coordinator-0.2.xsd,oozie-coordinator-0.3.xsd,oozie-coordinator-0.4.xsd,\n            oozie-coordinator-0.5.xsd,oozie-sla-0.1.xsd,oozie-sla-0.2.xsd\n        </value>\n        <description>\n            List of schemas for coordinators (separated by commas).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaService.coord.ext.schemas</name>\n        <value> </value>\n        <description>\n            List of additional schemas for coordinators (separated by commas).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaService.bundle.schemas</name>\n        <value>\n            oozie-bundle-0.1.xsd,oozie-bundle-0.2.xsd\n        </value>\n        <description>\n            List of schemas for bundles (separated by commas).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaService.bundle.ext.schemas</name>\n        <value> </value>\n        <description>\n            List of additional schemas for bundles (separated by commas).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaService.sla.schemas</name>\n        <value>\n            gms-oozie-sla-0.1.xsd,oozie-sla-0.2.xsd\n        </value>\n        <description>\n            List of schemas for semantic validation for GMS SLA (separated by commas).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaService.sla.ext.schemas</name>\n        <value> </value>\n        <description>\n            List of additional schemas for semantic validation for GMS SLA (separated by commas).\n        </description>\n    </property>\n\n    <!-- 
CallbackService -->\n\n    <property>\n        <name>oozie.service.CallbackService.base.url</name>\n        <value>${oozie.base.url}/callback</value>\n        <description>\n            Base callback URL used by ActionExecutors.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.CallbackService.early.requeue.max.retries</name>\n        <value>5</value>\n        <description>\n            If Oozie receives a callback too early (while the action is in PREP state), it will requeue the command this many times\n            to give the action time to transition to RUNNING.\n        </description>\n    </property>\n\n    <!-- CallbackServlet -->\n\n    <property>\n        <name>oozie.servlet.CallbackServlet.max.data.len</name>\n        <value>2048</value>\n        <description>\n            Max size in characters for the action completion data output.\n        </description>\n    </property>\n\n    <!-- External stats-->\n\n    <property>\n        <name>oozie.external.stats.max.size</name>\n        <value>-1</value>\n        <description>\n            Max size in bytes for action stats. 
-1 means infinite value.\n        </description>\n    </property>\n\n    <!-- JobCommand -->\n\n    <property>\n        <name>oozie.JobCommand.job.console.url</name>\n        <value>${oozie.base.url}?job=</value>\n        <description>\n            Base console URL for a workflow job.\n        </description>\n    </property>\n\n\n    <!-- ActionService -->\n\n    <property>\n        <name>oozie.service.ActionService.executor.classes</name>\n        <value>\n            org.apache.oozie.action.decision.DecisionActionExecutor,\n            org.apache.oozie.action.hadoop.JavaActionExecutor,\n            org.apache.oozie.action.hadoop.FsActionExecutor,\n            org.apache.oozie.action.hadoop.MapReduceActionExecutor,\n            org.apache.oozie.action.hadoop.PigActionExecutor,\n            org.apache.oozie.action.hadoop.HiveActionExecutor,\n            org.apache.oozie.action.hadoop.ShellActionExecutor,\n            org.apache.oozie.action.hadoop.SqoopActionExecutor,\n            org.apache.oozie.action.hadoop.DistcpActionExecutor,\n            org.apache.oozie.action.hadoop.Hive2ActionExecutor,\n            org.apache.oozie.action.ssh.SshActionExecutor,\n            org.apache.oozie.action.oozie.SubWorkflowActionExecutor,\n            org.apache.oozie.action.email.EmailActionExecutor,\n            org.apache.oozie.action.hadoop.SparkActionExecutor,\n            org.apache.oozie.action.hadoop.GitActionExecutor\n        </value>\n        <description>\n            List of ActionExecutors classes (separated by commas).\n            Only action types with associated executors can be used in workflows.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ActionService.executor.ext.classes</name>\n        <value> </value>\n        <description>\n            List of ActionExecutors extension classes (separated by commas). Only action types with associated\n            executors can be used in workflows. 
This property is a convenience property to add extensions to the built\n            in executors without having to include all the built in ones.\n        </description>\n    </property>\n\n    <!-- ActionCheckerService -->\n\n    <property>\n        <name>oozie.service.ActionCheckerService.action.check.interval</name>\n        <value>60</value>\n        <description>\n            The frequency at which the ActionCheckService will run.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ActionCheckerService.action.check.delay</name>\n        <value>600</value>\n        <description>\n            The time, in seconds, between an ActionCheck for the same action.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ActionCheckerService.callable.batch.size</name>\n        <value>10</value>\n        <description>\n            This value determines the number of actions which will be batched together\n            to be executed by a single thread.\n        </description>\n    </property>\n\n    <!-- StatusTransitService -->\n    <property>\n        <name>oozie.service.StatusTransitService.statusTransit.interval</name>\n        <value>60</value>\n        <description>\n            The frequency in seconds at which the StatusTransitService will run.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.StatusTransitService.backward.support.for.coord.status</name>\n        <value>false</value>\n        <description>\n            true, if coordinator job submits using 'uri:oozie:coordinator:0.1' and wants to keep Oozie 2.x status transit.\n            if set true,\n            1. SUCCEEDED state in coordinator job means materialization done.\n            2. No DONEWITHERROR state in coordinator job\n            3. No PAUSED or PREPPAUSED state in coordinator job\n            4. 
PREPSUSPENDED becomes SUSPENDED in coordinator job\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.StatusTransitService.backward.support.for.states.without.error</name>\n        <value>true</value>\n        <description>\n            true, if you want to keep Oozie 3.2 status transit.\n            Change it to false for Oozie 4.x releases.\n            if set true,\n            No states like RUNNINGWITHERROR, SUSPENDEDWITHERROR and PAUSEDWITHERROR\n            for coordinator and bundle\n        </description>\n    </property>\n\n    <!-- PauseTransitService -->\n    <property>\n        <name>oozie.service.PauseTransitService.PauseTransit.interval</name>\n        <value>60</value>\n        <description>\n            The frequency in seconds at which the PauseTransitService will run.\n        </description>\n    </property>\n\n    <!-- LauncherAMUtils -->\n    <property>\n        <name>oozie.action.max.output.data</name>\n        <value>2048</value>\n        <description>\n            Max size in characters for output data.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.fs.glob.max</name>\n        <value>50000</value>\n        <description>\n            Maximum number of globbed files.\n        </description>\n    </property>\n\n    <!-- JavaActionExecutor -->\n    <!-- This is common to the subclasses of action executors for Java (e.g. map-reduce, pig, hive, java, etc) -->\n\n    <property>\n        <name>oozie.action.launcher.am.restart.kill.childjobs</name>\n        <value>true</value>\n        <description>\n            Multiple instances of launcher jobs can happen due to RM non-work preserving recovery on RM restart, AM recovery\n            due to crashes or AM network connectivity loss. This could also lead to orphaned child jobs of the old AM attempts\n            leading to conflicting runs. 
This kills child jobs of previous attempts using YARN application tags.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.spark.setup.hadoop.conf.dir</name>\n        <value>false</value>\n        <description>\n            Oozie action.xml (oozie.action.conf.xml) contains all the hadoop configuration and user provided configurations.\n            This property will allow users to copy Oozie action.xml as hadoop *-site configuration files. The advantage is that\n            users need not manage these files in the spark sharelib. If users want to manage the hadoop configurations\n            themselves, they should disable it.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.shell.setup.hadoop.conf.dir</name>\n        <value>false</value>\n        <description>\n            The Shell action is commonly used to run programs that rely on HADOOP_CONF_DIR (e.g. hive, beeline, sqoop, etc).  With\n            YARN, HADOOP_CONF_DIR is set to the NodeManager's copies of Hadoop's *-site.xml files, which can be problematic because\n            (a) they are meant for the NM, not necessarily clients, and (b) they won't have any of the configs that Oozie, or\n            the user through Oozie, sets.  When this property is set to true, the Shell action will prepare the *-site.xml files\n            based on the correct config and set HADOOP_CONF_DIR to point to it.  Setting it to false will make Oozie leave\n            HADOOP_CONF_DIR alone.  This can also be set at the Action level by putting it in the Shell Action's configuration\n            section, which also has priority.  
That all said, it's recommended to use the appropriate action type when possible.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.shell.setup.hadoop.conf.dir.write.log4j.properties</name>\n        <value>true</value>\n        <description>\n            Toggle to control if a log4j.properties file should be written into the configuration directory prepared when\n            oozie.action.shell.setup.hadoop.conf.dir is enabled. This is used to control logging behavior of log4j using commands\n            run within the shell action script, and to ensure logging does not impact output data capture if leaked to stdout.\n            Content of the written file is determined by the value of oozie.action.shell.setup.hadoop.conf.dir.log4j.content.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.shell.setup.hadoop.conf.dir.log4j.content</name>\n        <value>\n            log4j.rootLogger=INFO,console\n            log4j.appender.console=org.apache.logging.log4j.core.appender.ConsoleAppender\n            log4j.appender.console.target=System.err\n            log4j.appender.console.layout=org.apache.logging.log4j.core.layout.PatternLayout\n            log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n\n        </value>\n        <description>\n            The value to write into a log4j.properties file under the config directory created when\n            oozie.action.shell.setup.hadoop.conf.dir and oozie.action.shell.setup.hadoop.conf.dir.write.log4j.properties\n            properties are both enabled. 
The values must be properly newline separated and in format expected by Log4J.\n            Trailing and preceding whitespaces will be trimmed when reading this property.\n            This is used to control logging behavior of log4j using commands run within the shell action script.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.shell.max-print-size-kb</name>\n        <value>128</value>\n        <description>\n            When an oozie shell action starts, the shell script will be printed. Scripts larger than the size configured here\n            (in KiB) will not be printed. If this value is less than or equal to zero, the script will not be printed.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.launcher.yarn.timeline-service.enabled</name>\n        <value>false</value>\n        <description>\n            Enables/disables getting delegation tokens for ATS for the launcher job in\n            YARN/Hadoop 2.6 (no effect in Hadoop 1) for all action types by default if tez-site.xml is present in\n            distributed cache.\n            This can be overridden on a per-action basis by setting\n            oozie.launcher.yarn.timeline-service.enabled in an action's configuration section in a workflow.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.pig.log.expandedscript</name>\n        <value>true</value>\n        <description>\n            Log the expanded pig script in launcher stdout log\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.rootlogger.log.level</name>\n        <value>INFO</value>\n        <description>\n            Logging level for root logger\n        </description>\n    </property>\n\n    <!-- HadoopActionExecutor -->\n    <!-- This is common to the subclasses action executors for map-reduce and pig -->\n\n    <property>\n        <name>oozie.action.retries.max</name>\n        
<value>3</value>\n        <description>\n            The number of retries for executing an action in case of failure\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.retry.interval</name>\n        <value>10</value>\n        <description>\n            The interval between retries of an action in case of failure\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.retry.policy</name>\n        <value>periodic</value>\n        <description>\n            Retry policy of an action in case of failure. Possible values are periodic/exponential\n        </description>\n    </property>\n\n    <!-- SshActionExecutor -->\n\n    <property>\n        <name>oozie.action.ssh.delete.remote.tmp.dir</name>\n        <value>true</value>\n        <description>\n            If set to true, the temporary directory will be deleted at the end of execution of the ssh action.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.ssh.http.command</name>\n        <value>curl</value>\n        <description>\n            Command to use for the callback to oozie, normally 'curl' or 'wget'.\n            The command must be available in the PATH environment variable of the USER@HOST box shell.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.ssh.http.command.post.options</name>\n        <value>--data-binary @#stdout --request POST --header \"content-type:text/plain\"</value>\n        <description>\n            The callback command POST options.\n            Used when the output of the ssh action is captured.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.ssh.allow.user.at.host</name>\n        <value>true</value>\n        <description>\n            Specifies whether the user specified by the ssh action is allowed or is to be replaced\n            by the Job user\n        </description>\n    </property>\n\n    <property>\n        
<name>oozie.action.ssh.check.retries.max</name>\n        <value>3</value>\n        <description>\n            Maximum retry count for the ssh action status check\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.ssh.check.initial.retry.wait.time</name>\n        <value>3000</value>\n        <description>\n            Initial wait time before the first retry of the ssh action status check\n        </description>\n    </property>\n\n    <!-- SubworkflowActionExecutor -->\n\n    <property>\n        <name>oozie.action.subworkflow.max.depth</name>\n        <value>50</value>\n        <description>\n            The maximum depth for subworkflows.  For example, if set to 3, then a workflow can start subwf1, which can start subwf2,\n            which can start subwf3; but if subwf3 tries to start subwf4, then the action will fail.  This is helpful in preventing\n            errant workflows from starting infinitely recursive subworkflows.\n        </description>\n    </property>\n\n    <!-- HadoopAccessorService -->\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.kerberos.enabled</name>\n        <value>false</value>\n        <description>\n            Indicates if Oozie is configured to use Kerberos.\n        </description>\n    </property>\n\n    <property>\n        <name>local.realm</name>\n        <value>LOCALHOST</value>\n        <description>\n            Kerberos Realm used by Oozie and Hadoop. 
Using 'local.realm' to stay aligned with the Hadoop configuration\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.keytab.file</name>\n        <value>${user.home}/oozie.keytab</value>\n        <description>\n            Location of the Oozie user keytab file.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.kerberos.principal</name>\n        <value>${user.name}/localhost@${local.realm}</value>\n        <description>\n            Kerberos principal for the Oozie service.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.jobTracker.whitelist</name>\n        <value> </value>\n        <description>\n            Whitelisted job trackers for the Oozie service.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.nameNode.whitelist</name>\n        <value> </value>\n        <description>\n            Whitelisted name nodes for the Oozie service.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>\n        <value>*=hadoop-conf</value>\n        <description>\n            Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of\n            the Hadoop service (JobTracker, YARN, HDFS). The wildcard '*' configuration is\n            used when there is no exact match for an authority. The HADOOP_CONF_DIR contains\n            the relevant Hadoop *-site.xml files. If the path is relative, it is looked up within\n            the Oozie configuration directory; the path can also be absolute (i.e. pointing\n            to Hadoop client conf/ directories in the local filesystem).\n        </description>\n    </property>\n\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.action.configurations</name>\n        <value>*=action-conf</value>\n        <description>\n            Comma separated AUTHORITY=ACTION_CONF_DIR, where AUTHORITY is the HOST:PORT of\n            the Hadoop MapReduce service (JobTracker, YARN). The wildcard '*' configuration is\n            used when there is no exact match for an authority. The ACTION_CONF_DIR may contain\n            ACTION.xml files where ACTION is the action type ('java', 'map-reduce', 'pig',\n            'hive', 'sqoop', etc.). If the ACTION.xml file exists, its properties will be used\n            as default properties for the action. If the path is relative, it is looked up within\n            the Oozie configuration directory; the path can also be absolute (i.e. pointing\n            to Hadoop client conf/ directories in the local filesystem).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.action.configurations.load.default.resources</name>\n        <value>true</value>\n        <description>\n            true means that the default and site xml files of hadoop (core-default, core-site,\n            hdfs-default, hdfs-site, mapred-default, mapred-site, yarn-default, yarn-site)\n            are parsed into actionConf on the Oozie server. false means that the site xml files are\n            not loaded on the server, but instead loaded on the launcher node.\n            This is only done for pig and hive actions, which handle loading those files\n            automatically from the classpath on the launcher task. 
It defaults to true.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.fs.s3a</name>\n        <value> </value>\n        <description>\n            You can configure custom s3a file system properties globally.\n            The value must be a comma separated list of key=value pairs. For example:\n            fs.s3a.fast.upload.buffer=bytebuffer,fs.s3a.impl.disable.cache=true\n            Limitation: the custom file system properties cannot contain a comma in\n            either the key or the value.\n        </description>\n    </property>\n\n    <!-- Credentials -->\n    <property>\n        <name>oozie.credentials.credentialclasses</name>\n        <value> </value>\n        <description>\n            A list of credential class mappings for CredentialsProvider\n        </description>\n    </property>\n    <property>\n        <name>oozie.credentials.skip</name>\n        <value>false</value>\n        <description>\n            This determines if Oozie should skip getting credentials from the credential providers.  
This can be overwritten at a\n            job-level or action-level.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.actions.main.classnames</name>\n        <value>distcp=org.apache.hadoop.tools.DistCp</value>\n        <description>\n            A list of class name mapping for Action classes\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.WorkflowAppService.system.libpath</name>\n        <value>/user/${user.name}/share/lib</value>\n        <description>\n            System library path to use for workflow applications.\n            This path is added to workflow application if their job properties sets\n            the property 'oozie.use.system.libpath' to true.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.command.default.lock.timeout</name>\n        <value>5000</value>\n        <description>\n            Default timeout (in milliseconds) for commands for acquiring an exclusive lock on an entity.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.command.default.requeue.delay</name>\n        <value>10000</value>\n        <description>\n            Default time (in milliseconds) for commands that are requeued for delayed execution.\n        </description>\n    </property>\n\n    <!-- LiteWorkflowStoreService, Workflow Action Automatic Retry -->\n\n    <property>\n        <name>oozie.service.LiteWorkflowStoreService.user.retry.max</name>\n        <value>3</value>\n        <description>\n            Automatic retry max count for workflow action is 3 in default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.LiteWorkflowStoreService.user.retry.inteval</name>\n        <value>10</value>\n        <description>\n            Automatic retry interval for workflow action is in minutes and the default value is 10 minutes.\n        </description>\n    </property>\n\n    <property>\n        
<name>oozie.service.LiteWorkflowStoreService.user.retry.policy</name>\n        <value>periodic</value>\n        <description>\n            Automatic retry policy for workflow actions. Possible values are periodic or exponential, periodic being the default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.LiteWorkflowStoreService.user.retry.error.code</name>\n        <value>JA008,JA009,JA017,JA018,JA019,FS009,FS008,FS014</value>\n        <description>\n            Automatic retry for workflow actions is attempted for these specified error codes:\n            FS009 and FS008 are file-exists errors when using chmod in the fs action.\n            FS014 is a permission error in the fs action.\n            JA018 is an output-directory-exists error in the workflow map-reduce action.\n            JA019 is an error while executing the distcp action.\n            JA017 is a job-does-not-exist error in the action executor.\n            JA008 is a FileNotFoundException in the action executor.\n            JA009 is an IOException in the action executor.\n            ALL means any kind of error in the action executor.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.LiteWorkflowStoreService.user.retry.error.code.ext</name>\n        <value> </value>\n        <description>\n            Automatic retry for workflow actions is attempted for these specified extra error codes:\n            ALL means any kind of error in the action executor.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.LiteWorkflowStoreService.node.def.version</name>\n        <value>_oozie_inst_v_2</value>\n        <description>\n            NodeDef default version, _oozie_inst_v_0, _oozie_inst_v_1 or _oozie_inst_v_2\n        </description>\n    </property>\n\n    <!-- Oozie Authentication -->\n\n    <property>\n        <name>oozie.authentication.type</name>\n        <value>simple</value>\n        <description>\n            Defines authentication 
used for the Oozie HTTP endpoint.\n            Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#\n        </description>\n    </property>\n    <property>\n        <name>oozie.server.authentication.type</name>\n        <value>${oozie.authentication.type}</value>\n        <description>\n            Defines the authentication used when an Oozie server communicates with other Oozie servers over HTTP(S).\n            Supported values are: simple | kerberos | #AUTHENTICATOR_CLASSNAME#\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.server.connection.timeout.seconds</name>\n        <value>180</value>\n        <description>\n            Defines the connection timeout used when an Oozie server communicates with other Oozie servers over HTTP(S). The default is 3 minutes.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.authentication.token.validity</name>\n        <value>36000</value>\n        <description>\n            Indicates how long (in seconds) an authentication token is valid before it has\n            to be renewed.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.authentication.cookie.domain</name>\n        <value></value>\n        <description>\n            The domain to use for the HTTP cookie that stores the authentication token.\n            In order for authentication to work correctly across multiple hosts,\n            the domain must be correctly set.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.authentication.simple.anonymous.allowed</name>\n        <value>true</value>\n        <description>\n            Indicates if anonymous requests are allowed when using 'simple' authentication.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.authentication.kerberos.principal</name>\n        <value>HTTP/localhost@${local.realm}</value>\n        <description>\n            Indicates the Kerberos principal to be used for 
HTTP endpoint.\n            The principal MUST start with 'HTTP/' as per Kerberos HTTP SPNEGO specification.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.authentication.kerberos.keytab</name>\n        <value>${oozie.service.HadoopAccessorService.keytab.file}</value>\n        <description>\n            Location of the keytab file with the credentials for the principal.\n            Referring to the same keytab file Oozie uses for its Kerberos credentials for Hadoop.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.authentication.kerberos.name.rules</name>\n        <value>DEFAULT</value>\n        <description>\n            The kerberos names rules is to resolve kerberos principal names, refer to Hadoop's\n            KerberosName for more details.\n        </description>\n    </property>\n\n    <!-- Coordinator \"NONE\" execution order default time tolerance -->\n    <property>\n        <name>oozie.coord.execution.none.tolerance</name>\n        <value>1</value>\n        <description>\n            Default time tolerance in minutes after action nominal time for an action to be skipped\n            when execution order is \"NONE\"\n        </description>\n    </property>\n\n    <!-- Coordinator Actions default length -->\n    <property>\n        <name>oozie.coord.actions.default.length</name>\n        <value>1000</value>\n        <description>\n            Default number of coordinator actions to be retrieved by the info command\n        </description>\n    </property>\n\n    <!-- ForkJoin validation -->\n    <property>\n        <name>oozie.validate.ForkJoin</name>\n        <value>true</value>\n        <description>\n            If true, fork and join should be validated at wf submission time.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.workflow.parallel.fork.action.start</name>\n        <value>true</value>\n        <description>\n            Determines how Oozie 
processes the start of forked actions. If true, forked actions and their job submissions\n            are done in parallel, which is best for performance. If false, they are submitted sequentially.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.coord.action.get.all.attributes</name>\n        <value>false</value>\n        <description>\n            Setting this to true is not recommended, as coord job/action info will bring all columns of the action into memory.\n            Set it to true only if backward compatibility for action/job info is required.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.HadoopAccessorService.supported.filesystems</name>\n        <value>hdfs,hftp,webhdfs</value>\n        <description>\n            Enlist the different filesystems supported for federation. If the wildcard \"*\" is specified,\n            then ALL file schemes will be allowed.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.URIHandlerService.uri.handlers</name>\n        <value>org.apache.oozie.dependency.FSURIHandler</value>\n        <description>\n            Enlist the different uri handlers supported for data availability checks.\n        </description>\n    </property>\n    <!-- Oozie HTTP Notifications -->\n\n    <property>\n        <name>oozie.notification.url.connection.timeout</name>\n        <value>10000</value>\n        <description>\n            Defines the timeout, in milliseconds, for Oozie HTTP notification callbacks. Oozie does\n            HTTP notifications for workflow jobs which set the 'oozie.wf.action.notification.url',\n            'oozie.wf.workflow.notification.url' and/or 'oozie.coord.action.notification.url'\n            properties in their job.properties. 
Refer to section '5 Oozie Notifications' in the\n            Workflow specification for details.\n        </description>\n    </property>\n\n\n    <!-- Enable Distributed Cache workaround for Hadoop 2.0.2-alpha (MAPREDUCE-4820) -->\n    <property>\n        <name>oozie.hadoop-2.0.2-alpha.workaround.for.distributed.cache</name>\n        <value>false</value>\n        <description>\n            Due to a bug in Hadoop 2.0.2-alpha, MAPREDUCE-4820, launcher jobs fail to set\n            the distributed cache for the action job because the local JARs are implicitly\n            included, triggering a duplicate check.\n            This flag removes the distributed cache files for the action, as they'll be\n            included from the local JARs of the JobClient (MRApps) submitting the action\n            job from the launcher.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.EventHandlerService.filter.app.types</name>\n        <value>workflow_job, coordinator_action</value>\n        <description>\n            The app-types among workflow/coordinator/bundle job/action for which\n            the events system is enabled.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.EventHandlerService.event.queue</name>\n        <value>org.apache.oozie.event.MemoryEventQueue</value>\n        <description>\n            The implementation for EventQueue in use by the EventHandlerService.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.EventHandlerService.event.listeners</name>\n        <value>org.apache.oozie.jms.JMSJobEventListener</value>\n    </property>\n\n    <property>\n        <name>oozie.service.EventHandlerService.queue.size</name>\n        <value>10000</value>\n        <description>\n            Maximum number of events to be contained in the event queue.\n        </description>\n    </property>\n\n    <property>\n        
<name>oozie.service.EventHandlerService.worker.interval</name>\n        <value>30</value>\n        <description>\n            The default interval (seconds) at which the worker threads will be scheduled to run\n            and process events.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.EventHandlerService.batch.size</name>\n        <value>10</value>\n        <description>\n            The batch size for batched draining per thread from the event queue.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.EventHandlerService.worker.threads</name>\n        <value>3</value>\n        <description>\n            Number of worker threads to be scheduled to run and process events.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.sla.service.SLAService.capacity</name>\n        <value>5000</value>\n        <description>\n            Maximum number of sla records to be contained in the memory structure.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.sla.service.SLAService.alert.events</name>\n        <value>END_MISS</value>\n        <description>\n            Default types of SLA events to be alerted about.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.sla.service.SLAService.calculator.impl</name>\n        <value>org.apache.oozie.sla.SLACalculatorMemory</value>\n        <description>\n            The implementation for SLACalculator in use by the SLAService.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.sla.service.SLAService.job.event.latency</name>\n        <value>90000</value>\n        <description>\n            Time in milliseconds to account for the latency of getting the job status event\n            to compare against and decide SLA miss/met.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.sla.service.SLAService.check.interval</name>\n 
       <value>30</value>\n        <description>\n            Time interval, in seconds, at which the SLA Worker will be scheduled to run\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.sla.disable.alerts.older.than</name>\n        <value>48</value>\n        <description>\n            Time threshold, in HOURS, for disabling SLA alerting for jobs whose\n            nominal time is older than this.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.sla.service.SLAService.maximum.retry.count</name>\n        <value>3</value>\n        <description>\n            Number of times Oozie will retry updating an SLA calculator status when a database related error occurs.\n            It's possible that multiple WorkflowJobBean / CoordActionBean instances being inserted won't have SLACalcStatus entries\n            inside SLACalculatorMemory#slaMap by the time they are written to the database, and thus, no SLA will be tracked.\n            In those rare cases, the preconfigured maximum retry count can be extended.\n        </description>\n    </property>\n\n    <!-- ZooKeeper configuration -->\n    <property>\n        <name>oozie.zookeeper.connection.string</name>\n        <value>localhost:2181</value>\n        <description>\n            Comma-separated values of host:port pairs of the ZooKeeper servers.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.zookeeper.namespace</name>\n        <value>oozie</value>\n        <description>\n            The namespace to use.  
All of the Oozie Servers that are planning on talking to each other should have the same\n            namespace.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.zookeeper.connection.timeout</name>\n        <value>180</value>\n        <description>\n            Default ZK connection timeout (in sec).\n        </description>\n    </property>\n    <property>\n        <name>oozie.zookeeper.session.timeout</name>\n        <value>300</value>\n        <description>\n            Default ZK session timeout (in sec). If the connection is lost even after retry, then the Oozie server will shut\n            itself down if oozie.zookeeper.server.shutdown.ontimeout is true.\n        </description>\n    </property>\n    <property>\n        <name>oozie.zookeeper.max.retries</name>\n        <value>10</value>\n        <description>\n            Maximum number of times to retry.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.zookeeper.server.shutdown.ontimeout</name>\n        <value>true</value>\n        <description>\n            If true, the Oozie server will shut itself down on ZK\n            connection timeout.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ZKLocksService.lock.release.retry.time.limit.minutes</name>\n        <value>30</value>\n        <description>\n            On exception while releasing the lock, Oozie will exponentially retry until the specified number of minutes has elapsed before giving up.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.http.hostname</name>\n        <value>0.0.0.0</value>\n        <description>\n            Oozie server host name. 
The network interface Oozie server binds to as an IP address or a hostname.\n            Most users won't need to change this setting from the default value.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.http.port</name>\n        <value>11000</value>\n        <description>\n            Oozie server port.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.http.request.header.size</name>\n        <value>65536</value>\n        <description>\n            Oozie HTTP request header size.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.http.response.header.size</name>\n        <value>65536</value>\n        <description>\n            Oozie HTTP response header size.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.port</name>\n        <value>11443</value>\n        <description>\n            Oozie ssl server port.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.enabled</name>\n        <value>false</value>\n        <description>\n            Controls whether SSL encryption is enabled.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.truststore.file</name>\n        <value></value>\n        <description>\n            Path to a TrustStore file.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.keystore.file</name>\n        <value></value>\n        <description>\n            Path to a KeyStore file.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.keystore.pass</name>\n        <value></value>\n        <description>\n            Password to the KeyStore.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.include.protocols</name>\n        <value>TLSv1.1,TLSv1.2,TLSv1.3</value>\n        <description>\n            Enabled TLS protocols.\n        </description>\n    
</property>\n\n    <property>\n        <name>oozie.https.exclude.protocols</name>\n        <value></value>\n        <description>\n            Disabled TLS protocols.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.include.cipher.suites</name>\n        <value></value>\n        <description>\n            List of Cipher suites to include.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.https.exclude.cipher.suites</name>\n        <value>TLS_ECDHE_RSA_WITH_RC4_128_SHA,SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA,SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,SSL_RSA_EXPORT_WITH_RC4_40_MD5,SSL_RSA_EXPORT_WITH_DES40_CBC_SHA,SSL_RSA_WITH_RC4_128_MD5</value>\n        <description>\n            List of weak Cipher suites to exclude.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.hsts.max.age.seconds</name>\n        <value>31536000</value>\n        <description>\n            Strict Transport Security max age in seconds if SSL is enabled. 
Ideally it is set to one year (31536000 sec).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.jsp.tmp.dir</name>\n        <value>/tmp</value>\n        <description>\n            Temporary directory for compiling JSP pages.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.server.threadpool.max.threads</name>\n        <value>150</value>\n        <description>\n            Controls the threadpool size for the Oozie Server (if using embedded Jetty)\n        </description>\n    </property>\n\n    <!-- Sharelib Configuration -->\n    <property>\n        <name>oozie.service.ShareLibService.mapping.file</name>\n        <value> </value>\n        <description>\n            The sharelib mapping file contains a list of key=value entries,\n            where key will be the sharelib name for the action and value is a comma separated list of\n            DFS or local filesystem directories or jar files.\n            Example:\n            oozie.pig_10=hdfs:///share/lib/pig/pig-0.10.1/lib/\n            oozie.pig=hdfs:///share/lib/pig/pig-0.11.1/lib/\n            oozie.distcp=hdfs:///share/lib/hadoop-2.2.0/share/hadoop/tools/lib/hadoop-distcp-2.2.0.jar\n            oozie.hive=file:///usr/local/oozie/share/lib/hive/\n        </description>\n\n    </property>\n    <property>\n        <name>oozie.service.ShareLibService.fail.fast.on.startup</name>\n        <value>false</value>\n        <description>\n            Fails server startup if sharelib initialization fails.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ShareLibService.purge.interval</name>\n        <value>1</value>\n        <description>\n            How often, in days, Oozie should check for old ShareLibs and LauncherLibs to purge from HDFS.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ShareLibService.temp.sharelib.retention.days</name>\n        <value>7</value>\n        <description>\n          
  ShareLib retention time in days.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.ship.launcher.jar</name>\n        <value>false</value>\n        <description>\n            Specifies whether the launcher jar is shipped or not.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.jobinfo.enable</name>\n        <value>false</value>\n        <description>\n            JobInfo will contain information of bundle, coordinator, workflow and actions. If enabled, the hadoop job will have\n            a property (oozie.job.info) whose value is multiple key/value pairs separated by \",\". This information can be used for\n            analytics like how many oozie jobs are submitted for a particular period, what is the total number of failed pig jobs,\n            etc., from mapreduce job history logs and configuration.\n            Users can also add custom workflow properties to jobinfo by adding properties prefixed with \"oozie.job.info.\"\n            Eg.\n            oozie.job.info=\"bundle.id=,bundle.name=,coord.name=,coord.nominal.time=,coord.name=,wf.id=,\n            wf.name=,action.name=,action.type=,launcher=true\"\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.XLogStreamingService.max.log.scan.duration</name>\n        <value>-1</value>\n        <description>\n            Max log scan duration in hours. If log scan request end_date - start_date > value,\n            then an exception is thrown to reduce the scan duration. 
-1 indicates no limit.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.XLogStreamingService.actionlist.max.log.scan.duration</name>\n        <value>-1</value>\n        <description>\n            Max log scan duration in hours for a coordinator job when a list of actions is specified.\n            If log streaming request end_date - start_date > value, then an exception is thrown to reduce the scan duration.\n            -1 indicates no limit.\n            This setting is separate from max.log.scan.duration as we want to allow higher durations when actions are specified.\n        </description>\n    </property>\n\n    <!-- JvmPauseMonitorService Configuration -->\n    <property>\n        <name>oozie.service.JvmPauseMonitorService.warn-threshold.ms</name>\n        <value>10000</value>\n        <description>\n            The JvmPauseMonitorService runs a thread that repeatedly tries to detect when the JVM pauses, which could indicate\n            that the JVM or host machine is overloaded or other problems.  This thread sleeps for 500ms; if it sleeps for\n            significantly longer, then there is likely a problem.  This property specifies the threshold for when Oozie should log\n            a WARN level message; there is also a counter named \"jvm.pause.warn-threshold\".\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.JvmPauseMonitorService.info-threshold.ms</name>\n        <value>1000</value>\n        <description>\n            The JvmPauseMonitorService runs a thread that repeatedly tries to detect when the JVM pauses, which could indicate\n            that the JVM or host machine is overloaded or other problems.  This thread sleeps for 500ms; if it sleeps for\n            significantly longer, then there is likely a problem.  
This property specifies the threshold for when Oozie should log\n            an INFO level message; there is also a counter named \"jvm.pause.info-threshold\".\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ZKLocksService.locks.reaper.threshold</name>\n        <value>300</value>\n        <description>\n            The frequency at which the ChildReaper will run.\n            Duration should be in sec. Default is 5 min.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.ZKLocksService.locks.reaper.threads</name>\n        <value>2</value>\n        <description>\n            Number of fixed threads used by ChildReaper to\n            delete empty locks.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AbandonedCoordCheckerService.check.interval\n        </name>\n        <value>1440</value>\n        <description>\n            Interval, in minutes, at which AbandonedCoordCheckerService should run.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AbandonedCoordCheckerService.check.delay\n        </name>\n        <value>60</value>\n        <description>\n            Delay, in minutes, at which AbandonedCoordCheckerService should run.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AbandonedCoordCheckerService.failure.limit\n        </name>\n        <value>25</value>\n        <description>\n            Failure limit. 
A job is considered to be abandoned/faulty if the total number of actions in\n            failed/timedout/suspended >= \"Failure limit\" and there is no succeeded action.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AbandonedCoordCheckerService.kill.jobs\n        </name>\n        <value>false</value>\n        <description>\n            If true, AbandonedCoordCheckerService will kill abandoned coords.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.AbandonedCoordCheckerService.job.older.than</name>\n        <value>2880</value>\n        <description>\n            In minutes, a job will be considered abandoned/faulty if it is older than this value.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.notification.proxy</name>\n        <value></value>\n        <description>\n            System level proxy setting for job notifications.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.wf.rerun.disablechild</name>\n        <value>false</value>\n        <description>\n            By setting this option, workflow rerun will be disabled if a parent workflow or coordinator exists, and\n            it will only rerun through the parent.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.use.system.libpath</name>\n        <value>false</value>\n        <description>\n            Default value of oozie.use.system.libpath. 
If the user hasn't specified =oozie.use.system.libpath=\n            in the job.properties and this value is true, Oozie will include sharelib jars for the workflow.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.PauseTransitService.callable.batch.size\n        </name>\n        <value>10</value>\n        <description>\n            This value determines the number of callables which will be batched together\n            to be executed by a single thread.\n        </description>\n    </property>\n\n    <!-- XConfiguration -->\n    <property>\n        <name>oozie.configuration.substitute.depth</name>\n        <value>20</value>\n        <description>\n            This value determines the depth of substitution in configurations.\n            If set to -1, there is no limitation on substitution.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SparkConfigurationService.spark.configurations</name>\n        <value>*=spark-conf</value>\n        <description>\n            Comma separated AUTHORITY=SPARK_CONF_DIR, where AUTHORITY is the HOST:PORT of\n            the ResourceManager of a YARN cluster. The wildcard '*' configuration is\n            used when there is no exact match for an authority. The SPARK_CONF_DIR contains\n            the relevant spark-defaults.conf properties file. If the path is relative, it is looked up within\n            the Oozie configuration directory; though the path can be absolute.  
This is only used\n            when the Spark master is set to either \"yarn-client\" or \"yarn-cluster\".\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SparkConfigurationService.spark.configurations.blacklist</name>\n        <value>spark.yarn.jar,spark.yarn.jars</value>\n        <description>\n            Comma separated list of properties to ignore from any Spark configurations specified in\n            oozie.service.SparkConfigurationService.spark.configurations property.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SparkConfigurationService.spark.configurations.ignore.spark.yarn.jar</name>\n        <value>true</value>\n        <description>\n            Deprecated. Use oozie.service.SparkConfigurationService.spark.configurations.blacklist instead.\n            If true, Oozie will ignore the \"spark.yarn.jar\" property from any Spark configurations specified in\n            oozie.service.SparkConfigurationService.spark.configurations.  If false, Oozie will not ignore it.  
It is recommended\n            to leave this as true because it can interfere with the jars in the Spark sharelib.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.attachment.enabled</name>\n        <value>true</value>\n        <description>\n            This value determines whether to support email attachment of a file on HDFS.\n            Set it to false if there is any security concern.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.smtp.host</name>\n        <value>localhost</value>\n        <description>\n            The host where the email action may find the SMTP server.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.smtp.port</name>\n        <value>25</value>\n        <description>\n            The port to connect to for the SMTP server, for email actions.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.smtp.auth</name>\n        <value>false</value>\n        <description>\n            Boolean property that toggles if authentication is to be done or not when using email actions.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.smtp.starttls.enable</name>\n        <value>false</value>\n        <description>\n            Boolean property that toggles whether to use TLS in communication or not.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.smtp.username</name>\n        <value></value>\n        <description>\n            If authentication is enabled for email actions, the username to login as (to the SMTP server).\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.smtp.password</name>\n        <value></value>\n        <description>\n            If authentication is enabled for email actions, the password to login with (to the SMTP server).\n        </description>\n    </property>\n\n    <property>\n        
<name>oozie.email.from.address</name>\n        <value>oozie@localhost</value>\n        <description>\n            The from address to be used for mailing all emails done via the email action.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.email.smtp.socket.timeout.ms</name>\n        <value>10000</value>\n        <description>\n            The timeout to apply over all SMTP server socket operations done during the email action.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.actions.default.name-node</name>\n        <value> </value>\n        <description>\n            The default value to use for the &lt;name-node&gt; element in applicable action types.  This value will be used when\n            neither the action itself nor the global section specifies a &lt;name-node&gt;.  As expected, it should be of the form\n            \"hdfs://HOST:PORT\".\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.actions.default.job-tracker</name>\n        <value> </value>\n        <description>\n            The default value to use for the &lt;job-tracker&gt; element in applicable action types.  This value will be used when\n            neither the action itself nor the global section specifies a &lt;job-tracker&gt;.  As expected, it should be of the form\n            \"HOST:PORT\".\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.actions.default.resource-manager</name>\n        <value> </value>\n        <description>\n            The default value to use for the &lt;resource-manager&gt; element in applicable action types.  This value will be used\n            when neither the action itself nor the global section specifies a &lt;resource-manager&gt;.  As expected, it should\n            be of the form \"HOST:PORT\". 
If both oozie.actions.default.job-tracker and oozie.actions.default.resource-manager are\n            specified, oozie.actions.default.resource-manager takes precedence.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaCheckerService.check.interval</name>\n        <value>168</value>\n        <description>\n            This is the interval at which Oozie will check the database schema, in hours.\n            A zero or negative value will disable the checker.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.service.SchemaCheckerService.ignore.extras</name>\n        <value>false</value>\n        <description>\n            When set to false, the schema checker will consider extra (unused) tables, columns, and indexes to be incorrect.  When\n            set to true, these will be ignored.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.hcat.uri.regex.pattern</name>\n        <value>([a-z]+://[\\w\\.\\-]+:\\d+[,]*)+/\\w+/\\w+/?[\\w+=;\\-]*</value>\n        <description>Regex pattern for HCat URIs. The regex can be modified by users as per requirement\n            for parsing/splitting the HCat URIs.</description>\n    </property>\n\n    <property>\n        <name>oozie.action.null.args.allowed</name>\n        <value>true</value>\n        <description>\n            When set to true, empty arguments (like &lt;arg&gt;&lt;/arg&gt;) will be passed as \"null\" to the main method of a\n            given action. That is, the args[] array will contain \"null\" elements. When set to false, then \"nulls\" are removed.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.javax.xml.parsers.DocumentBuilderFactory</name>\n        <value>org.apache.xerces.jaxp.DocumentBuilderFactoryImpl</value>\n        <description>\n            Oozie will set the javax.xml.parsers.DocumentBuilderFactory Java System Property to this value.  
This helps speed up\n            XML handling because the JVM doesn't have to search for the proper class every time.  An empty or whitespace value\n            skips setting the System Property.  The default implementation that Oozie uses is Xerces.\n            Most users should not have to change this.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.graphviz.timeout.seconds</name>\n        <value>60</value>\n        <description>\n            The default number of seconds Graphviz graph generation will timeout.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.default.vcores</name>\n        <value>1</value>\n        <description>\n            The default number of vcores that are allocated for the Launcher AMs\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.default.memory.mb</name>\n        <value>2048</value>\n        <description>\n            The default amount of memory in MBs that is allocated for the Launcher AMs\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.default.priority</name>\n        <value>0</value>\n        <description>\n            The default YARN priority of the Launcher AM\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.default.queue</name>\n        <value>default</value>\n        <description>\n            The default YARN queue where the Launcher AM is placed\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.default.max.attempts</name>\n        <value>2</value>\n        <description>\n            The default YARN maximal attempt count of the Launcher AM\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override</name>\n        <value>true</value>\n        <description>\n            Whether oozie.launcher.override.* and oozie.launcher.prepend.* parameters have to be 
considered when submitting a YARN\n            LauncherAM. That is, existing MapReduce v1, MapReduce v2, or YARN parameters used in the action configuration should be\n            populated to the Application Master launcher configuration, or not. Generally, first &lt;launcher/&gt; tag specific user\n            settings, then YARN configuration settings, then MapReduce v2, and at last, MapReduce v1 properties are copied to\n            launcher configuration.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.max.attempts</name>\n        <value>mapreduce.map.maxattempts,mapred.map.max.attempts</value>\n        <description>\n            A comma separated list of MapReduce v1 and MapReduce v2 properties to override the max attempts of the MapReduce\n            Application Master. The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.memory.mb</name>\n        <value>yarn.app.mapreduce.am.resource.mb,mapreduce.map.memory.mb,mapred.job.map.memory.mb</value>\n        <description>\n            A comma separated list of MapReduce v1, MapReduce v2, and YARN properties to override the memory amount in MB of the\n            MapReduce Application Master. The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.vcores</name>\n        <value>yarn.app.mapreduce.am.resource.cpu-vcores,mapreduce.map.cpu.vcores</value>\n        <description>\n            A comma separated list of MapReduce v1, MapReduce v2, and YARN properties to override the CPU vcore count of the\n            MapReduce Application Master. 
The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.log.level</name>\n        <value>mapreduce.map.log.level,mapred.map.child.log.level</value>\n        <description>\n            A comma separated list of MapReduce v1, MapReduce v2, and YARN properties to override the logging level of the MapReduce\n            Application Master. The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.javaopts</name>\n        <value>yarn.app.mapreduce.am.command-opts,mapreduce.map.java.opts,mapred.child.java.opts</value>\n        <description>\n            A comma separated list of MapReduce v1, MapReduce v2, and YARN properties to override MapReduce Application Master JVM\n            options. The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.prepend.javaopts</name>\n        <value>yarn.app.mapreduce.am.admin-command-opts</value>\n        <description>\n            A comma separated list of YARN properties to prepend to MapReduce Application Master JVM options. The first one that is\n            found will be prepended to the list of JVM options.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.env</name>\n        <value>yarn.app.mapreduce.am.env,mapreduce.map.env,mapred.child.env</value>\n        <description>\n            A comma separated list of MapReduce v1, MapReduce v2, and YARN properties to override MapReduce Application Master\n            environment variable settings. 
The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.prepend.env</name>\n        <value>yarn.app.mapreduce.am.admin.user.env</value>\n        <description>\n            A comma separated list of YARN properties to prepend to MapReduce Application Master environment settings. The first one\n            that is found will be prepended to the list of environment settings.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.priority</name>\n        <value>mapreduce.job.priority,mapred.job.priority</value>\n        <description>\n            A comma separated list of MapReduce v1 and MapReduce v2 properties to override MapReduce Application Master job priority. The first\n            one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.queue</name>\n        <value>mapreduce.job.queuename,mapred.job.queue.name</value>\n        <description>\n            A comma separated list of MapReduce v1 and MapReduce v2 properties to override MapReduce Application Master job queue\n            name. 
The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.view.acl</name>\n        <value>mapreduce.job.acl-view-job</value>\n        <description>\n            A comma separated list of MapReduce v1 and MapReduce v2 properties to override MapReduce View ACL settings.\n            The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.launcher.override.modify.acl</name>\n        <value>mapreduce.job.acl-modify-job</value>\n        <description>\n            A comma separated list of MapReduce v1 and MapReduce v2 properties to override MapReduce Modify ACL settings.\n            The first one that is found will be used.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.needed.for.distcp</name>\n        <value>true</value>\n        <description>\n            Whether to add MapReduce jars to the DistCp action's classpath by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.needed.for.hive</name>\n        <value>true</value>\n        <description>\n            Whether to add MapReduce jars to the Hive action's classpath by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.needed.for.hive2</name>\n        <value>true</value>\n        <description>\n            Whether to add MapReduce jars to the Hive2 action's classpath by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.needed.for.java</name>\n        <value>true</value>\n        <description>\n            Whether to add MapReduce jars to the Java action's classpath by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.needed.for.map-reduce</name>\n        <value>true</value>\n        
<description>\n            Whether to add MapReduce jars to the Map-Reduce action's classpath by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.needed.for.pig</name>\n        <value>true</value>\n        <description>\n            Whether to add MapReduce jars to the Pig action's classpath by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.mapreduce.needed.for.sqoop</name>\n        <value>true</value>\n        <description>\n            Whether to add MapReduce jars to the Sqoop action's classpath by default.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.action.sqoop.shellsplitter</name>\n        <value>false</value>\n        <description>\n            Whether to use the shell splitter instead of the space-based tokenizer during sqoop command splitting.\n        </description>\n    </property>\n\n    <property>\n        <name>oozie.fluent-job-api.generated.path</name>\n        <value>/user/${user.name}/oozie-fluent-job-api-generated</value>\n        <description>\n            HDFS path to store workflow / coordinator / bundle definitions generated by the fluent-job-api artifact.\n            The XML files are first generated out of the fluent-job-api JARs submitted by the user at the command line, then stored\n            under this HDFS folder structure for later retrieval / resubmit / check.\n            Note that the submitting user needs r/w permissions under this HDFS folder.\n            Note further that this folder structure, when it does not exist, will be created.\n        </description>\n    </property>\n\n</configuration>"
  },
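The `oozie.launcher.override.*` and `oozie.launcher.prepend.env` properties above all share one resolution rule: the value is a comma separated list of candidate property keys, and the first key actually present in the configuration wins. A minimal sketch of that "first one found" lookup (class and method names here are illustrative, not part of Oozie's API):

```java
import java.util.Map;
import java.util.Optional;

// Sketch of the fallback-list resolution described by the
// oozie.launcher.override.* descriptions: scan the comma separated
// candidate keys in order and use the first one present in the config.
public class FirstFoundProperty {

  static Optional<String> resolve(String candidates, Map<String, String> conf) {
    for (String key : candidates.split(",")) {
      String value = conf.get(key.trim());
      if (value != null) {
        return Optional.of(value); // first match wins, later keys ignored
      }
    }
    return Optional.empty();
  }

  public static void main(String[] args) {
    // MRv2 key absent, legacy MRv1 key present -> the MRv1 value is used
    Map<String, String> conf = Map.of("mapred.job.queue.name", "etl");
    System.out.println(
        resolve("mapreduce.job.queuename,mapred.job.queue.name", conf));
  }
}
```

This is why each list orders the MapReduce v2 key before its v1 equivalent: a modern setting takes precedence, while the legacy key still works as a fallback.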
  {
    "path": "kettle-plugins/hadoop-cluster/ui/src/test/resources/unsecured/yarn-site.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--Autogenerated by Cloudera Manager-->\n<configuration>\n  <property>\n    <name>yarn.acl.enable</name>\n    <value>true</value>\n  </property>\n  <property>\n    <name>yarn.admin.acl</name>\n    <value>*</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.address</name>\n    <value>svqxbdcn6cdh514un3.pentahoqa.com:8032</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.admin.address</name>\n    <value>svqxbdcn6cdh514un3.pentahoqa.com:8033</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.scheduler.address</name>\n    <value>svqxbdcn6cdh514un3.pentahoqa.com:8030</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.resource-tracker.address</name>\n    <value>svqxbdcn6cdh514un3.pentahoqa.com:8031</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.webapp.address</name>\n    <value>svqxbdcn6cdh514un3.pentahoqa.com:8088</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.webapp.https.address</name>\n    <value>svqxbdcn6cdh514un3.pentahoqa.com:8090</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.client.thread-count</name>\n    <value>50</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.scheduler.client.thread-count</name>\n    <value>50</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.admin.client.thread-count</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.minimum-allocation-mb</name>\n    <value>1024</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.increment-allocation-mb</name>\n    <value>512</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.maximum-allocation-mb</name>\n    <value>3784</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.minimum-allocation-vcores</name>\n    <value>1</value>\n  </property>\n  <property>\n    
<name>yarn.scheduler.increment-allocation-vcores</name>\n    <value>1</value>\n  </property>\n  <property>\n    <name>yarn.scheduler.maximum-allocation-vcores</name>\n    <value>4</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.amliveliness-monitor.interval-ms</name>\n    <value>1000</value>\n  </property>\n  <property>\n    <name>yarn.am.liveness-monitor.expiry-interval-ms</name>\n    <value>600000</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.am.max-attempts</name>\n    <value>2</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.container.liveness-monitor.interval-ms</name>\n    <value>600000</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.nm.liveness-monitor.interval-ms</name>\n    <value>1000</value>\n  </property>\n  <property>\n    <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>\n    <value>600000</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.resource-tracker.client.thread-count</name>\n    <value>50</value>\n  </property>\n  <property>\n    <name>yarn.application.classpath</name>\n    <value>$HADOOP_CLIENT_CONF_DIR,$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.scheduler.class</name>\n    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>\n  </property>\n  <property>\n    <name>yarn.resourcemanager.max-completed-applications</name>\n    <value>10000</value>\n  </property>\n  <property>\n    <name>yarn.nodemanager.remote-app-log-dir</name>\n    <value>/tmp/logs</value>\n  </property>\n  <property>\n    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>\n    <value>logs</value>\n  </property>\n</configuration>\n"
  },
  {
    "path": "kettle-plugins/hbase/assemblies/plugin/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>hbase-assemblies</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pdi-hbase-plugin</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI HBase Plugin Distribution</name>\n\n  <properties>\n    <resources.directory>${project.basedir}/src/main/resources</resources.directory>\n    <assembly.dir>${project.build.directory}/assembly</assembly.dir>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-hbase-core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/hbase/assemblies/plugin/src/assembly/assembly.xml",
    "content": "<assembly xmlns=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3\"\n          xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n          xsi:schemaLocation=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd\">\n  <id>zip</id>\n  <formats>\n    <format>zip</format>\n  </formats>\n\n  <baseDirectory></baseDirectory>\n\n  <fileSets>\n    <fileSet>\n      <directory>${resources.directory}</directory>\n      <outputDirectory>.</outputDirectory>\n      <filtered>true</filtered>\n    </fileSet>\n\n    <!-- the staging dir -->\n    <fileSet>\n      <directory>${assembly.dir}</directory>\n      <outputDirectory>.</outputDirectory>\n    </fileSet>\n  </fileSets>\n\n  <dependencySets>\n    <dependencySet>\n      <outputDirectory>.</outputDirectory>\n      <includes>\n        <include>pentaho:pdi-hbase-core:jar</include>\n      </includes>\n      <useProjectArtifact>false</useProjectArtifact>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <outputDirectory>.</outputDirectory>\n      <useTransitiveDependencies>false</useTransitiveDependencies>\n      <useProjectArtifact>false</useProjectArtifact>\n      <includes>\n        <include>pentaho:pdi-hbase-core:jar</include>\n      </includes>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <useProjectArtifact>false</useProjectArtifact>\n      <outputDirectory>lib</outputDirectory>\n      <excludes>\n        <exclude>pentaho:pdi-hbase-core:*</exclude>\n      </excludes>\n      <includes>\n        <include>pentaho:pentaho-big-data-kettle-plugins-hbase-meta</include>\n      </includes>\n    </dependencySet>\n  </dependencySets>\n</assembly>"
  },
  {
    "path": "kettle-plugins/hbase/assemblies/plugin/src/main/resources/version.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<version branch='TRUNK'>${project.version}</version>"
  },
  {
    "path": "kettle-plugins/hbase/assemblies/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-hbase</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>hbase-assemblies</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI HBase Plugin Assemblies</name>\n\n  <modules>\n    <module>plugin</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/hbase/core/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-hbase</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pdi-hbase-core</artifactId>\n  <name>PDI Hbase Core</name>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n  </properties>\n\n  <!-- VERIFY THESE IMPORTS THAT WERE IN THE BUILD SECTION WHEN THE PLUGIN WAS OSGI. ARE THEY NEEDED?\n  <Import-Package>org.eclipse.swt*;resolution:=optional,org.pentaho.di.ui.xul*;resolution:=optional,org.pentaho.ui.xul*;resolution:=optional,org.pentaho.di.osgi,org.pentaho.di.core.plugins,org.pentaho.hadoop.shim.api.cluster,org.pentaho.di.trans.steps.named.cluster,*</Import-Package>\n  -->\n  <build>\n    <resources>\n      <resource>\n        <directory>src/main/resources</directory>\n        <filtering>false</filtering>\n      </resource>\n      <resource>\n        <directory>src/main/resources-filtered</directory>\n        <filtering>true</filtering>\n      </resource>\n    </resources>\n  </build>\n\n  <!--\n  hbase depends on \"hbase-meta\". 
Should \"hbase-meta\" be merged with \"hbase\"?\n  -->\n\n  <dependencies>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-common-ui</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-cluster</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>pentaho-hadoop-shims-common-services-api</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.di.plugins</groupId>\n      <artifactId>pentaho-metastore-locator-api</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-ui-swt</artifactId>\n      
<version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-inline</artifactId>\n      <version>${mockito-inline.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy</artifactId>\n      <version>${project.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy-core</artifactId>\n      <version>${project.version}</version>\n      <scope>compile</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-hbase-meta</artifactId>\n      <version>${project.version}</version>\n      <scope>compile</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n  
    <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.hadoop.shims</groupId>\n      <artifactId>pentaho-hadoop-shims-common-base</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/FilterDefinition.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase;\n\nimport org.pentaho.hadoop.shim.api.hbase.mapping.ColumnFilter;\nimport org.pentaho.di.core.exception.KettleValueException;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.injection.InjectionTypeConverter;\n\npublic class FilterDefinition {\n\n  @Injection( name = \"ALIAS\", group = \"FILTER\" )\n  private String alias;\n\n  @Injection( name = \"FIELD_TYPE\", group = \"FILTER\" )\n  private String fieldType;\n\n  @Injection( name = \"COMPARISON_TYPE\", group = \"FILTER\", converter = ComparisonTypeConverter.class )\n  private ColumnFilter.ComparisonType comparisonType;\n\n  @Injection( name = \"SIGNED_COMPARISON\", group = \"FILTER\" )\n  private boolean signedComparison;\n\n  @Injection( name = \"COMPARISON_VALUE\", group = \"FILTER\" )\n  private String constant;\n\n  @Injection( name = \"FORMAT\", group = \"FILTER\" )\n  private String format;\n\n  public String getAlias() {\n    return alias;\n  }\n\n  public void setAlias( String alias ) {\n    this.alias = alias;\n  }\n\n  public String getFieldType() {\n    return fieldType;\n  }\n\n  public void setFieldType( String fieldType ) {\n    this.fieldType = fieldType;\n  }\n\n  public ColumnFilter.ComparisonType getComparisonType() {\n    return comparisonType;\n  }\n\n  public void setComparisonType( ColumnFilter.ComparisonType comparisonType ) {\n    this.comparisonType = comparisonType;\n  }\n\n  public boolean isSignedComparison() {\n    return signedComparison;\n  }\n\n  public void setSignedComparison( boolean 
signedComparison ) {\n    this.signedComparison = signedComparison;\n  }\n\n  public String getConstant() {\n    return constant;\n  }\n\n  public void setConstant( String constant ) {\n    this.constant = constant;\n  }\n\n  public String getFormat() {\n    return format;\n  }\n\n  public void setFormat( String format ) {\n    this.format = format;\n  }\n\n  public static class ComparisonTypeConverter extends InjectionTypeConverter {\n    @Override\n    public ColumnFilter.ComparisonType string2enum( Class<?> enumClass, String value ) throws KettleValueException {\n      return ColumnFilter.ComparisonType.stringToOpp( value );\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/HBaseConnectionException.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hbase;\n\n/**\n * @author Tatsiana_Kasiankova\n *\n */\npublic class HBaseConnectionException extends Exception {\n\n  private static final long serialVersionUID = -6215675067801506240L;\n\n  public HBaseConnectionException( String message, Throwable cause ) {\n    super( message, cause );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/HbaseUtil.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase;\n\nimport org.pentaho.di.core.util.StringUtil;\n\npublic class HbaseUtil {\n  public static final String HBASE_NAMESPACE_DELIMITER = \":\";\n  public static final String HBASE_DEFAULT_NAMESPACE = \"default\";\n\n  private HbaseUtil() {\n  }\n\n  public static String parseNamespaceFromTableName( String tableName ) {\n    return parseNamespaceFromTableName( tableName, HBASE_DEFAULT_NAMESPACE );\n  }\n\n  public static String parseNamespaceFromTableName( String tableName, String defaultNamespaceIfNoneSpecified ) {\n    String nameSpace = null;\n    if ( tableName.contains( HBASE_NAMESPACE_DELIMITER ) ) {\n      nameSpace = tableName.substring( 0, tableName.indexOf( HBASE_NAMESPACE_DELIMITER ) ).trim();\n    }\n    if ( nameSpace == null || nameSpace.isEmpty() ) {\n      return defaultNamespaceIfNoneSpecified;\n    } else {\n      return nameSpace;\n    }\n  }\n\n  public static String parseQualifierFromTableName( String tableName ) {\n    if ( tableName.contains( HBASE_NAMESPACE_DELIMITER ) ) {\n      return tableName.substring( tableName.indexOf( HBASE_NAMESPACE_DELIMITER ) + 1 ).trim();\n    } else {\n      return tableName.trim();\n    }\n  }\n\n  /**\n   * Force the namespace on the qualifier received.  
If the qualifier already has a namespace, that namespace is replaced.\n   *\n   * @param namespace the namespace to force onto the table name\n   * @param qualifier the table name, with or without an existing namespace\n   * @return the fully qualified namespace:qualifier name\n   */\n  public static String expandTableName( String namespace, String qualifier ) {\n    if ( namespace == null || namespace.isEmpty() || qualifier == null ) {\n      throw new IllegalArgumentException( \"Namespace must have a value, qualifier must not be null\" );\n    }\n    if ( qualifier.indexOf( HBASE_NAMESPACE_DELIMITER ) > -1 ) {\n      return namespace + HBASE_NAMESPACE_DELIMITER + qualifier\n        .substring( qualifier.indexOf( HBASE_NAMESPACE_DELIMITER ) + 1 );\n    }\n    return namespace + HBASE_NAMESPACE_DELIMITER + qualifier;\n  }\n\n  /**\n   * Returns a fully qualified table name.  If the incoming name has a namespace it will honor it, otherwise it will\n   * use the default namespace.\n   *\n   * @param qualifier the table name, optionally qualified\n   * @return namespace:qualifier\n   */\n  public static String expandTableName( String qualifier ) {\n    if ( qualifier == null ) {\n      return HBASE_DEFAULT_NAMESPACE + HBASE_NAMESPACE_DELIMITER;\n    }\n    int pos = qualifier.indexOf( HBASE_NAMESPACE_DELIMITER );\n    if ( pos > 0 ) {\n      return qualifier;\n    }\n    if ( pos == 0 ) {\n      return HBASE_DEFAULT_NAMESPACE + qualifier;\n    }\n    return HBASE_DEFAULT_NAMESPACE + HBASE_NAMESPACE_DELIMITER + qualifier;\n  }\n\n  public static String expandLegacyTableNameOnLoad( String qualifier ) {\n    if ( qualifier == null ) {\n      return expandTableName( \"\" );\n    }\n    int pos = Math.min( positionOfString( qualifier, StringUtil.UNIX_OPEN ),\n      positionOfString( qualifier, StringUtil.WINDOWS_OPEN ) );\n    if ( pos == qualifier.length() ) {\n      // No variables in qualifier\n      return expandTableName( qualifier );\n    }\n\n    int delimPos = qualifier.indexOf( HBASE_NAMESPACE_DELIMITER );\n    if ( delimPos > -1 && delimPos < pos ) {\n      // hard delimiter exists before the variables, so ok to parse\n      return expandTableName( 
qualifier );\n    }\n    // variable could be the namespace, or not, we can't tell without substitution\n    return qualifier;\n  }\n\n  private static int positionOfString( String target, String search ) {\n    int pos = target.indexOf( search );\n    if ( pos == -1 ) {\n      return target.length();\n    }\n    return pos;\n  }\n}\n"
  },
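The namespace handling in `HbaseUtil` above can be stated compactly: a table name is `namespace:qualifier`, a missing or empty namespace falls back to `default`, and an explicit namespace is preserved. A minimal self-contained restatement of that logic (the class name here is illustrative; the real implementation lives in `org.pentaho.big.data.kettle.plugins.hbase.HbaseUtil`):

```java
// Standalone sketch of HbaseUtil's "ns:table" parsing and expansion rules.
public class HbaseUtilSketch {
  static final String DELIM = ":";
  static final String DEFAULT_NS = "default";

  // "wiki:pages" -> "wiki"; "pages" or ":pages" -> "default"
  static String parseNamespace(String tableName) {
    int pos = tableName.indexOf(DELIM);
    String ns = pos >= 0 ? tableName.substring(0, pos).trim() : null;
    return (ns == null || ns.isEmpty()) ? DEFAULT_NS : ns;
  }

  // "wiki:pages" -> "pages"; "pages" -> "pages"
  static String parseQualifier(String tableName) {
    int pos = tableName.indexOf(DELIM);
    return pos >= 0 ? tableName.substring(pos + 1).trim() : tableName.trim();
  }

  // Qualify a bare table name with the default namespace; keep explicit ones.
  static String expandTableName(String qualifier) {
    if (qualifier == null) {
      return DEFAULT_NS + DELIM;
    }
    int pos = qualifier.indexOf(DELIM);
    if (pos > 0) {
      return qualifier;                     // already "ns:table"
    }
    if (pos == 0) {
      return DEFAULT_NS + qualifier;        // ":table" -> "default:table"
    }
    return DEFAULT_NS + DELIM + qualifier;  // bare name
  }

  public static void main(String[] args) {
    System.out.println(parseNamespace("wiki:pages")); // wiki
    System.out.println(expandTableName("pages"));     // default:pages
  }
}
```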
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/MappingDefinition.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase;\n\nimport java.util.List;\n\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.injection.InjectionDeep;\n\npublic class MappingDefinition {\n\n  @Injection( name = \"TABLE_NAME\", group = \"MAPPING\" )\n  private String tableName;\n\n  @Injection( name = \"MAPPING_NAME\", group = \"MAPPING\" )\n  private String mappingName;\n\n  @InjectionDeep\n  private List<MappingColumn> mappingColumns;\n\n  public String getTableName() {\n    return tableName;\n  }\n\n  public void setTableName( String tableName ) {\n    this.tableName = tableName;\n  }\n\n  public String getMappingName() {\n    return mappingName;\n  }\n\n  public void setMappingName( String mappingName ) {\n    this.mappingName = mappingName;\n  }\n\n  public List<MappingColumn> getMappingColumns() {\n    return mappingColumns;\n  }\n\n  public void setMappingColumns( List<MappingColumn> mappingColumns ) {\n    this.mappingColumns = mappingColumns;\n  }\n\n  public static class MappingColumn {\n\n    @Injection( name = \"MAPPING_ALIAS\", group = \"MAPPING\" )\n    private String alias;\n\n    @Injection( name = \"MAPPING_KEY\", group = \"MAPPING\" )\n    private boolean key;\n\n    @Injection( name = \"MAPPING_COLUMN_FAMILY\", group = \"MAPPING\" )\n    private String columnFamily;\n\n    @Injection( name = \"MAPPING_COLUMN_NAME\", group = \"MAPPING\" )\n    private String columnName;\n\n    @Injection( name = \"MAPPING_TYPE\", group = \"MAPPING\" )\n    private String type;\n\n    @Injection( name = 
\"MAPPING_INDEXED_VALUES\", group = \"MAPPING\" )\n    private String indexedValues;\n\n    public String getAlias() {\n      return alias;\n    }\n\n    public void setAlias( String alias ) {\n      this.alias = alias;\n    }\n\n    public boolean isKey() {\n      return key;\n    }\n\n    public void setKey( boolean key ) {\n      this.key = key;\n    }\n\n    public String getColumnFamily() {\n      return columnFamily;\n    }\n\n    public void setColumnFamily( String columnFamily ) {\n      this.columnFamily = columnFamily;\n    }\n\n    public String getColumnName() {\n      return columnName;\n    }\n\n    public void setColumnName( String columnName ) {\n      this.columnName = columnName;\n    }\n\n    public String getType() {\n      return type;\n    }\n\n    public void setType( String type ) {\n      this.type = type;\n    }\n\n    public String getIndexedValues() {\n      return indexedValues;\n    }\n\n    public void setIndexedValues( String indexedValues ) {\n      this.indexedValues = indexedValues;\n    }\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/NamedClusterLoadSaveUtil.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.w3c.dom.Node;\n\n/**\n * Created by bryan on 1/19/16.\n */\npublic class NamedClusterLoadSaveUtil {\n  public static final String CLUSTER_NAME = \"cluster_name\";\n  public static final String ZOOKEEPER_HOSTS = \"zookeeper_hosts\";\n  public static final String ZOOKEEPER_PORT = \"zookeeper_port\";\n\n  public NamedCluster loadClusterConfig( NamedClusterService namedClusterService, ObjectId id_jobentry, Repository rep,\n                                         IMetaStore metaStore, Node entrynode,\n                                         LogChannelInterface logChannelInterface ) {\n    // load from system first, then fall back to copy stored with job (AbstractMeta)\n    NamedCluster nc = null;\n    String clusterName = null;\n    try {\n      // attempt to load from named cluster\n      if ( entrynode != null ) {\n        clusterName = XMLHandler.getTagValue( entrynode, CLUSTER_NAME ); //$NON-NLS-1$\n      } else if ( rep != null ) {\n 
       clusterName = rep.getJobEntryAttributeString( id_jobentry, CLUSTER_NAME ); //$NON-NLS-1$ //$NON-NLS-2$\n      }\n\n      if ( !StringUtils.isEmpty( clusterName ) ) {\n        nc = namedClusterService.getNamedClusterByName( clusterName, metaStore );\n      }\n\n      if ( nc != null ) {\n        return nc;\n      }\n    } catch ( Throwable t ) {\n      logChannelInterface.logDebug( t.getMessage(), t );\n    }\n\n    nc = namedClusterService.getClusterTemplate();\n    if ( !StringUtils.isEmpty( clusterName ) ) {\n      nc.setName( clusterName );\n    }\n    if ( entrynode != null ) {\n      // load default values for cluster & legacy fallback\n      nc.setZooKeeperHost( XMLHandler.getTagValue( entrynode, ZOOKEEPER_HOSTS ) ); //$NON-NLS-1$\n      nc.setZooKeeperPort( XMLHandler.getTagValue( entrynode, ZOOKEEPER_PORT ) ); //$NON-NLS-1$\n    } else if ( rep != null ) {\n      // load default values for cluster & legacy fallback\n      try {\n        nc.setZooKeeperHost( rep.getJobEntryAttributeString( id_jobentry, ZOOKEEPER_HOSTS ) );\n        nc.setZooKeeperPort( rep.getJobEntryAttributeString( id_jobentry, ZOOKEEPER_PORT ) ); //$NON-NLS-1$\n      } catch ( KettleException ke ) {\n        logChannelInterface.logError( ke.getMessage(), ke );\n      }\n    }\n    return nc;\n  }\n\n  public void getXml( StringBuilder retval, NamedClusterService namedClusterService, NamedCluster namedCluster,\n                      IMetaStore metaStore, LogChannelInterface logChannelInterface ) {\n    String namedClusterName = namedCluster.getName();\n    String m_zookeeperHosts = namedCluster.getZooKeeperHost();\n    String m_zookeeperPort = namedCluster.getZooKeeperPort();\n\n    if ( !StringUtils.isEmpty( namedClusterName ) ) {\n      retval.append( \"\\n    \" )\n        .append( XMLHandler.addTagValue( CLUSTER_NAME, namedClusterName ) ); //$NON-NLS-1$ //$NON-NLS-2$\n      try {\n        if ( metaStore != null && namedClusterService.contains( namedClusterName, metaStore ) ) {\n 
         // pull config from NamedCluster\n          NamedCluster nc = namedClusterService.read( namedClusterName, metaStore );\n          if ( nc != null ) {\n            m_zookeeperHosts = nc.getZooKeeperHost();\n            m_zookeeperPort = nc.getZooKeeperPort();\n          }\n        }\n      } catch ( MetaStoreException e ) {\n        logChannelInterface.logDebug( e.getMessage(), e );\n      }\n    }\n\n    if ( !Utils.isEmpty( m_zookeeperHosts ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( ZOOKEEPER_HOSTS, m_zookeeperHosts ) );\n    }\n    if ( !Utils.isEmpty( m_zookeeperPort ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( ZOOKEEPER_PORT, m_zookeeperPort ) );\n    }\n  }\n\n  public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_transformation, ObjectId id_step,\n                       NamedClusterService namedClusterService, NamedCluster namedCluster,\n                       LogChannelInterface logChannelInterface )\n    throws KettleException {\n    String namedClusterName = namedCluster.getName();\n    String m_zookeeperHosts = namedCluster.getZooKeeperHost();\n    String m_zookeeperPort = namedCluster.getZooKeeperPort();\n\n    if ( !StringUtils.isEmpty( namedClusterName ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, CLUSTER_NAME, namedClusterName ); //$NON-NLS-1$\n      try {\n        if ( namedClusterService.contains( namedClusterName, metaStore ) ) {\n          // pull config from NamedCluster\n          NamedCluster nc = namedClusterService.read( namedClusterName, metaStore );\n          m_zookeeperHosts = nc.getZooKeeperHost();\n          m_zookeeperPort = nc.getZooKeeperPort();\n        }\n      } catch ( MetaStoreException e ) {\n        logChannelInterface.logDebug( e.getMessage(), e );\n      }\n    }\n\n    if ( !Utils.isEmpty( m_zookeeperHosts ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, ZOOKEEPER_HOSTS, m_zookeeperHosts );\n    }\n    
if ( !Utils.isEmpty( m_zookeeperPort ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, ZOOKEEPER_PORT, m_zookeeperPort );\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/ServiceStatus.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase;\n\n\n/**\n * Helper class that shows HBaseService status of a Step.\n */\npublic class ServiceStatus {\n\n  public static ServiceStatus OK = new ServiceStatus();\n\n  private boolean ok = true;\n  private Exception exception;\n\n  private ServiceStatus() {\n  }\n\n  private ServiceStatus( Exception exception ) {\n    this.ok = false;\n    this.exception = exception;\n  }\n\n  public boolean isOk() {\n    return ok;\n  }\n\n  public Exception getException() {\n    return exception;\n  }\n\n  public static ServiceStatus notOk( Exception e ) {\n    return new ServiceStatus( e );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/input/HBaseInput.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.input;\n\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Map;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.HBaseRowToKettleTuple;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.MappingAdmin;\nimport org.pentaho.hadoop.shim.api.hbase.ByteConversionUtil;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseConnection;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\nimport org.pentaho.hadoop.shim.api.hbase.Result;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterfaceFactory;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBaseTable;\nimport org.pentaho.hadoop.shim.api.hbase.table.ResultScanner;\nimport org.pentaho.hadoop.shim.api.hbase.table.ResultScannerBuilder;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStep;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport 
org.pentaho.di.trans.step.StepMetaInterface;\n\n/**\n * Class providing an input step for reading data from an HBase table according to meta data mapping info stored in a\n * separate HBase table called \"pentaho_mappings\". See org.pentaho.hbase.mapping.Mapping for details on the meta data\n * format.\n * \n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n */\npublic class HBaseInput extends BaseStep implements StepInterface {\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n\n  protected HBaseInputMeta m_meta;\n  protected HBaseInputData m_data;\n  private HBaseService hBaseService;\n  private HBaseTable m_hbAdminTable;\n  private ResultScanner resultScanner;\n  private HBaseValueMetaInterfaceFactory hBaseValueMetaInterfaceFactory;\n\n  public HBaseInput( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta,\n                     Trans trans, NamedClusterServiceLocator namedClusterServiceLocator ) {\n    super( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n  }\n\n  /** Connection/admin object for interacting with HBase */\n  protected HBaseConnection m_hbAdmin;\n\n  /** Byte utilities */\n  protected ByteConversionUtil m_bytesUtil;\n\n  /** The mapping admin object for interacting with mapping information */\n  protected MappingAdmin m_mappingAdmin;\n\n  /** The mapping information to use in order to decode HBase column values */\n  protected Mapping m_tableMapping;\n\n  /** Information from the mapping */\n  protected Map<String, HBaseValueMetaInterface> m_columnsMappedByAlias;\n\n  /** User-selected columns from the mapping (null indicates output all columns) */\n  protected List<HBaseValueMetaInterface> m_userOutputColumns;\n\n  /**\n   * Used when decoding columns to <key, family, column, value, time stamp> tuples\n   */\n  protected HBaseRowToKettleTuple m_tupleHandler;\n\n  @Override\n  public boolean processRow( 
StepMetaInterface smi, StepDataInterface sdi ) throws KettleException {\n\n    if ( first ) {\n      first = false;\n      m_meta = (HBaseInputMeta) smi;\n      m_data = (HBaseInputData) sdi;\n\n      // Get the connection to HBase\n      try {\n        List<String> connectionMessages = new ArrayList<String>();\n        hBaseService = namedClusterServiceLocator.getService( m_meta.getNamedCluster(), HBaseService.class );\n        m_hbAdmin = hBaseService.getHBaseConnection( this, environmentSubstitute( m_meta.getCoreConfigURL() ),\n          environmentSubstitute( m_meta.getDefaultConfigURL() ), log );\n        m_bytesUtil = hBaseService.getByteConversionUtil();\n        hBaseValueMetaInterfaceFactory = hBaseService.getHBaseValueMetaInterfaceFactory();\n\n        if ( connectionMessages.size() > 0 ) {\n          for ( String m : connectionMessages ) {\n            logBasic( m );\n          }\n        }\n      } catch ( Exception ex ) {\n        throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n            \"HBaseInput.Error.UnableToObtainConnection\" ), ex );\n      }\n      try {\n        m_mappingAdmin = new MappingAdmin( m_hbAdmin );\n      } catch ( Exception ex ) {\n        throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n            \"HBaseInput.Error.UnableToCreateAMappingAdminConnection\" ), ex );\n      }\n\n      // check on the existence and readiness of the target table\n      String sourceName = environmentSubstitute( m_meta.getSourceTableName() );\n      if ( StringUtil.isEmpty( sourceName ) ) {\n        throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG, \"HBaseInput.TableName.Missing\" ) );\n      }\n      HBaseTable hBaseTable;\n      try {\n        hBaseTable = m_hbAdmin.getTable( sourceName );\n      } catch ( IOException e ) {\n        throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG, \"HBaseInput.Error.CantGetTable\", sourceName ), e );\n      }\n      try {\n   
     if ( !hBaseTable.exists() ) {\n          throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n              \"HBaseInput.Error.SourceTableDoesNotExist\", sourceName ) );\n        }\n\n        if ( hBaseTable.disabled() || !hBaseTable.available() ) {\n          throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n              \"HBaseInput.Error.SourceTableIsNotAvailable\", sourceName ) );\n        }\n      } catch ( Exception ex ) {\n        throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n            \"HBaseInput.Error.AvailabilityReadinessProblem\", sourceName ), ex );\n      }\n\n      if ( m_meta.getMapping() != null && Const.isEmpty( m_meta.getSourceMappingName() ) ) {\n        // use embedded mapping\n        m_tableMapping = m_meta.getMapping();\n      } else {\n        // Otherwise get mapping details for the source table from HBase\n        if ( Const.isEmpty( m_meta.getSourceMappingName() ) ) {\n          throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG, \"HBaseInput.Error.NoMappingName\" ) );\n        }\n        try {\n          m_tableMapping =\n              m_mappingAdmin.getMapping( environmentSubstitute( m_meta.getSourceTableName() ),\n                  environmentSubstitute( m_meta.getSourceMappingName() ) );\n        } catch ( Exception ex ) {\n          throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n              \"HBaseInput.Error.UnableToRetrieveMapping\", environmentSubstitute( m_meta.getSourceMappingName() ),\n              environmentSubstitute( m_meta.getSourceTableName() ) ), ex );\n        }\n      }\n      HBaseValueMetaInterface vm2 = hBaseValueMetaInterfaceFactory\n        .createHBaseValueMetaInterface( null, null, m_tableMapping.getKeyName(),\n          getKettleTypeByKeyType( m_tableMapping.getKeyType() ), -1, -1 );\n      vm2.setKey( true );\n      try {\n        m_tableMapping.addMappedColumn( vm2, 
m_tableMapping.isTupleMapping() );\n      } catch ( Exception exception ) {\n        exception.printStackTrace();\n      }\n      m_columnsMappedByAlias = m_tableMapping.getMappedColumns();\n\n      if ( m_tableMapping.isTupleMapping() ) {\n        m_tupleHandler = new HBaseRowToKettleTuple( m_bytesUtil );\n      }\n\n      // conversion mask to use for user-specified key values in a range scan.\n      // This can come from user-specified field information OR it can be\n      // provided in the keyStart/keyStop values by suffixing the value with\n      // \"@conversionMask\"\n      String dateOrNumberConversionMaskForKey = null;\n\n      // if there are any user-chosen output fields in the meta data then\n      // check them against the table mapping. All selected fields must be present\n      // in the mapping\n      m_userOutputColumns = m_meta.getOutputFields();\n      if ( m_userOutputColumns != null && m_userOutputColumns.size() > 0 ) {\n        for ( HBaseValueMetaInterface vm : m_userOutputColumns ) {\n          if ( !vm.isKey() ) {\n            if ( m_columnsMappedByAlias.get( vm.getAlias() ) == null ) {\n              throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n                \"HBaseInput.Error.UnableToFindUserSelectedColumn\", vm.getAlias(), m_tableMapping.getFriendlyName() ) );\n            }\n          } else {\n            dateOrNumberConversionMaskForKey = vm.getConversionMask();\n          }\n        }\n      }\n\n      try {\n        m_hbAdminTable = m_hbAdmin.getTable( sourceName );\n      } catch ( Exception ex ) {\n        throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n            \"HBaseInput.Error.UnableToSetSourceTableForScan\" ), ex );\n      }\n\n      ResultScannerBuilder scannerBuilder = m_hbAdminTable\n        .createScannerBuilder( m_tableMapping, dateOrNumberConversionMaskForKey, m_meta.getKeyStartValue(),\n          m_meta.getKeyStopValue(), m_meta.getScannerCacheSize(), log, this );\n\n   
   // LIMIT THE SCAN TO JUST THE COLUMNS IN THE MAPPING\n      // User-selected output columns?\n      if ( m_userOutputColumns != null && m_userOutputColumns.size() > 0 && !m_tableMapping.isTupleMapping() ) {\n        HBaseInputData.setScanColumns( scannerBuilder, m_userOutputColumns, m_tableMapping );\n      }\n\n      // set any filters\n      if ( m_meta.getColumnFilters() != null && m_meta.getColumnFilters().size() > 0 ) {\n        HBaseInputData.setScanFilters( scannerBuilder, m_meta.getColumnFilters(), m_meta.getMatchAnyFilter(),\n          m_columnsMappedByAlias, this );\n      }\n\n      if ( !isStopped() ) {\n        try {\n          resultScanner = scannerBuilder.build();\n        } catch ( Exception e ) {\n          throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n              \"HBaseInput.Error.UnableToExecuteSourceTableScan\" ), e );\n        }\n\n        // set up the output fields (using the mapping)\n        m_data.setOutputRowMeta( new RowMeta() );\n        m_meta.getFields( getTransMeta().getBowl(), m_data.getOutputRowMeta(), getStepname(), null, null, this,\n          repository, metaStore );\n      }\n    }\n\n    Result next = null;\n    if ( !isStopped() ) {\n      try {\n        next = resultScanner.next();\n      } catch ( Exception e ) {\n        throw new KettleException( e.getMessage(), e );\n      }\n    }\n\n    if ( next == null ) {\n      try {\n        m_hbAdminTable.close();\n        m_hbAdmin.close();\n      } catch ( Exception e ) {\n        throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n            \"HBaseInput.Error.ProblemClosingConnection\", e.getMessage() ), e );\n      }\n      setOutputDone();\n      return false;\n    }\n\n    if ( m_tableMapping.isTupleMapping() ) {\n      List<Object[]> tupleRows =\n          HBaseInputData.getTupleOutputRows( hBaseService, next, m_userOutputColumns, m_columnsMappedByAlias, m_tableMapping,\n              m_tupleHandler, 
m_data.getOutputRowMeta() );\n\n      for ( Object[] tuple : tupleRows ) {\n        putRow( m_data.getOutputRowMeta(), tuple );\n      }\n      return true;\n    } else {\n      Object[] outRowData =\n          HBaseInputData.getOutputRow( next, m_userOutputColumns, m_columnsMappedByAlias, m_tableMapping, m_data\n              .getOutputRowMeta() );\n      putRow( m_data.getOutputRowMeta(), outRowData );\n      return true;\n    }\n  }\n\n  @Override\n  public boolean init( StepMetaInterface smi, StepDataInterface sdi ) {\n    if ( super.init( smi, sdi ) ) {\n      HBaseInputMeta meta = (HBaseInputMeta) smi;\n      try {\n        // Set the embedded NamedCluster MetaStore provider key so that it can be passed to VFS\n        if ( getTransMeta().getNamedClusterEmbedManager() != null ) {\n          getTransMeta().getNamedClusterEmbedManager().passEmbeddedMetastoreKey( getTransMeta(),\n            getTransMeta().getEmbeddedMetastoreProviderKey() );\n          meta.applyInjection( this );\n        }\n        return true;\n\n      } catch ( KettleException e ) {\n        logError( \"Error while injecting properties\", e );\n      }\n    }\n    return false;\n  }\n\n  public static int getKettleTypeByKeyType( Mapping.KeyType keyType ) {\n    if ( keyType == null ) {\n      return ValueMetaInterface.TYPE_NONE;\n    }\n    switch ( keyType ) {\n      case BINARY:\n        return ValueMetaInterface.TYPE_BINARY;\n      case STRING:\n        return ValueMetaInterface.TYPE_STRING;\n      case UNSIGNED_LONG:\n      case UNSIGNED_INTEGER:\n      case LONG:\n      case INTEGER:\n        return ValueMetaInterface.TYPE_NUMBER;\n      case UNSIGNED_DATE:\n      case DATE:\n        return ValueMetaInterface.TYPE_DATE;\n      default:\n        return ValueMetaInterface.TYPE_NONE;\n    }\n  }\n\n  /*\n   * (non-Javadoc)\n   * \n   * @see org.pentaho.di.trans.step.BaseStep#setStopped(boolean)\n   */\n  @Override\n  public void setStopped( boolean stopped ) {\n    if ( isStopped() && stopped 
) {\n      return;\n    }\n    super.setStopped( stopped );\n\n    if ( stopped && m_hbAdmin != null ) {\n      logBasic( BaseMessages.getString( HBaseInputMeta.PKG, \"HBaseInput.ClosingConnection\" ) );\n      try {\n        m_hbAdmin.close();\n      } catch ( IOException ex ) {\n        logError( BaseMessages.getString( HBaseInputMeta.PKG, \"HBaseInput.Error.ProblemClosingConnection1\", ex ) );\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/input/HBaseInputData.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.input;\n\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.HBaseRowToKettleTuple;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.ColumnFilter;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.hbase.Result;\nimport org.pentaho.hadoop.shim.api.hbase.table.ResultScannerBuilder;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowDataUtil;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.step.BaseStepData;\nimport org.pentaho.di.trans.step.StepDataInterface;\n\nimport java.net.MalformedURLException;\nimport java.net.URL;\nimport java.util.Collection;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Set;\n\n/**\n * Class providing an input step for reading data from an HBase table according to meta data mapping info stored in a\n * separate HBase table called \"pentaho_mappings\". 
See org.pentaho.hbase.mapping.Mapping for details on the meta data\n * format.\n * \n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n * @version $Revision$\n * \n */\npublic class HBaseInputData extends BaseStepData implements StepDataInterface {\n\n  /** The output data format */\n  protected RowMetaInterface m_outputRowMeta;\n\n  /**\n   * Get the output row format\n   * \n   * @return the output row format\n   */\n  public RowMetaInterface getOutputRowMeta() {\n    return m_outputRowMeta;\n  }\n\n  /**\n   * Set the output row format\n   * \n   * @param rmi\n   *          the output row format\n   */\n  public void setOutputRowMeta( RowMetaInterface rmi ) {\n    m_outputRowMeta = rmi;\n  }\n\n  /**\n   * Utility method to convert a string to a URL object.\n   * \n   * @param pathOrURL\n   *          file or http URL as a string\n   * @return a URL\n   * @throws MalformedURLException\n   *           if there is a problem with the URL.\n   */\n  public static URL stringToURL( String pathOrURL ) throws MalformedURLException {\n    URL result = null;\n\n    if ( !Const.isEmpty( pathOrURL ) ) {\n      if ( pathOrURL.toLowerCase().startsWith( \"http://\" ) || pathOrURL.toLowerCase().startsWith( \"file://\" ) ) {\n        result = new URL( pathOrURL );\n      } else {\n        String c = \"file://\" + pathOrURL;\n        result = new URL( c );\n      }\n    }\n\n    return result;\n  }\n\n  /**\n   * Set the specific columns to be returned by the scan.\n   * \n   * @param resultScannerBuilder\n   *          the resultScannerBuilder\n   * @param limitCols\n   *          the columns to limit the scan to\n   * @param tableMapping\n   *          the mapping information\n   * @throws KettleException\n   *           if a problem occurs\n   */\n  public static void setScanColumns( ResultScannerBuilder resultScannerBuilder, List<HBaseValueMetaInterface> limitCols, Mapping tableMapping )\n    throws KettleException {\n    for ( HBaseValueMetaInterface currentCol : limitCols 
) {\n      if ( !currentCol.isKey() ) {\n        String colFamilyName = currentCol.getColumnFamily();\n        String qualifier = currentCol.getColumnName();\n\n        boolean binaryColName = false;\n        if ( qualifier.startsWith( \"@@@binary@@@\" ) ) {\n          qualifier = qualifier.replace( \"@@@binary@@@\", \"\" );\n          binaryColName = true;\n        }\n\n        try {\n          resultScannerBuilder.addColumnToScan( colFamilyName, qualifier, binaryColName );\n        } catch ( Exception ex ) {\n          throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n              \"HBaseInput.Error.UnableToAddColumnToScan\" ), ex );\n        }\n      }\n    }\n  }\n\n  /**\n   * Set column filters to apply server-side to the scan results.\n   * \n   * @param resultScannerBuilder\n   *          the resultScannerBuilder\n   * @param columnFilters\n   *          the column filters to apply\n   * @param matchAnyFilter\n   *          if true then a row will be returned if any of the filters match (otherwise all have to match)\n   * @param columnsMappedByAlias\n   *          the columns defined in the mapping\n   * @param vars\n   *          variables to use\n   * @throws KettleException\n   *           if a problem occurs\n   */\n  public static void setScanFilters( ResultScannerBuilder resultScannerBuilder, Collection<ColumnFilter> columnFilters,\n                                     boolean matchAnyFilter, Map<String, HBaseValueMetaInterface> columnsMappedByAlias, VariableSpace vars )\n    throws KettleException {\n\n    for ( ColumnFilter cf : columnFilters ) {\n      String fieldAliasS = vars.environmentSubstitute( cf.getFieldAlias() );\n      HBaseValueMetaInterface mappedCol = columnsMappedByAlias.get( fieldAliasS );\n      if ( mappedCol == null ) {\n        throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n            \"HBaseInput.Error.ColumnFilterIsNotInTheMapping\", fieldAliasS ) );\n      }\n\n      // check 
the type (if set in the ColumnFilter) against the type\n      // of this field in the mapping\n      String fieldTypeS = vars.environmentSubstitute( cf.getFieldType() );\n      if ( !Const.isEmpty( fieldTypeS ) ) {\n        if ( !mappedCol.getHBaseTypeDesc().equalsIgnoreCase( fieldTypeS ) ) {\n          throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG, \"HBaseInput.Error.FieldTypeMismatch\",\n              fieldTypeS, fieldAliasS, mappedCol.getHBaseTypeDesc() ) );\n        }\n      }\n\n      try {\n        resultScannerBuilder.addColumnFilterToScan( cf, mappedCol, vars, matchAnyFilter );\n      } catch ( Exception ex ) {\n        throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n            \"HBaseInput.Error.UnableToAddColumnFilterToScan\" ), ex );\n      }\n    }\n  }\n\n  /**\n   * Convert/decode the current hbase row into a list of \"tuple\" kettle rows\n   * \n   * @param hBaseService\n   *          the hBaseService\n   * @param result\n   *          the result to use\n   * @param userOutputColumns\n   *          user-specified subset of columns (if any) from the mapping\n   * @param columnsMappedByAlias\n   *          columns in the mapping keyed by alias\n   * @param tableMapping\n   *          the mapping to use\n   * @param tupleHandler\n   *          the HBaseRowToKettleTuple to delegate to\n   * @param outputRowMeta\n   *          the outgoing row meta\n   * @return a list of kettle rows\n   * @throws KettleException\n   *           if a problem occurs\n   */\n  public static List<Object[]> getTupleOutputRows( HBaseService hBaseService, Result result, List<HBaseValueMetaInterface> userOutputColumns,\n                                                   Map<String, HBaseValueMetaInterface> columnsMappedByAlias, Mapping tableMapping, HBaseRowToKettleTuple tupleHandler,\n                                                   RowMetaInterface outputRowMeta ) throws KettleException {\n\n    if ( userOutputColumns != null 
&& userOutputColumns.size() > 0 ) {\n      return tupleHandler.hbaseRowToKettleTupleMode( result, tableMapping, userOutputColumns, outputRowMeta );\n    } else {\n      return tupleHandler.hbaseRowToKettleTupleMode( hBaseService.getHBaseValueMetaInterfaceFactory(), result, tableMapping, columnsMappedByAlias, outputRowMeta );\n    }\n  }\n\n  /**\n   * Convert/decode the current hbase row into a kettle row\n   * \n   * @param result\n   *          the result to use\n   * @param userOutputColumns\n   *          user-specified subset of columns (if any) from the mapping\n   * @param columnsMappedByAlias\n   *          columns in the mapping keyed by alias\n   * @param tableMapping\n   *          the mapping to use\n   * @param outputRowMeta\n   *          the outgoing row meta\n   * @return a kettle row\n   * @throws KettleException\n   *           if a problem occurs\n   */\n  public static Object[] getOutputRow( Result result, List<HBaseValueMetaInterface> userOutputColumns,\n      Map<String, HBaseValueMetaInterface> columnsMappedByAlias, Mapping tableMapping, RowMetaInterface outputRowMeta ) throws KettleException {\n\n    int size = ( userOutputColumns != null && userOutputColumns.size() > 0 ) ? 
userOutputColumns.size()\n      : tableMapping.numMappedColumns() + 1; // + 1 for the key\n\n    Object[] outputRowData = RowDataUtil.allocateRowData( size );\n\n    // User-selected output columns?\n    if ( userOutputColumns != null && userOutputColumns.size() > 0 ) {\n      for ( HBaseValueMetaInterface currentCol : userOutputColumns ) {\n        if ( currentCol.isKey() ) {\n          byte[] rawKey = null;\n          try {\n            rawKey = result.getRow();\n          } catch ( Exception e ) {\n            throw new KettleException( e );\n          }\n          Object decodedKey = tableMapping.decodeKeyValue( rawKey );\n          int keyIndex = outputRowMeta.indexOfValue( currentCol.getAlias() );\n          outputRowData[keyIndex] = decodedKey;\n        } else {\n          String colFamilyName = currentCol.getColumnFamily();\n          String qualifier = currentCol.getColumnName();\n\n          boolean binaryColName = false;\n          if ( qualifier.startsWith( \"@@@binary@@@\" ) ) {\n            qualifier = qualifier.replace( \"@@@binary@@@\", \"\" );\n            // assume hex encoded\n            binaryColName = true;\n          }\n\n          byte[] kv = null;\n          try {\n            kv = result.getValue( colFamilyName, qualifier, binaryColName );\n          } catch ( Exception e ) {\n            throw new KettleException( e );\n          }\n\n          int outputIndex = outputRowMeta.indexOfValue( currentCol.getAlias() );\n          if ( outputIndex < 0 ) {\n            throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n                \"HBaseInput.Error.ColumnNotDefinedInOutput\", currentCol.getAlias() ) );\n          }\n\n          Object decodedVal = currentCol.decodeColumnValue( ( kv == null ) ? 
null : kv );\n\n          outputRowData[outputIndex] = decodedVal;\n        }\n      }\n    } else {\n      // do the key first\n      byte[] rawKey = null;\n      try {\n        rawKey = result.getRow();\n      } catch ( Exception e ) {\n        throw new KettleException( e );\n      }\n\n      Object decodedKey = tableMapping.decodeKeyValue( rawKey );\n      int keyIndex = outputRowMeta.indexOfValue( tableMapping.getKeyName() );\n      outputRowData[keyIndex] = decodedKey;\n\n      Set<String> aliasSet = columnsMappedByAlias.keySet();\n\n      for ( String name : aliasSet ) {\n        HBaseValueMetaInterface currentCol = columnsMappedByAlias.get( name );\n        String colFamilyName = currentCol.getColumnFamily();\n        String qualifier = currentCol.getColumnName();\n        if ( currentCol.isKey() ) {\n          // skip key as it has already been processed \n          // and is not in the scan's columns \n          continue;\n        }\n\n        boolean binaryColName = false;\n        if ( qualifier.startsWith( \"@@@binary@@@\" ) ) {\n          qualifier = qualifier.replace( \"@@@binary@@@\", \"\" );\n          // assume hex encoded\n          binaryColName = true;\n        }\n\n        byte[] kv = null;\n        try {\n          kv = result.getValue( colFamilyName, qualifier, binaryColName );\n        } catch ( Exception e ) {\n          throw new KettleException( e );\n        }\n\n        int outputIndex = outputRowMeta.indexOfValue( name );\n        if ( outputIndex < 0 ) {\n          throw new KettleException( BaseMessages.getString( HBaseInputMeta.PKG,\n              \"HBaseInput.Error.ColumnNotDefinedInOutput\", name ) );\n        }\n\n        Object decodedVal = currentCol.decodeColumnValue( ( kv == null ) ? null : kv );\n\n        outputRowData[outputIndex] = decodedVal;\n      }\n    }\n\n    return outputRowData;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/input/HBaseInputDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.input;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.eclipse.jface.dialogs.MessageDialog;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.CCombo;\nimport org.eclipse.swt.custom.CTabFolder;\nimport org.eclipse.swt.custom.CTabItem;\nimport org.eclipse.swt.events.ModifyEvent;\nimport org.eclipse.swt.events.ModifyListener;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.events.ShellAdapter;\nimport org.eclipse.swt.events.ShellEvent;\nimport org.eclipse.swt.graphics.Cursor;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Event;\nimport org.eclipse.swt.widgets.FileDialog;\nimport org.eclipse.swt.widgets.Group;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.TableItem;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.big.data.kettle.plugins.hbase.HbaseUtil;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport 
org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.big.data.kettle.plugins.hbase.ServiceStatus;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.ConfigurationProducer;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.MappingAdmin;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.MappingEditor;\nimport org.pentaho.big.data.plugins.common.ui.NamedClusterWidgetImpl;\nimport org.pentaho.hadoop.shim.api.hbase.ByteConversionUtil;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseConnection;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.ColumnFilter;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.ColumnFilterFactory;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterfaceFactory;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.Props;\nimport org.pentaho.di.core.row.ValueMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaBase;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDialogInterface;\nimport org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.core.widget.ColumnInfo;\nimport org.pentaho.di.ui.core.widget.ComboValuesSelectionListener;\nimport org.pentaho.di.ui.core.widget.TableView;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.trans.step.BaseStepDialog;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport 
java.util.Set;\n\n/**\n * Dialog class for HBaseInput\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n */\n@PluginDialog( id = \"HBaseInput\", image = \"HB.svg\", pluginType = PluginDialog.PluginType.JOBENTRY,\n  documentationUrl = \"Products/HBase_Input\" )\npublic class HBaseInputDialog extends BaseStepDialog implements StepDialogInterface, ConfigurationProducer {\n\n  /** various UI bits and pieces for the dialog */\n  private Label m_stepnameLabel;\n  private Text m_stepnameText;\n\n  // The tabs of the dialog\n  private CTabFolder m_wTabFolder;\n  private CTabItem m_wConfigTab;\n\n  private CTabItem m_wFilterTab;\n\n  private CTabItem m_editorTab;\n\n  NamedClusterWidgetImpl namedClusterWidget;\n\n  // Core config line\n  private Button m_coreConfigBut;\n  private TextVar m_coreConfigText;\n\n  // Default config line\n  private Button m_defaultConfigBut;\n  private TextVar m_defaultConfigText;\n\n  private final HBaseInputMeta m_currentMeta;\n  private final HBaseInputMeta m_originalMeta;\n  private final HBaseInputMeta m_configurationMeta;\n\n  // Table name line\n  private Button m_mappedTableNamesBut;\n  private CCombo m_mappedTableNamesCombo;\n\n  // Mapping name line\n  private Button m_mappingNamesBut;\n  private CCombo m_mappingNamesCombo;\n\n  /** Store the mapping information in the step's meta data */\n  private Button m_storeMappingInStepMetaData;\n\n  // Key start line\n  private TextVar m_keyStartText;\n\n  // Key stop line\n  private TextVar m_keyStopText;\n\n  // Rows to be cached by Scanner\n  private TextVar m_scanCacheText;\n\n  // Key as a column\n  // private Button m_includeKey;\n\n  // Key information\n  private String m_keyName;\n  private Mapping.KeyType m_keyType;\n  private Label m_keyInfo;\n  private Button m_getKeyInfoBut;\n\n  // Fields table widget\n  private TableView m_fieldsView;\n\n  // filters fields widget\n  private TableView m_filtersView;\n  private ColumnInfo m_filterAliasCI;\n  private Button m_matchAllBut;\n  
private Button m_matchAnyBut;\n\n  // mapping editor composite\n  private MappingEditor m_mappingEditor;\n\n  // cached copy of the mapped columns\n  private Map<String, HBaseValueMetaInterface> m_mappedColumns;\n\n  // lookup map for indexed columns\n  private Map<String, String> m_indexedLookup = new HashMap<>();\n  private final NamedClusterService namedClusterService;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n\n  public HBaseInputDialog( Shell parent, Object in, TransMeta tr, String name ) {\n\n    super( parent, (BaseStepMeta) in, tr, name );\n\n    m_currentMeta = (HBaseInputMeta) in;\n    m_originalMeta = (HBaseInputMeta) m_currentMeta.clone();\n    m_configurationMeta = (HBaseInputMeta) m_currentMeta.clone();\n    namedClusterService = m_currentMeta.getNamedClusterService();\n    runtimeTestActionService = m_currentMeta.getRuntimeTestActionService();\n    runtimeTester = m_currentMeta.getRuntimeTester();\n    namedClusterServiceLocator = m_currentMeta.getNamedClusterServiceLocator();\n  }\n\n  public String open() {\n\n    Shell parent = getParent();\n    Display display = parent.getDisplay();\n\n    shell = new Shell( parent, SWT.DIALOG_TRIM | SWT.RESIZE | SWT.MIN | SWT.MAX );\n\n    props.setLook( shell );\n    setShellImage( shell, m_currentMeta );\n\n    // used to listen to a text field (m_wStepname)\n    ModifyListener lsMod = new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_currentMeta.setChanged();\n      }\n    };\n\n    changed = m_currentMeta.hasChanged();\n\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = Const.FORM_MARGIN;\n    formLayout.marginHeight = Const.FORM_MARGIN;\n\n    shell.setLayout( formLayout );\n    shell.setText( Messages.getString( \"HBaseInputDialog.Shell.Title\" ) );\n\n    int middle = props.getMiddlePct();\n    int 
margin = Const.MARGIN;\n\n    // Stepname line\n    m_stepnameLabel = new Label( shell, SWT.RIGHT );\n    m_stepnameLabel.setText( Messages.getString( \"HBaseInputDialog.StepName.Label\" ) );\n    props.setLook( m_stepnameLabel );\n\n    FormData fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( middle, -margin );\n    fd.top = new FormAttachment( 0, margin );\n    m_stepnameLabel.setLayoutData( fd );\n    m_stepnameText = new Text( shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    m_stepnameText.setText( stepname );\n    props.setLook( m_stepnameText );\n    m_stepnameText.addModifyListener( lsMod );\n\n    // format the text field\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( 0, margin );\n    fd.right = new FormAttachment( 100, 0 );\n    m_stepnameText.setLayoutData( fd );\n\n    m_wTabFolder = new CTabFolder( shell, SWT.BORDER );\n    props.setLook( m_wTabFolder, Props.WIDGET_STYLE_TAB );\n    m_wTabFolder.setSimple( false );\n\n    // Start of the config tab\n    m_wConfigTab = new CTabItem( m_wTabFolder, SWT.NONE );\n    m_wConfigTab.setText( Messages.getString( \"HBaseInputDialog.ConfigTab.TabTitle\" ) );\n\n    Composite wConfigComp = new Composite( m_wTabFolder, SWT.NONE );\n    props.setLook( wConfigComp );\n\n    FormLayout configLayout = new FormLayout();\n    configLayout.marginWidth = 3;\n    configLayout.marginHeight = 3;\n    wConfigComp.setLayout( configLayout );\n\n    Label namedClusterLab = new Label( wConfigComp, SWT.RIGHT );\n    namedClusterLab.setText( Messages.getString( \"HBaseInputDialog.NamedCluster.Label\" ) );\n    namedClusterLab.setToolTipText( Messages.getString( \"HBaseInputDialog.NamedCluster.TipText\" ) );\n    props.setLook( namedClusterLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, 10 );\n    fd.right = new FormAttachment( middle, -margin );\n    
namedClusterLab.setLayoutData( fd );\n\n    namedClusterWidget = new NamedClusterWidgetImpl( wConfigComp, false, namedClusterService, runtimeTestActionService, runtimeTester, false );\n    namedClusterWidget.initiate();\n    props.setLook( namedClusterWidget );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    namedClusterWidget.setLayoutData( fd );\n\n    // core config line\n    Label coreConfigLab = new Label( wConfigComp, SWT.RIGHT );\n    coreConfigLab.setText( Messages.getString( \"HBaseInputDialog.CoreConfig.Label\" ) );\n    coreConfigLab.setToolTipText( Messages.getString( \"HBaseInputDialog.CoreConfig.TipText\" ) );\n    props.setLook( coreConfigLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( namedClusterWidget, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    coreConfigLab.setLayoutData( fd );\n\n    m_coreConfigBut = new Button( wConfigComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( m_coreConfigBut );\n    m_coreConfigBut.setText( Messages.getString( \"System.Button.Browse\" ) );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( namedClusterWidget, 0 );\n    m_coreConfigBut.setLayoutData( fd );\n\n    m_coreConfigBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        FileDialog dialog = new FileDialog( shell, SWT.OPEN );\n        String[] extensions = null;\n        String[] filterNames = null;\n\n        extensions = new String[ 2 ];\n        filterNames = new String[ 2 ];\n        extensions[ 0 ] = \"*.xml\";\n        filterNames[ 0 ] = Messages.getString( \"HBaseInputDialog.FileType.XML\" );\n        extensions[ 1 ] = \"*\";\n        filterNames[ 1 ] = Messages.getString( \"System.FileType.AllFiles\" );\n\n        
dialog.setFilterExtensions( extensions );\n\n        if ( dialog.open() != null ) {\n          m_coreConfigText.setText( dialog.getFilterPath() + System.getProperty( \"file.separator\" )\n            + dialog.getFileName() );\n        }\n\n      }\n    } );\n\n    m_coreConfigText = new TextVar( transMeta, wConfigComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( m_coreConfigText );\n    m_coreConfigText.addModifyListener( lsMod );\n\n    // set the tool tip to the contents with any env variables expanded\n    m_coreConfigText.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_coreConfigText.setToolTipText( transMeta.environmentSubstitute( m_coreConfigText.getText() ) );\n      }\n    } );\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( namedClusterWidget, margin );\n    fd.right = new FormAttachment( m_coreConfigBut, -margin );\n    m_coreConfigText.setLayoutData( fd );\n\n    // default config line\n    Label defaultConfigLab = new Label( wConfigComp, SWT.RIGHT );\n    defaultConfigLab.setText( Messages.getString( \"HBaseInputDialog.DefaultConfig.Label\" ) );\n    defaultConfigLab.setToolTipText( Messages.getString( \"HBaseInputDialog.DefaultConfig.TipText\" ) );\n    props.setLook( defaultConfigLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_coreConfigText, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    defaultConfigLab.setLayoutData( fd );\n\n    m_defaultConfigBut = new Button( wConfigComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( m_defaultConfigBut );\n    m_defaultConfigBut.setText( Messages.getString( \"System.Button.Browse\" ) );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( m_coreConfigText, 0 );\n    m_defaultConfigBut.setLayoutData( fd );\n\n    
m_defaultConfigBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        FileDialog dialog = new FileDialog( shell, SWT.OPEN );\n        String[] extensions = null;\n        String[] filterNames = null;\n\n        extensions = new String[ 2 ];\n        filterNames = new String[ 2 ];\n        extensions[ 0 ] = \"*.xml\";\n        filterNames[ 0 ] = Messages.getString( \"HBaseInputDialog.FileType.XML\" );\n        extensions[ 1 ] = \"*\";\n        filterNames[ 1 ] = Messages.getString( \"System.FileType.AllFiles\" );\n\n        dialog.setFilterExtensions( extensions );\n\n        if ( dialog.open() != null ) {\n          m_defaultConfigText.setText( dialog.getFilterPath() + System.getProperty( \"file.separator\" )\n            + dialog.getFileName() );\n        }\n\n      }\n    } );\n\n    m_defaultConfigText = new TextVar( transMeta, wConfigComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( m_defaultConfigText );\n    m_defaultConfigText.addModifyListener( lsMod );\n\n    // set the tool tip to the contents with any env variables expanded\n    m_defaultConfigText.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_defaultConfigText.setToolTipText( transMeta.environmentSubstitute( m_defaultConfigText.getText() ) );\n      }\n    } );\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_coreConfigText, margin );\n    fd.right = new FormAttachment( m_defaultConfigBut, -margin );\n    m_defaultConfigText.setLayoutData( fd );\n\n    // table name\n    Label tableNameLab = new Label( wConfigComp, SWT.RIGHT );\n    tableNameLab.setText( Messages.getString( \"HBaseInputDialog.TableName.Label\" ) );\n    tableNameLab.setToolTipText( Messages.getString( \"HBaseInputDialog.TableName.TipText\" ) );\n    props.setLook( tableNameLab );\n    fd = new FormData();\n    fd.left = new 
FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_defaultConfigText, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    tableNameLab.setLayoutData( fd );\n\n    m_mappedTableNamesBut = new Button( wConfigComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( m_mappedTableNamesBut );\n    m_mappedTableNamesBut.setText( Messages.getString( \"HBaseInputDialog.TableName.Button\" ) );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( m_defaultConfigText, 0 );\n    m_mappedTableNamesBut.setLayoutData( fd );\n\n    m_mappedTableNamesCombo = new CCombo( wConfigComp, SWT.BORDER );\n    props.setLook( m_mappedTableNamesCombo );\n\n    m_mappedTableNamesCombo.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_currentMeta.setChanged();\n        m_mappedTableNamesCombo.setToolTipText( transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() ) );\n      }\n    } );\n\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_defaultConfigText, margin );\n    fd.right = new FormAttachment( m_mappedTableNamesBut, -margin );\n    m_mappedTableNamesCombo.setLayoutData( fd );\n\n    m_mappedTableNamesBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        setupMappedTableNames();\n        if (  m_mappedTableNamesCombo.getItemCount() > 0 ) {\n          m_mappedTableNamesCombo.setListVisible( true );\n        }\n      }\n    } );\n\n    // mapping name\n    Label mappingNameLab = new Label( wConfigComp, SWT.RIGHT );\n    mappingNameLab.setText( Messages.getString( \"HBaseInputDialog.MappingName.Label\" ) );\n    mappingNameLab.setToolTipText( Messages.getString( \"HBaseInputDialog.MappingName.TipText\" ) );\n    props.setLook( mappingNameLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    
fd.top = new FormAttachment( m_mappedTableNamesCombo, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    mappingNameLab.setLayoutData( fd );\n\n    m_mappingNamesBut = new Button( wConfigComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( m_mappingNamesBut );\n    m_mappingNamesBut.setText( Messages.getString( \"HBaseInputDialog.MappingName.Button\" ) );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( m_mappedTableNamesCombo, 0 );\n    m_mappingNamesBut.setLayoutData( fd );\n\n    m_mappingNamesBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        setupMappingNamesForTable( false );\n        if (  m_mappingNamesCombo.getItemCount() > 0 ) {\n          m_mappingNamesCombo.setListVisible( true );\n        }\n      }\n    } );\n\n    m_mappingNamesCombo = new CCombo( wConfigComp, SWT.BORDER );\n    props.setLook( m_mappingNamesCombo );\n\n    m_mappingNamesCombo.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_currentMeta.setChanged();\n        // checkKeyInformation(true);\n\n        m_mappingNamesCombo.setToolTipText( transMeta.environmentSubstitute( m_mappingNamesCombo.getText() ) );\n        m_storeMappingInStepMetaData.setSelection( false );\n      }\n    } );\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_mappedTableNamesCombo, margin );\n    fd.right = new FormAttachment( m_mappingNamesBut, -margin );\n    m_mappingNamesCombo.setLayoutData( fd );\n\n    // store mapping in meta data\n    Label storeMapping = new Label( wConfigComp, SWT.RIGHT );\n    storeMapping.setText( BaseMessages.getString( HBaseInputMeta.PKG, \"HBaseInputDialog.StoreMapping.Label\" ) );\n    storeMapping.setToolTipText(\n      BaseMessages.getString( HBaseInputMeta.PKG, \"HBaseInputDialog.StoreMapping.TipText\" ) );\n    
props.setLook( storeMapping );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_mappingNamesCombo, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    storeMapping.setLayoutData( fd );\n\n    m_storeMappingInStepMetaData = new Button( wConfigComp, SWT.CHECK );\n    props.setLook( m_storeMappingInStepMetaData );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_mappingNamesCombo, margin );\n    m_storeMappingInStepMetaData.setLayoutData( fd );\n\n    // keystart\n    Label keyStartLab = new Label( wConfigComp, SWT.RIGHT );\n    keyStartLab.setText( Messages.getString( \"HBaseInputDialog.KeyStart.Label\" ) );\n    keyStartLab.setToolTipText( Messages.getString( \"HBaseInputDialog.KeyStart.TipText\" ) );\n    props.setLook( keyStartLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_storeMappingInStepMetaData, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    keyStartLab.setLayoutData( fd );\n\n    m_keyStartText = new TextVar( transMeta, wConfigComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    m_keyStartText.setToolTipText( Messages.getString( \"HBaseInputDialog.KeyStart.TipText\" ) );\n    m_keyStartText.addModifyListener( lsMod );\n    props.setLook( m_keyStartText );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_storeMappingInStepMetaData, margin );\n    m_keyStartText.setLayoutData( fd );\n\n    // keystop\n    Label keyStopLab = new Label( wConfigComp, SWT.RIGHT );\n    keyStopLab.setText( Messages.getString( \"HBaseInputDialog.KeyStop.Label\" ) );\n    keyStopLab.setToolTipText( Messages.getString( \"HBaseInputDialog.KeyStop.TipText\" ) );\n    props.setLook( keyStopLab );\n    fd = new 
FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_keyStartText, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    keyStopLab.setLayoutData( fd );\n\n    m_keyStopText = new TextVar( transMeta, wConfigComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    m_keyStopText.setToolTipText( Messages.getString( \"HBaseInputDialog.KeyStop.TipText\" ) );\n    m_keyStopText.addModifyListener( lsMod );\n    props.setLook( m_keyStopText );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_keyStartText, margin );\n    m_keyStopText.setLayoutData( fd );\n\n    // Scanner caching\n    Label scannerCacheLab = new Label( wConfigComp, SWT.RIGHT );\n    scannerCacheLab.setText( Messages.getString( \"HBaseInputDialog.ScannerCache.Label\" ) );\n    scannerCacheLab.setToolTipText( Messages.getString( \"HBaseInputDialog.ScannerCache.TipText\" ) );\n    props.setLook( scannerCacheLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_keyStopText, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    scannerCacheLab.setLayoutData( fd );\n\n    m_scanCacheText = new TextVar( transMeta, wConfigComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    m_scanCacheText.setToolTipText( Messages.getString( \"HBaseInputDialog.ScannerCache.TipText\" ) );\n    props.setLook( m_scanCacheText );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_keyStopText, margin );\n    m_scanCacheText.setLayoutData( fd );\n\n    m_getKeyInfoBut = new Button( wConfigComp, SWT.PUSH );\n    m_getKeyInfoBut.setText( \"Get Key/Fields Info\" );\n    props.setLook( m_getKeyInfoBut );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.bottom = new FormAttachment( 100, 
-margin * 2 );\n    m_getKeyInfoBut.setLayoutData( fd );\n    m_getKeyInfoBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        checkKeyInformation( false, true );\n      }\n    } );\n\n    Group keyGroup = new Group( wConfigComp, SWT.SHADOW_ETCHED_IN );\n    FormLayout keyLayout = new FormLayout();\n    keyGroup.setLayout( keyLayout );\n    props.setLook( keyGroup );\n\n    m_keyInfo = new Label( keyGroup, SWT.RIGHT );\n    m_keyInfo.setText( \"-- Key details --\" );\n    props.setLook( m_keyInfo );\n    fd = new FormData();\n    fd.top = new FormAttachment( 0, margin );\n    fd.left = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, -margin );\n    m_keyInfo.setLayoutData( fd );\n\n    fd = new FormData();\n    fd.right = new FormAttachment( m_getKeyInfoBut, -margin );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.bottom = new FormAttachment( 100, -margin * 2 );\n    keyGroup.setLayoutData( fd );\n\n    // fields stuff\n    ColumnInfo[] colinf =\n      new ColumnInfo[] {\n        new ColumnInfo( Messages.getString( \"HBaseInputDialog.Fields.FIELD_ALIAS\" ), ColumnInfo.COLUMN_TYPE_TEXT,\n          false ),\n        new ColumnInfo( Messages.getString( \"HBaseInputDialog.Fields.FIELD_KEY\" ), ColumnInfo.COLUMN_TYPE_TEXT,\n          false ),\n        new ColumnInfo( Messages.getString( \"HBaseInputDialog.Fields.FIELD_FAMILY\" ), ColumnInfo.COLUMN_TYPE_TEXT,\n          false ),\n        new ColumnInfo( Messages.getString( \"HBaseInputDialog.Fields.FIELD_NAME\" ), ColumnInfo.COLUMN_TYPE_TEXT,\n          false ),\n        new ColumnInfo( Messages.getString( \"HBaseInputDialog.Fields.FIELD_TYPE\" ), ColumnInfo.COLUMN_TYPE_TEXT,\n          false ),\n        new ColumnInfo( Messages.getString( \"HBaseInputDialog.Fields.FIELD_FORMAT\" ), ColumnInfo.COLUMN_TYPE_FORMAT,\n          3 ),\n        new ColumnInfo( Messages.getString( 
\"HBaseInputDialog.Fields.FIELD_INDEXED\" ), ColumnInfo.COLUMN_TYPE_TEXT,\n          false ), };\n\n    colinf[ 0 ].setReadOnly( true );\n    colinf[ 1 ].setReadOnly( true );\n    colinf[ 2 ].setReadOnly( true );\n    colinf[ 3 ].setReadOnly( true );\n    colinf[ 4 ].setReadOnly( true );\n    colinf[ 5 ].setReadOnly( true );\n\n    colinf[ 5 ].setComboValuesSelectionListener( new ComboValuesSelectionListener() {\n\n      public String[] getComboValues( TableItem tableItem, int rowNr, int colNr ) {\n        String[] comboValues = new String[] {};\n        int type = ValueMeta.getType( tableItem.getText( colNr - 1 ) );\n        switch ( type ) {\n          case ValueMetaInterface.TYPE_DATE:\n            comboValues = Const.getDateFormats();\n            break;\n          case ValueMetaInterface.TYPE_INTEGER:\n          case ValueMetaInterface.TYPE_BIGNUMBER:\n          case ValueMetaInterface.TYPE_NUMBER:\n            comboValues = Const.getNumberFormats();\n            break;\n          default:\n            break;\n        }\n        return comboValues;\n      }\n    } );\n\n    m_fieldsView = new TableView( transMeta, wConfigComp, SWT.FULL_SELECTION | SWT.MULTI, colinf, 1, lsMod, props );\n\n    fd = new FormData();\n    fd.top = new FormAttachment( m_scanCacheText, margin * 2 );\n    fd.bottom = new FormAttachment( m_getKeyInfoBut, -margin * 2 );\n    fd.left = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    m_fieldsView.setLayoutData( fd );\n\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    fd.bottom = new FormAttachment( 100, 0 );\n    wConfigComp.setLayoutData( fd );\n\n    wConfigComp.layout();\n    m_wConfigTab.setControl( wConfigComp );\n\n    // --- mapping editor tab\n    m_editorTab = new CTabItem( m_wTabFolder, SWT.NONE );\n    m_editorTab.setText( Messages.getString( 
\"HBaseInputDialog.MappingEditorTab.TabTitle\" ) );\n\n    m_mappingEditor =\n      new MappingEditor( shell, m_wTabFolder, this, null, SWT.FULL_SELECTION | SWT.MULTI, false, props, transMeta,\n        namedClusterService, runtimeTestActionService, runtimeTester, namedClusterServiceLocator );\n\n    fd = new FormData();\n    fd.top = new FormAttachment( 0, 0 );\n    fd.left = new FormAttachment( 0, 0 );\n    m_mappingEditor.setLayoutData( fd );\n\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    fd.bottom = new FormAttachment( 100, -margin * 2 );\n    fd.right = new FormAttachment( 100, 0 );\n    m_mappingEditor.setLayoutData( fd );\n\n    m_mappingEditor.layout();\n    m_editorTab.setControl( m_mappingEditor );\n\n    // ----- Start of the filter tab --------\n    m_wFilterTab = new CTabItem( m_wTabFolder, SWT.NONE );\n    m_wFilterTab.setText( Messages.getString( \"HBaseInputDialog.FilterTab.TabTitle\" ) );\n\n    Composite wFilterComp = new Composite( m_wTabFolder, SWT.NONE );\n    props.setLook( wFilterComp );\n\n    FormLayout filterLayout = new FormLayout();\n    filterLayout.marginWidth = 3;\n    filterLayout.marginHeight = 3;\n    wFilterComp.setLayout( filterLayout );\n\n    m_matchAllBut = new Button( wFilterComp, SWT.RADIO );\n    m_matchAllBut.setText( Messages.getString( \"HBaseInputDialog.Filters.RADIO_ALL\" ) );\n    props.setLook( m_matchAllBut );\n    fd = new FormData();\n    fd.top = new FormAttachment( 0, 0 );\n    fd.left = new FormAttachment( 0, 0 );\n    m_matchAllBut.setLayoutData( fd );\n    m_matchAllBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        m_currentMeta.setChanged();\n      }\n    } );\n\n    m_matchAnyBut = new Button( wFilterComp, SWT.RADIO );\n    m_matchAnyBut.setText( Messages.getString( \"HBaseInputDialog.Filters.RADIO_ANY\" ) );\n    props.setLook( m_matchAnyBut );\n    fd = 
new FormData();\n    fd.top = new FormAttachment( 0, 0 );\n    fd.left = new FormAttachment( m_matchAllBut, 0 );\n    fd.right = new FormAttachment( 100, -margin );\n    m_matchAnyBut.setLayoutData( fd );\n    m_matchAnyBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        m_currentMeta.setChanged();\n      }\n    } );\n\n    m_matchAllBut.setSelection( true );\n\n    final ColumnInfo[] colinf2 =\n      new ColumnInfo[] {\n        new ColumnInfo( Messages.getString( \"HBaseInputDialog.Filters.FIELD_ALIAS\" ) + \"     \",\n          ColumnInfo.COLUMN_TYPE_CCOMBO, false ),\n        new ColumnInfo( Messages.getString( \"HBaseInputDialog.Filters.FIELD_TYPE\" ), ColumnInfo.COLUMN_TYPE_TEXT,\n          false ),\n        new ColumnInfo( Messages.getString( \"HBaseInputDialog.Filters.FIELD_OPERATOR\" ),\n          ColumnInfo.COLUMN_TYPE_CCOMBO, false ),\n        new ColumnInfo( Messages.getString( \"HBaseInputDialog.Filters.FIELD_COMPARISON\" ),\n          ColumnInfo.COLUMN_TYPE_TEXT, false ),\n        new ColumnInfo( Messages.getString( \"HBaseInputDialog.Filters.FIELD_FORMAT\" ), ColumnInfo.COLUMN_TYPE_CCOMBO,\n          false ),\n        new ColumnInfo( Messages.getString( \"HBaseInputDialog.Filters.FIELD_SIGNED\" ), ColumnInfo.COLUMN_TYPE_CCOMBO,\n          false ), };\n\n    colinf2[ 0 ].setReadOnly( false );\n    colinf2[ 1 ].setReadOnly( false );\n    colinf2[ 2 ].setReadOnly( true );\n    colinf2[ 3 ].setReadOnly( false );\n    colinf2[ 4 ].setReadOnly( false );\n    colinf2[ 5 ].setReadOnly( true );\n\n    m_filterAliasCI = colinf2[ 0 ];\n    m_filterAliasCI.setComboValues( new String[] { \"\" } );\n    colinf2[ 2 ].setComboValues( ColumnFilter.ComparisonType.getAllOperators() );\n    colinf2[ 5 ].setComboValues( new String[] { \"Y\", \"N\" } );\n\n    colinf2[ 2 ].setComboValuesSelectionListener( new ComboValuesSelectionListener() {\n\n      public String[] getComboValues( TableItem 
tableItem, int rowNr, int colNr ) {\n        String[] comboValues = colinf2[ 2 ].getComboValues();\n\n        // try to fill in the type\n        String alias = tableItem.getText( 1 );\n        HBaseValueMetaInterface vm = null;\n        if ( !Const.isEmpty( alias ) ) {\n          vm = setFilterTableTypeColumn( tableItem );\n        }\n\n        if ( vm != null ) {\n          if ( vm.isNumeric() || vm.isDate() || vm.isBoolean() ) {\n            comboValues = ColumnFilter.ComparisonType.getNumericOperators();\n          } else if ( vm.isString() ) {\n            comboValues = ColumnFilter.ComparisonType.getStringOperators();\n          } else {\n            comboValues = new String[ 1 ];\n            comboValues[ 0 ] = \"\";\n          }\n        } else {\n          // if we've not got a connection, or there is no user-specified\n          // columns saved in the meta class, then just get all the\n          // operators\n          comboValues = ColumnFilter.ComparisonType.getAllOperators();\n        }\n\n        return comboValues;\n      }\n    } );\n\n    colinf2[ 4 ].setComboValuesSelectionListener( new ComboValuesSelectionListener() {\n      public String[] getComboValues( TableItem tableItem, int rowNr, int colNr ) {\n        String[] comboValues = new String[] {};\n\n        // try to fill in the type\n        String alias = tableItem.getText( 1 );\n        if ( !Const.isEmpty( alias ) ) {\n          setFilterTableTypeColumn( tableItem );\n        }\n        int type = ValueMeta.getType( tableItem.getText( 2 ) );\n        switch ( type ) {\n          case ValueMetaInterface.TYPE_DATE:\n            comboValues = Const.getDateFormats();\n            break;\n          case ValueMetaInterface.TYPE_INTEGER:\n          case ValueMetaInterface.TYPE_BIGNUMBER:\n          case ValueMetaInterface.TYPE_NUMBER:\n            comboValues = Const.getNumberFormats();\n            break;\n          default:\n            break;\n          // if there is not type information 
available (no connection and no\n          // user-specified\n          // columns in the meta class) then the user will just have to type\n          // in their own\n          // formatting string (if necessary)\n        }\n        return comboValues;\n      }\n    } );\n\n    m_filtersView = new TableView( transMeta, wFilterComp, SWT.FULL_SELECTION | SWT.MULTI, colinf2, 1, lsMod, props );\n\n    fd = new FormData();\n    fd.top = new FormAttachment( m_matchAllBut, margin * 2 );\n    fd.bottom = new FormAttachment( 100, -margin * 2 );\n    fd.left = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    m_filtersView.setLayoutData( fd );\n\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    fd.bottom = new FormAttachment( 100, 0 );\n    wFilterComp.setLayoutData( fd );\n\n    wFilterComp.layout();\n    m_wFilterTab.setControl( wFilterComp );\n\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_stepnameText, margin );\n    fd.right = new FormAttachment( 100, 0 );\n    fd.bottom = new FormAttachment( 100, -50 );\n    m_wTabFolder.setLayoutData( fd );\n\n    // Buttons inherited from BaseStepDialog\n    wOK = new Button( shell, SWT.PUSH );\n    wOK.setText( Messages.getString( \"System.Button.OK\" ) );\n\n    wCancel = new Button( shell, SWT.PUSH );\n    wCancel.setText( Messages.getString( \"System.Button.Cancel\" ) );\n\n    setButtonPositions( new Button[] { wOK, wCancel }, margin, m_wTabFolder );\n\n    // Add listeners\n    lsCancel = new Listener() {\n      public void handleEvent( Event e ) {\n        cancel();\n      }\n    };\n\n    lsOK = new Listener() {\n      public void handleEvent( Event e ) {\n        ok();\n      }\n    };\n\n    wCancel.addListener( SWT.Selection, lsCancel );\n    wOK.addListener( SWT.Selection, lsOK );\n\n    lsDef = new SelectionAdapter() {\n     
 @Override\n      public void widgetDefaultSelected( SelectionEvent e ) {\n        ok();\n      }\n    };\n\n    m_stepnameText.addSelectionListener( lsDef );\n\n    // Detect X or ALT-F4 or something that kills this window...\n    shell.addShellListener( new ShellAdapter() {\n      @Override\n      public void shellClosed( ShellEvent e ) {\n        cancel();\n      }\n    } );\n\n    m_wTabFolder.setSelection( 0 );\n    setSize();\n\n    getData();\n\n    ServiceStatus serviceStatus = m_currentMeta.getServiceStatus();\n    if ( !serviceStatus.isOk() ) {\n      new ErrorDialog( shell, Messages.getString( \"Dialog.Error\" ),\n        Messages.getString( \"HBaseInput.Error.ServiceStatus\" ),\n        serviceStatus.getException() );\n    }\n\n    shell.open();\n    while ( !shell.isDisposed() ) {\n      if ( !display.readAndDispatch() ) {\n        display.sleep();\n      }\n    }\n\n    return stepname;\n  }\n\n  protected HBaseValueMetaInterface setFilterTableTypeColumn( TableItem tableItem ) {\n    // try to fill in the type\n    String alias = tableItem.getText( 1 ).trim();\n    if ( !Const.isEmpty( alias ) ) {\n      // try using the mapping information first since it is complete\n      if ( transMeta.environmentSubstitute( alias ).equals( m_keyName ) ) {\n        tableItem.setText( 2, m_keyType.toString() );\n        HBaseValueMetaInterface vm = m_mappedColumns.get( transMeta.environmentSubstitute( alias ) );\n        if ( vm != null ) {\n          vm.setType( HBaseInput.getKettleTypeByKeyType( m_keyType ) );\n          String type = ValueMetaBase.getTypeDesc( vm.getType() );\n          tableItem.setText( 2, type );\n          return vm;\n        }\n      } else if ( m_mappedColumns != null ) {\n        HBaseValueMetaInterface vm = m_mappedColumns.get( transMeta.environmentSubstitute( alias ) );\n        if ( vm != null ) {\n          String type = ValueMetaBase.getTypeDesc( vm.getType() );\n          if ( vm.getType() == ValueMetaInterface.TYPE_INTEGER ) {\n     
       if ( vm.getIsLongOrDouble() ) {\n              type = \"Long\";\n            } else {\n              type = \"Integer\";\n            }\n          }\n          if ( vm.getType() == ValueMetaInterface.TYPE_NUMBER ) {\n            if ( vm.getIsLongOrDouble() ) {\n              type = \"Double\";\n            } else {\n              type = \"Float\";\n            }\n          }\n\n          tableItem.setText( 2, type );\n          return vm;\n        }\n      } else if ( m_currentMeta.getOutputFields() != null && m_currentMeta.getOutputFields().size() > 0 ) {\n\n        // use the user-selected fields information\n        for ( HBaseValueMetaInterface vm : m_currentMeta.getOutputFields() ) {\n          String aliasF = vm.getAlias();\n          if ( alias.equals( aliasF ) ) {\n            String type = ValueMetaBase.getTypeDesc( vm.getType() );\n            tableItem.setText( 2, type );\n            return vm;\n          }\n        }\n      }\n    }\n\n    return null;\n  }\n\n  protected void updateMetaConnectionDetails( HBaseInputMeta meta ) {\n    NamedCluster nc = namedClusterWidget.getSelectedNamedCluster();\n    if ( nc != null ) {\n      meta.setNamedCluster( nc );\n    }\n\n    meta.setCoreConfigURL( m_coreConfigText.getText() );\n    meta.setDefaulConfigURL( m_defaultConfigText.getText() );\n    meta.setSourceTableName( m_mappedTableNamesCombo.getText() );\n    meta.setSourceMappingName( m_mappingNamesCombo.getText() );\n  }\n\n  protected void ok() {\n    if ( Const.isEmpty( m_stepnameText.getText() ) ) {\n      MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n      mb.setText( Messages.getString( \"System.StepJobEntryNameMissing.Title\" ) );\n      mb.setMessage( Messages.getString( \"System.JobEntryNameMissing.Msg\" ) );\n      mb.open();\n      return;\n    }\n    NamedCluster selectedNamedCluster = namedClusterWidget.getSelectedNamedCluster();\n    if ( selectedNamedCluster == null ) {\n      MessageBox mb = new MessageBox( shell, 
SWT.OK | SWT.ICON_ERROR );\n      mb.setText( Messages.getString( \"Dialog.Error\" ) );\n      mb.setMessage( Messages.getString( \"HBaseInputDialog.NamedClusterNotSelected.Msg\" ) );\n      mb.open();\n      return;\n    } else {\n      if ( StringUtils.isEmpty( selectedNamedCluster.getZooKeeperHost() ) && !selectedNamedCluster.isUseGateway() ) {\n        MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n        mb.setText( Messages.getString( \"Dialog.Error\" ) );\n        mb.setMessage( Messages.getString( \"HBaseInputDialog.NamedClusterMissingValues.Msg\" ) );\n        mb.open();\n        return;\n      }\n    }\n    HBaseService hBaseService = null;\n    try {\n      hBaseService = getHBaseService();\n    } catch ( ClusterInitializationException e ) {\n      throw new RuntimeException( e );\n    }\n    HBaseValueMetaInterfaceFactory hBaseValueMetaInterfaceFactory = hBaseService.getHBaseValueMetaInterfaceFactory();\n\n    stepname = m_stepnameText.getText();\n\n    updateMetaConnectionDetails( m_currentMeta );\n\n    m_currentMeta.setKeyStartValue( m_keyStartText.getText() );\n    m_currentMeta.setKeyStopValue( m_keyStopText.getText() );\n    m_currentMeta.setScannerCacheSize( m_scanCacheText.getText() );\n    m_currentMeta.setMatchAnyFilter( m_matchAnyBut.getSelection() );\n\n    int numNonEmpty = m_fieldsView.nrNonEmpty();\n    if ( numNonEmpty > 0 ) {\n      ByteConversionUtil byteConversionUtil = hBaseService.getByteConversionUtil();\n      List<HBaseValueMetaInterface> outputFields = new ArrayList<>();\n\n      for ( int i = 0; i < numNonEmpty; i++ ) {\n        TableItem item = m_fieldsView.getNonEmpty( i );\n        String alias = item.getText( 1 ).trim();\n        String isKey = item.getText( 2 ).trim();\n        String family = item.getText( 3 ).trim();\n        String column = item.getText( 4 ).trim();\n        String type = item.getText( 5 ).trim();\n        String format = item.getText( 6 ).trim();\n\n        HBaseValueMetaInterface 
vm = hBaseValueMetaInterfaceFactory\n          .createHBaseValueMetaInterface( family, column, alias, ValueMeta.getType( type ), -1, -1 );\n\n        vm.setTableName( m_mappedTableNamesCombo.getText() );\n        vm.setMappingName( m_mappingNamesCombo.getText() );\n        vm.setKey( isKey.equalsIgnoreCase( \"Y\" ) );\n        String indexItems = m_indexedLookup.get( alias );\n        if ( indexItems != null ) {\n          Object[] values = byteConversionUtil.stringIndexListToObjects( indexItems );\n          vm.setIndex( values );\n          vm.setStorageType( ValueMetaInterface.STORAGE_TYPE_INDEXED );\n        }\n        vm.setConversionMask( format );\n\n        outputFields.add( vm );\n      }\n      m_currentMeta.setOutputFields( outputFields );\n    } else {\n      m_currentMeta.setOutputFields( null ); // output everything\n    }\n\n    numNonEmpty = m_filtersView.nrNonEmpty();\n    if ( numNonEmpty > 0 ) {\n      ColumnFilterFactory columnFilterFactory = hBaseService.getColumnFilterFactory();\n      List<ColumnFilter> filters = new ArrayList<>();\n\n      for ( int i = 0; i < m_filtersView.nrNonEmpty(); i++ ) {\n        TableItem item = m_filtersView.getNonEmpty( i );\n        String alias = item.getText( 1 ).trim();\n        String type = item.getText( 2 ).trim();\n        String operator = item.getText( 3 ).trim();\n        String comparison = item.getText( 4 ).trim();\n        String signed = item.getText( 6 ).trim();\n        String format = item.getText( 5 ).trim();\n        ColumnFilter f = columnFilterFactory.createFilter( alias );\n        f.setFieldType( type );\n        f.setComparisonOperator( ColumnFilter.ComparisonType.stringToOpp( operator ) );\n        f.setConstant( comparison );\n        f.setSignedComparison( signed.equalsIgnoreCase( \"Y\" ) );\n        f.setFormat( format );\n        filters.add( f );\n      }\n\n      m_currentMeta.setColumnFilters( filters );\n    } else {\n      m_currentMeta.setColumnFilters( null );\n    }\n\n    if 
( m_storeMappingInStepMetaData.getSelection() ) {\n      if ( Const.isEmpty( m_mappingNamesCombo.getText() ) ) {\n        List<String> problems = new ArrayList<String>();\n\n        Mapping toSet = m_mappingEditor.getMapping( false, problems, false );\n        if ( problems.size() > 0 ) {\n          StringBuffer p = new StringBuffer();\n          for ( String s : problems ) {\n            p.append( s ).append( \"\\n\" );\n          }\n          MessageDialog md =\n            new MessageDialog( shell, BaseMessages.getString( HBaseInputMeta.PKG,\n              \"HBaseInputDialog.Error.IssuesWithMapping.Title\" ), null, BaseMessages.getString( HBaseInputMeta.PKG,\n              \"HBaseInputDialog.Error.IssuesWithMapping\" ) + \":\\n\\n\" + p.toString(), MessageDialog.WARNING,\n                  new String[] { BaseMessages.getString( HBaseInputMeta.PKG, \"HBaseInputDialog.Error.IssuesWithMapping.ButtonOK\" ),\n                      BaseMessages.getString( HBaseInputMeta.PKG, \"HBaseInputDialog.Error.IssuesWithMapping.ButtonCancel\" ) }, 0 );\n          MessageDialog.setDefaultImage( GUIResource.getInstance().getImageSpoon() );\n          int idx = md.open() & 0xFF;\n          if ( idx == 1 || idx == 255 /* 255 = escape pressed */ ) {\n            return; // Cancel\n          }\n        }\n        m_currentMeta.setMapping( toSet );\n      } else {\n        HBaseConnection connection = null;\n        try {\n          connection = getHBaseConnection();\n          MappingAdmin admin = new MappingAdmin( connection );\n          Mapping current =\n            admin.getMapping( transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() ), transMeta\n              .environmentSubstitute( m_mappingNamesCombo.getText() ) );\n\n          m_currentMeta.setMapping( current );\n          m_currentMeta.setSourceMappingName( \"\" );\n        } catch ( Exception e ) {\n          logError( Messages.getString( \"HBaseInputDialog.ErrorMessage.UnableToGetMapping\" )\n            + 
\" \\\"\"\n            + transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() + \",\"\n            + transMeta.environmentSubstitute( m_mappingNamesCombo.getText() ) + \"\\\"\" ), e );\n          new ErrorDialog( shell, Messages.getString( \"HBaseInputDialog.ErrorMessage.UnableToGetMapping\" ), Messages\n            .getString( \"HBaseInputDialog.ErrorMessage.UnableToGetMapping\" )\n            + \" \\\"\"\n            + transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() + \",\"\n            + transMeta.environmentSubstitute( m_mappingNamesCombo.getText() ) + \"\\\"\" ), e );\n        } finally {\n          if ( connection != null ) {\n            try {\n              connection.close();\n            } catch ( Exception e ) {\n              String msg = Messages.getString( \"HBaseInputDialog.ErrorMessage.FailedClosingHBaseConnection\" );\n              logError( msg, e );\n              new ErrorDialog( shell, msg, msg, e );\n            }\n          }\n        }\n      }\n    } else {\n      // we're going to use a mapping stored in HBase - null out any stored\n      // mapping\n      m_currentMeta.setMapping( null );\n    }\n\n    if ( !m_originalMeta.equals( m_currentMeta ) ) {\n      m_currentMeta.setChanged();\n      changed = m_currentMeta.hasChanged();\n    }\n\n    dispose();\n  }\n\n  protected void cancel() {\n    stepname = null;\n    m_currentMeta.setChanged( changed );\n\n    dispose();\n  }\n\n  private void getData() {\n\n    namedClusterWidget.setSelectedNamedCluster( m_currentMeta.getNamedCluster().getName() );\n\n    if ( !Const.isEmpty( m_currentMeta.getCoreConfigURL() ) ) {\n      m_coreConfigText.setText( m_currentMeta.getCoreConfigURL() );\n    }\n\n    if ( !Const.isEmpty( m_currentMeta.getDefaultConfigURL() ) ) {\n      m_defaultConfigText.setText( m_currentMeta.getDefaultConfigURL() );\n    }\n\n    if ( !Const.isEmpty( m_currentMeta.getSourceTableName() ) ) {\n      m_mappedTableNamesCombo.setText( 
m_currentMeta.getSourceTableName() );\n    }\n\n    if ( !Const.isEmpty( m_currentMeta.getSourceMappingName() ) ) {\n      m_mappingNamesCombo.setText( m_currentMeta.getSourceMappingName() );\n    }\n\n    if ( !Const.isEmpty( m_currentMeta.getKeyStartValue() ) ) {\n      m_keyStartText.setText( m_currentMeta.getKeyStartValue() );\n    }\n\n    if ( !Const.isEmpty( m_currentMeta.getKeyStopValue() ) ) {\n      m_keyStopText.setText( m_currentMeta.getKeyStopValue() );\n    }\n\n    if ( !Const.isEmpty( m_currentMeta.getScannerCacheSize() ) ) {\n      m_scanCacheText.setText( m_currentMeta.getScannerCacheSize() );\n    }\n\n    m_matchAnyBut.setSelection( m_currentMeta.getMatchAnyFilter() );\n    m_matchAllBut.setSelection( !m_currentMeta.getMatchAnyFilter() );\n\n    // filters\n    if ( m_currentMeta.getColumnFilters() != null && m_currentMeta.getColumnFilters().size() > 0 ) {\n      for ( ColumnFilter f : m_currentMeta.getColumnFilters() ) {\n        TableItem item = new TableItem( m_filtersView.table, SWT.NONE );\n\n        if ( !Const.isEmpty( f.getFieldAlias() ) ) {\n          item.setText( 1, f.getFieldAlias() );\n        }\n        if ( !Const.isEmpty( f.getFieldType() ) ) {\n          item.setText( 2, f.getFieldType() );\n        }\n        if ( f.getComparisonOperator() != null ) {\n          item.setText( 3, f.getComparisonOperator().toString() );\n        }\n        if ( !Const.isEmpty( f.getConstant() ) ) {\n          item.setText( 4, f.getConstant() );\n        }\n        item.setText( 6, ( f.getSignedComparison() ) ? 
\"Y\" : \"N\" );\n        if ( !Const.isEmpty( f.getFormat() ) ) {\n          item.setText( 5, f.getFormat() );\n        }\n      }\n\n      m_filtersView.removeEmptyRows();\n      m_filtersView.setRowNums();\n      m_filtersView.optWidth( true );\n    }\n\n    if ( Const.isEmpty( m_currentMeta.getSourceMappingName() ) && m_currentMeta.getMapping() != null ) {\n      m_mappingEditor.setMapping( m_currentMeta.getMapping() );\n      m_storeMappingInStepMetaData.setSelection( true );\n    }\n\n    // do the key and columns\n    checkKeyInformation( true, false );\n  }\n\n  @Override public HBaseService getHBaseService() throws ClusterInitializationException {\n    NamedCluster nc = namedClusterWidget.getSelectedNamedCluster();\n    return namedClusterServiceLocator.getService( nc, HBaseService.class );\n  }\n\n  public HBaseConnection getHBaseConnection() throws IOException, ClusterInitializationException {\n    HBaseConnection conf = null;\n\n    String coreConf = \"\";\n    String defaultConf = \"\";\n    String zookeeperHosts = \"\";\n\n    NamedCluster nc = namedClusterWidget.getSelectedNamedCluster();\n    HBaseService hBaseService = getHBaseService();\n    if ( nc != null && !nc.isUseGateway()) {\n      zookeeperHosts = transMeta.environmentSubstitute( nc.getZooKeeperHost() );\n    }\n\n    if ( !Const.isEmpty( m_coreConfigText.getText() ) ) {\n      coreConf = transMeta.environmentSubstitute( m_coreConfigText.getText() );\n    }\n\n    if ( !Const.isEmpty( m_defaultConfigText.getText() ) ) {\n      defaultConf = transMeta.environmentSubstitute( m_defaultConfigText.getText() );\n    }\n\n    if ( Const.isEmpty( zookeeperHosts ) && Const.isEmpty( coreConf ) && Const.isEmpty( defaultConf )\n      && ( nc == null || !nc.isUseGateway() ) ) {\n      throw new IOException( BaseMessages.getString( HBaseInputMeta.PKG,\n        \"MappingDialog.Error.Message.CantConnectNoConnectionDetailsProvided\" ) );\n    }\n    return hBaseService.getHBaseConnection( transMeta, 
coreConf, defaultConf, null );\n  }\n\n  private void checkKeyInformation( boolean quiet, boolean readFieldsFromMapping ) {\n    String zookeeperQuorumText = null;\n\n    NamedCluster nc = namedClusterWidget.getSelectedNamedCluster();\n    if ( nc != null ) {\n      zookeeperQuorumText = nc.getZooKeeperHost();\n    }\n\n    boolean displayFieldsEmbeddedMapping =\n      ( ( m_mappingEditor.getMapping( false, null, false ) != null && Const.isEmpty( m_mappingNamesCombo.getText() ) ) );\n    boolean displayFieldsMappingFromHBase =\n      ( !Const.isEmpty( m_coreConfigText.getText() ) || !Const.isEmpty( zookeeperQuorumText ) || ( nc != null && nc.isUseGateway() ) )\n        && !Const.isEmpty( m_mappedTableNamesCombo.getText() ) && !Const.isEmpty( m_mappingNamesCombo.getText() );\n\n    if ( displayFieldsEmbeddedMapping || displayFieldsMappingFromHBase ) {\n      try {\n        m_indexedLookup = new HashMap<String, String>();\n\n        MappingAdmin admin = null;\n\n        Mapping current = null;\n        Map<String, HBaseValueMetaInterface> mappedColumns = null;\n\n        boolean filterAliasesDone = false;\n        HBaseConnection connection = null;\n\n        try {\n          if ( displayFieldsMappingFromHBase && readFieldsFromMapping ) {\n            connection = getHBaseConnection();\n            if ( displayFieldsMappingFromHBase ) {\n              admin = new MappingAdmin( connection );\n            }\n            current =\n              admin.getMapping( transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() ), transMeta\n                .environmentSubstitute( m_mappingNamesCombo.getText() ) );\n          } else {\n            current = m_mappingEditor.getMapping( false, null, true );\n          }\n\n          if ( current != null ) {\n            // Key information\n            m_keyName = current.getKeyName();\n            m_keyType = current.getKeyType();\n            m_keyInfo.setText( \"HBase Key: \" + m_keyName + \" (\" + m_keyType.toString() 
+ \")\" );\n\n            mappedColumns = current.getMappedColumns();\n            m_mappedColumns = mappedColumns; // cached copy\n\n            // Set up the alias combo box in the filters tab\n            List<String> filterAliasNames = new ArrayList<String>();\n            filterAliasNames.add( m_keyName );\n            for ( String alias : mappedColumns.keySet() ) {\n              HBaseValueMetaInterface column = mappedColumns.get( alias );\n              String aliasS = column.getAlias();\n              if ( column.isNumeric() || column.isDate() || column.isString() || column.isBoolean() ) {\n                filterAliasNames.add( aliasS );\n              }\n            }\n            String[] filterAliasNamesA = filterAliasNames.toArray( new String[ 1 ] );\n            m_filterAliasCI.setComboValues( filterAliasNamesA );\n            filterAliasesDone = true;\n          } else {\n            m_keyInfo.setText( \"\" );\n          }\n        } catch ( Exception ex ) {\n          if ( !quiet ) {\n            logError( Messages.getString( \"HBaseInputDialog.ErrorMessage.UnableToGetMapping\" )\n              + \" \\\"\"\n              + transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() + \",\"\n              + transMeta.environmentSubstitute( m_mappingNamesCombo.getText() ) + \"\\\"\" ), ex );\n            new ErrorDialog( shell, Messages.getString( \"HBaseInputDialog.ErrorMessage.UnableToGetMapping\" ), Messages\n              .getString( \"HBaseInputDialog.ErrorMessage.UnableToGetMapping\" )\n              + \" \\\"\"\n              + transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() + \",\"\n              + transMeta.environmentSubstitute( m_mappingNamesCombo.getText() ) + \"\\\"\" ), ex );\n          }\n          m_keyInfo.setText( Messages.getString( \"HBaseInputDialog.ErrorMessage.UnableToGetMapping\" ) );\n        } finally {\n          if ( connection != null ) {\n            connection.close();\n          }\n        }\n\n 
       // Fields information\n        m_fieldsView.clearAll( false );\n        ByteConversionUtil byteConversionUtil = getHBaseService().getByteConversionUtil();\n\n        if ( current != null && readFieldsFromMapping ) {\n          TableItem item = new TableItem( m_fieldsView.table, SWT.NONE );\n          item.setText( 1, m_keyName );\n          item.setText( 2, \"Y\" );\n          item.setText( 7, \"N\" );\n          if ( current.getKeyType() == Mapping.KeyType.DATE || current.getKeyType() == Mapping.KeyType.UNSIGNED_DATE ) {\n            item.setText( 5, ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_DATE ) );\n          } else if ( current.getKeyType() == Mapping.KeyType.STRING ) {\n            item.setText( 5, ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING ) );\n          } else if ( current.getKeyType() == Mapping.KeyType.INTEGER\n            || current.getKeyType() == Mapping.KeyType.UNSIGNED_INTEGER\n            || current.getKeyType() == Mapping.KeyType.UNSIGNED_LONG\n            || current.getKeyType() == Mapping.KeyType.LONG ) {\n            item.setText( 5, ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_INTEGER ) );\n          } else {\n            item.setText( 5, ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_BINARY ) );\n          }\n\n          // get all the fields from the mapping\n          for ( String alias : mappedColumns.keySet() ) {\n            if ( alias.equalsIgnoreCase( m_keyName ) ) {\n              continue;\n            }\n            HBaseValueMetaInterface column = mappedColumns.get( alias );\n            String aliasS = column.getAlias();\n            String family = column.getColumnFamily();\n            String name = column.getColumnName();\n            String type = column.getTypeDesc();\n            String format = column.getConversionMask();\n\n            item = new TableItem( m_fieldsView.table, SWT.NONE );\n            if ( column.getStorageType() == ValueMetaInterface.STORAGE_TYPE_INDEXED ) {\n              String 
valuesString = byteConversionUtil.objectIndexValuesToString( column.getIndex() );\n\n              m_indexedLookup.put( aliasS, valuesString );\n              item.setText( 7, \"Y\" );\n            } else {\n              item.setText( 7, \"N\" );\n            }\n\n            item.setText( 1, aliasS );\n            item.setText( 2, \"N\" );\n            item.setText( 3, family );\n            item.setText( 4, name );\n            item.setText( 5, type );\n            if ( !Const.isEmpty( format ) ) {\n              item.setText( 6, format );\n            }\n          }\n        }\n\n        if ( !readFieldsFromMapping && m_currentMeta.getOutputFields() != null\n          && m_currentMeta.getOutputFields().size() > 0 ) {\n\n          // user has selected some fields from the mapping to output\n          List<String> filterAliasNames = new ArrayList<String>();\n          for ( HBaseValueMetaInterface column : m_currentMeta.getOutputFields() ) {\n            TableItem item = new TableItem( m_fieldsView.table, SWT.NONE );\n\n            String aliasS = column.getAlias();\n            String type = column.getTypeDesc();\n            item.setText( 1, aliasS );\n            item.setText( 5, type );\n\n            if ( column.isKey() ) {\n              item.setText( 2, \"Y\" );\n              item.setText( 7, \"N\" );\n\n              if ( !Const.isEmpty( column.getConversionMask() ) ) {\n                item.setText( 6, column.getConversionMask() );\n              }\n              if ( !filterAliasesDone ) {                // TODO: relying on the key type here may not work in some cases\n                filterAliasNames.add( aliasS );\n              }\n              continue; // skip the rest\n            }\n\n            item.setText( 2, \"N\" );\n\n            if ( column.isNumeric() || column.isDate() || column.isString() ) {\n              if ( !filterAliasesDone ) {\n                filterAliasNames.add( aliasS );\n              }\n            }\n            String family 
= column.getColumnFamily();\n            String name = column.getColumnName();\n            String format = column.getConversionMask();\n\n            if ( column.getStorageType() == ValueMetaInterface.STORAGE_TYPE_INDEXED ) {\n              String valuesString = byteConversionUtil.objectIndexValuesToString( column.getIndex() );\n\n              m_indexedLookup.put( aliasS, valuesString );\n              item.setText( 7, \"Y\" );\n            } else {\n              item.setText( 7, \"N\" );\n            }\n\n            item.setText( 3, family );\n            item.setText( 4, name );\n\n            if ( !Const.isEmpty( format ) ) {\n              item.setText( 6, format );\n            }\n          }\n\n          // set the allowable combo values for the selectable columns in the\n          // filter tab\n          if ( !filterAliasesDone ) {\n            String[] filterAliasNamesA = filterAliasNames.toArray( new String[ 1 ] );\n            m_filterAliasCI.setComboValues( filterAliasNamesA );\n            filterAliasesDone = true;\n          }\n        }\n\n        m_fieldsView.removeEmptyRows();\n        m_fieldsView.setRowNums();\n        m_fieldsView.optWidth( true );\n\n      } catch ( Exception ex ) {\n        if ( !quiet ) {\n          logError( Messages.getString( \"HBaseInputDialog.ErrorMessage.UnableToGetMapping\" )\n            + \" \\\"\"\n            + transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() + \",\"\n            + transMeta.environmentSubstitute( m_mappingNamesCombo.getText() ) + \"\\\"\" ), ex );\n          new ErrorDialog( shell, Messages.getString( \"HBaseInputDialog.ErrorMessage.UnableToGetMapping\" ), Messages\n            .getString( \"HBaseInputDialog.ErrorMessage.UnableToGetMapping\" )\n            + \" \\\"\"\n            + transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() + \",\"\n            + transMeta.environmentSubstitute( m_mappingNamesCombo.getText() ) + \"\\\"\" ), ex );\n        }\n        
m_keyInfo.setText( Messages.getString( \"HBaseInputDialog.ErrorMessage.UnableToGetMapping\" ) );\n      }\n    } else {\n      m_keyInfo.setText( \"\" );\n    }\n  }\n\n  private void setupMappedTableNames() {\n    HBaseConnection connection = null;\n    Cursor busy = new Cursor( shell.getDisplay(), SWT.CURSOR_WAIT );\n    try {\n      shell.setCursor( busy );\n      connection = getHBaseConnection();\n      MappingAdmin admin = new MappingAdmin( connection );\n      Set<String> tableNames = admin.getMappedTables( parseNamespaceFromTableName( null ) );\n\n      m_mappedTableNamesCombo.removeAll();\n      for ( String s : tableNames ) {\n        m_mappedTableNamesCombo.add( s );\n      }\n    } catch ( Exception e ) {\n      logError( Messages.getString( \"HBaseInputDialog.ErrorMessage.UnableToConnect\" ), e );\n      new ErrorDialog( shell, Messages.getString( \"HBaseInputDialog.ErrorMessage.\" + \"UnableToConnect\" ), Messages\n        .getString( \"HBaseInputDialog.ErrorMessage.UnableToConnect\" ), e );\n    } finally {\n      shell.setCursor( null );\n      busy.dispose();\n      if ( connection != null ) {\n        try {\n          connection.close();\n        } catch ( Exception e ) {\n          String msg = Messages.getString( \"HBaseInputDialog.ErrorMessage.FailedClosingHBaseConnection\" );\n          logError( msg, e );\n          new ErrorDialog( shell, msg, msg, e );\n        }\n      }\n    }\n  }\n\n  private void setupMappingNamesForTable( boolean quiet ) {\n    m_mappingNamesCombo.removeAll();\n\n    if ( !Const.isEmpty( m_mappedTableNamesCombo.getText() ) ) {\n      HBaseConnection connection = null;\n      try {\n        connection = getHBaseConnection();\n        MappingAdmin admin = new MappingAdmin( connection );\n\n        List<String> mappingNames = admin.getMappingNames( transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText().trim() ) );\n\n        for ( String n : mappingNames ) {\n          m_mappingNamesCombo.add( n );\n        
}\n      } catch ( Exception ex ) {\n        if ( !quiet ) {\n          logError( Messages.getString( \"HBaseInputDialog.ErrorMessage.UnableToConnect\" ), ex );\n          new ErrorDialog( shell, Messages.getString( \"HBaseInputDialog.ErrorMessage.\" + \"UnableToConnect\" ), Messages\n            .getString( \"HBaseInputDialog.ErrorMessage.UnableToConnect\" ), ex );\n        }\n      } finally {\n        if ( connection != null ) {\n          try {\n            connection.close();\n          } catch ( Exception e ) {\n            if ( !quiet ) {\n              String msg = Messages.getString( \"HBaseInputDialog.ErrorMessage.FailedClosingHBaseConnection\" );\n              logError( msg, e );\n              new ErrorDialog( shell, msg, msg, e );\n            }\n          }\n        }\n      }\n    }\n  }\n\n  private String parseNamespaceFromTableName( String defaultNamespaceIfNoneSpecified ) {\n    return HbaseUtil.parseNamespaceFromTableName( transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() ),\n      defaultNamespaceIfNoneSpecified );\n  }\n\n  public String getCurrentConfiguration() {\n    updateMetaConnectionDetails( m_configurationMeta );\n    return m_configurationMeta.getXML();\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/input/HBaseInputMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.input;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.big.data.kettle.plugins.hbase.FilterDefinition;\nimport org.pentaho.big.data.kettle.plugins.hbase.HbaseUtil;\nimport org.pentaho.big.data.kettle.plugins.hbase.MappingDefinition;\nimport org.pentaho.big.data.kettle.plugins.hbase.NamedClusterLoadSaveUtil;\nimport org.pentaho.big.data.kettle.plugins.hbase.ServiceStatus;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.MappingAdmin;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.MappingUtils;\nimport org.pentaho.big.data.kettle.plugins.hbase.meta.AELHBaseMappingImpl;\nimport org.pentaho.big.data.kettle.plugins.hbase.meta.AELHBaseValueMetaImpl;\nimport org.pentaho.di.core.CheckResult;\nimport org.pentaho.di.core.CheckResultInterface;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.annotations.Step;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleStepException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.injection.InjectionDeep;\nimport org.pentaho.di.core.injection.InjectionSupported;\nimport org.pentaho.di.core.row.RowMeta;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport 
org.pentaho.di.core.row.ValueMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.metastore.MetaStoreConst;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.hbase.ByteConversionUtil;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseConnection;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.ColumnFilter;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.ColumnFilterFactory;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.MappingFactory;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterfaceFactory;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.runtime.test.action.impl.RuntimeTestActionServiceImpl;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\nimport org.w3c.dom.Node;\n\nimport 
java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Set;\n\n/**\n * Class providing an input step for reading data from an HBase table according to meta data mapping info stored in a\n * separate HBase table called \"pentaho_mappings\". See org.pentaho.hbase.mapping.Mapping for details on the meta data\n * format.\n */\n@Step( id = \"HBaseInput\", image = \"HB.svg\", name = \"HBaseInput.Name\", description = \"HBaseInput.Description\",\n    categoryDescription = \"i18n:org.pentaho.di.trans.step:BaseStep.Category.BigData\",\n    documentationUrl = \"pdi-transformation-steps-reference-overview/hbase-input-cp-main-page\",\n    i18nPackageName = \"org.pentaho.di.trans.steps.hbaseinput\" )\n@InjectionSupported( localizationPrefix = \"HBaseInput.Injection.\", groups = {\"OUTPUT_FIELDS\", \"MAPPING\", \"FILTER\"} )\npublic class HBaseInputMeta extends BaseStepMeta implements StepMetaInterface {\n\n  protected static Class<?> PKG = HBaseInputMeta.class;\n\n  private final NamedClusterLoadSaveUtil namedClusterLoadSaveUtil;\n  private final NamedClusterService namedClusterService;\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private MetastoreLocator metaStoreService;\n  private final RuntimeTester runtimeTester;\n\n  protected NamedCluster namedCluster;\n\n  /**\n   * path/url to hbase-site.xml\n   */\n  @Injection( name = \"HBASE_SITE_XML_URL\" )\n  protected String m_coreConfigURL;\n\n  /**\n   * path/url to hbase-default.xml\n   */\n  @Injection( name = \"HBASE_DEFAULT_XML_URL\" )\n  protected String m_defaultConfigURL;\n\n  /**\n   * the name of the HBase table to read from\n   */\n  @Injection( name = \"SOURCE_TABLE_NAME\" )\n  protected String m_sourceTableName;\n\n  /**\n   * the name of the mapping for columns/types for the source table\n   */\n  @Injection( name = \"SOURCE_MAPPING_NAME\" )\n  protected 
String m_sourceMappingName;\n\n  /**\n   * Start key value for range scans\n   */\n  @Injection( name = \"START_KEY_VALUE\" )\n  protected String m_keyStart;\n\n  /**\n   * Stop key value for range scans\n   */\n  @Injection( name = \"STOP_KEY_VALUE\" )\n  protected String m_keyStop;\n\n  /**\n   * Scanner caching\n   */\n  @Injection( name = \"SCANNER_ROW_CACHE_SIZE\" )\n  protected String m_scannerCacheSize;\n\n  protected transient Mapping m_cachedMapping;\n\n  /**\n   * The selected fields to output. If null, then all fields from the mapping are output\n   */\n  protected List<HBaseValueMetaInterface> m_outputFields;\n\n  @InjectionDeep\n  protected List<OutputFieldDefinition> outputFieldsDefinition;\n\n  /**\n   * The configured column filters. If null, then no filters are applied to the result set\n   */\n  protected List<ColumnFilter> m_filters;\n\n  @InjectionDeep\n  protected List<FilterDefinition> filtersDefinition;\n\n  /**\n   * If true, then any matching filter will cause the row to be output, otherwise all filters have to return true before\n   * the row is output\n   */\n  @Injection( name = \"MATCH_ANY_FILTER\" )\n  protected boolean m_matchAnyFilter;\n\n  /**\n   * The mapping to use if we are not loading one dynamically at runtime from HBase itself\n   */\n  protected Mapping m_mapping;\n\n  @InjectionDeep\n  protected MappingDefinition mappingDefinition;\n\n  private ServiceStatus serviceStatus = ServiceStatus.OK;\n\n  public HBaseInputMeta() {\n    this( NamedClusterManager.getInstance(), BigDataServicesHelper.getNamedClusterServiceLocator(),\n      RuntimeTestActionServiceImpl.getInstance(), RuntimeTesterImpl.getInstance() );\n  }\n\n  public HBaseInputMeta( NamedClusterService namedClusterService,\n                         NamedClusterServiceLocator namedClusterServiceLocator,\n                         RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester ) {\n    this.namedClusterService = namedClusterService;\n    
this.namedClusterServiceLocator = namedClusterServiceLocator;\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.runtimeTester = runtimeTester;\n    namedClusterLoadSaveUtil = new NamedClusterLoadSaveUtil();\n  }\n\n  public synchronized MetastoreLocator getMetastoreLocator() {\n    if ( this.metaStoreService == null ) {\n      try {\n        Collection<MetastoreLocator> metastoreLocators = PluginServiceLoader.loadServices( MetastoreLocator.class );\n        this.metaStoreService = metastoreLocators.stream().findFirst().get();\n      } catch ( Exception e ) {\n        getLog().logError( \"Error getting MetastoreLocator\", e );\n      }\n    }\n    return this.metaStoreService;\n  }\n\n  public HBaseInputMeta(NamedClusterService namedClusterService,\n                        NamedClusterServiceLocator namedClusterServiceLocator,\n                        RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester, MetastoreLocator metastoreLocator) {\n    this.namedClusterService = namedClusterService;\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.runtimeTester = runtimeTester;\n    namedClusterLoadSaveUtil = new NamedClusterLoadSaveUtil();\n    this.metaStoreService = metastoreLocator;\n  }\n\n  /**\n   * Set the mapping to use for decoding the row\n   *\n   * @param m the mapping to use\n   */\n  public void setMapping( Mapping m ) {\n    m_mapping = m;\n  }\n\n  /**\n   * Get the mapping to use for decoding the row\n   *\n   * @return the mapping to use\n   */\n  public Mapping getMapping() {\n    return m_mapping;\n  }\n\n  /**\n   * Set the URL to the hbase-site.xml. 
Either this OR the zookeeper host list can be used to establish a connection.\n   *\n   * @param coreConfig URL to the hbase-site.xml file.\n   */\n  public void setCoreConfigURL( String coreConfig ) {\n    m_coreConfigURL = coreConfig;\n    m_cachedMapping = null;\n  }\n\n  /**\n   * Get the URL to the hbase-site.xml file.\n   *\n   * @return the URL to the hbase-site.xml file or null if not set.\n   */\n  public String getCoreConfigURL() {\n    return m_coreConfigURL;\n  }\n\n  /**\n   * Set the URL to the hbase-default.xml file. This can be optionally supplied in conjunction with hbase-site.xml. If\n   * not supplied, then the default hbase-default.xml included in the main hbase jar file is used.\n   *\n   * @param defaultConfig URL to the hbase-default.xml file.\n   */\n  public void setDefaulConfigURL( String defaultConfig ) {\n    m_defaultConfigURL = defaultConfig;\n    m_cachedMapping = null;\n  }\n\n  /**\n   * Get the URL to hbase-default.xml.\n   *\n   * @return the URL to hbase-default.xml or null if not set.\n   */\n  public String getDefaultConfigURL() {\n    return m_defaultConfigURL;\n  }\n\n  /**\n   * Set the name of the HBase table to read from.\n   *\n   * @param sourceTable the name of the source HBase table.\n   */\n  public void setSourceTableName( String sourceTable ) {\n    m_sourceTableName = sourceTable;\n    m_cachedMapping = null;\n  }\n\n  /**\n   * Get the name of the HBase table to read from.\n   *\n   * @return the name of the source HBase table.\n   */\n  public String getSourceTableName() {\n    return m_sourceTableName;\n  }\n\n  /**\n   * Set the name of the mapping to use that defines column names and types for the source table.\n   *\n   * @param sourceMapping the name of the mapping to use.\n   */\n  public void setSourceMappingName( String sourceMapping ) {\n    m_sourceMappingName = sourceMapping;\n    m_cachedMapping = null;\n  }\n\n  /**\n   * Get the name of the mapping to use for reading and decoding column values for the source table.\n   *\n   * @return the name of the mapping to use.\n   */\n  public String getSourceMappingName() {\n    return m_sourceMappingName;\n  }\n\n  /**\n  
 * Set whether a given row needs to match at least one of the user specified column filters.\n   *\n   * @param a true if at least one filter needs to match before a given row is returned. If false then *all* filters\n   *          must match.\n   */\n  public void setMatchAnyFilter( boolean a ) {\n    m_matchAnyFilter = a;\n  }\n\n  /**\n   * Get whether a given row needs to match at least one of the user-specified column filters.\n   *\n   * @return true if a given row needs to match at least one of the user specified column filters. Returns false if\n   * *all* column filters need to match\n   */\n  public boolean getMatchAnyFilter() {\n    return m_matchAnyFilter;\n  }\n\n  /**\n   * Set the starting value (inclusive) of the key for range scans\n   *\n   * @param start the starting value of the key to use in range scans.\n   */\n  public void setKeyStartValue( String start ) {\n    m_keyStart = start;\n  }\n\n  /**\n   * Get the starting value of the key to use in range scans\n   *\n   * @return the starting value of the key\n   */\n  public String getKeyStartValue() {\n    return m_keyStart;\n  }\n\n  /**\n   * Set the stop value (exclusive) of the key to use in range scans. May be null to indicate scan to the end of the\n   * table\n   *\n   * @param stop the stop value of the key to use in range scans\n   */\n  public void setKeyStopValue( String stop ) {\n    m_keyStop = stop;\n  }\n\n  /**\n   * Get the stop value of the key to use in range scans\n   *\n   * @return the stop value of the key\n   */\n  public String getKeyStopValue() {\n    return m_keyStop;\n  }\n\n  /**\n   * Set the number of rows to cache for scans. 
Higher values result in improved performance since there will be fewer\n   * requests to HBase but at the expense of increased memory consumption.\n   *\n   * @param s the number of rows to cache for scans.\n   */\n  public void setScannerCacheSize( String s ) {\n    m_scannerCacheSize = s;\n  }\n\n  /**\n   * The number of rows to cache for scans.\n   *\n   * @return the number of rows to cache for scans.\n   */\n  public String getScannerCacheSize() {\n    return m_scannerCacheSize;\n  }\n\n  /**\n   * Set a list of fields to emit from this step. If not specified, then all fields defined in the mapping for the\n   * source table will be emitted.\n   *\n   * @param fields a list of fields to emit from this step.\n   */\n  public void setOutputFields( List<HBaseValueMetaInterface> fields ) {\n    m_outputFields = fields;\n  }\n\n  /**\n   * Get the list of fields to emit from this step. May return null, which indicates that *all* fields defined in the\n   * mapping for the source table will be emitted.\n   *\n   * @return the fields that will be output or null (indicating all fields defined in the mapping will be output).\n   */\n  public List<HBaseValueMetaInterface> getOutputFields() {\n    return m_outputFields;\n  }\n\n  /**\n   * Set a list of column filters to use to refine the query.\n   *\n   * @param list a list of column filters to refine the query.\n   */\n  public void setColumnFilters( List<ColumnFilter> list ) {\n    m_filters = list;\n  }\n\n  /**\n   * Get the list of column filters to use for refining the results of a scan. 
May return null if no filters are in use.\n   *\n   * @return a list of column filters by which to refine the results of a query scan.\n   */\n  public List<ColumnFilter> getColumnFilters() {\n    return m_filters;\n  }\n\n  public void setDefault() {\n    m_coreConfigURL = null;\n    m_defaultConfigURL = null;\n    m_cachedMapping = null;\n    m_sourceTableName = null;\n    m_sourceMappingName = null;\n    m_keyStart = null;\n    m_keyStop = null;\n    namedCluster = namedClusterService.getClusterTemplate();\n  }\n\n  private String getIndexValues( HBaseValueMetaInterface vm ) {\n    Object[] labels = vm.getIndex();\n    StringBuilder vals = new StringBuilder();\n    vals.append( \"{\" );\n\n    for ( int i = 0; i < labels.length; i++ ) {\n      if ( i != labels.length - 1 ) {\n        vals.append( labels[i].toString().trim() ).append( \",\" );\n      } else {\n        vals.append( labels[i].toString().trim() ).append( \"}\" );\n      }\n    }\n    return vals.toString();\n  }\n\n  void applyInjection( VariableSpace space ) throws KettleException {\n    if ( namedCluster == null ) {\n      throw new KettleException( \"Named cluster was not initialized!\" );\n    }\n    if ( namedCluster.getShimIdentifier() == null && getParentStepMeta() != null\n      && getParentStepMeta().getParentTransMeta() != null ) {\n      // If here we have a template for the named cluster, not the real thing.  This is likely due to not having\n      // the namedCluster present in the local metastore.  
Time to load it from the embedded Metastore which is only\n      // present at runtime\n      NamedCluster nc = namedClusterService.getNamedClusterByName( namedCluster.getName(),\n        getMetastoreLocator()\n          .getExplicitMetastore( getParentStepMeta().getParentTransMeta().getEmbeddedMetastoreProviderKey() ) );\n      if ( nc != null && nc.getShimIdentifier() != null ) {\n        namedCluster = nc; //Overwrite with the real one\n      }\n    }\n    try {\n      HBaseService hBaseService = getService();\n      Mapping tempMapping = null;\n      if ( mappingDefinition != null ) {\n        tempMapping = getMapping( mappingDefinition, hBaseService );\n        setMapping( tempMapping );\n      }\n\n      if ( outputFieldsDefinition != null && !outputFieldsDefinition.isEmpty() ) {\n        if ( mappingDefinition == null ) {\n          if ( !Const.isEmpty( m_sourceMappingName ) ) {\n            tempMapping =\n                getMappingFromHBase( hBaseService, space, m_sourceTableName, m_sourceMappingName, m_coreConfigURL,\n                    m_defaultConfigURL );\n          } else {\n            tempMapping = m_mapping;\n          }\n        }\n        setOutputFields( createOutputFieldsDefinition( tempMapping, hBaseService ) );\n      }\n\n      if ( filtersDefinition != null && !filtersDefinition.isEmpty() ) {\n        ColumnFilterFactory columnFilterFactory = hBaseService.getColumnFilterFactory();\n        setColumnFilters( createColumnFiltersFromDefinition( columnFilterFactory ) );\n      }\n    } catch ( Exception e ) {\n      throw new KettleException( e );\n    }\n  }\n\n  @VisibleForTesting\n  Mapping getMapping( MappingDefinition mappingDefinition, HBaseService hBaseService ) throws KettleException {\n    return MappingUtils.getMapping( mappingDefinition, hBaseService );\n  }\n\n  static Mapping getMappingFromHBase( HBaseService hBaseService, VariableSpace space, String tableName,\n                                      String mappingName, String 
coreConfigURL, String defaultConfigURL ) throws KettleException {\n    try {\n      String siteConfig = \"\";\n      if ( !Const.isEmpty( coreConfigURL ) ) {\n        siteConfig = space.environmentSubstitute( coreConfigURL );\n      }\n      String defaultConfig = \"\";\n      if ( !Const.isEmpty( ( defaultConfigURL ) ) ) {\n        defaultConfig = space.environmentSubstitute( defaultConfigURL );\n      }\n      MappingAdmin mappingAdmin = MappingUtils.getMappingAdmin( hBaseService, space, siteConfig, defaultConfig );\n      return mappingAdmin.getMapping( tableName, mappingName );\n    } catch ( Exception e ) {\n      throw new KettleException( e );\n    }\n  }\n\n  @VisibleForTesting\n  List<HBaseValueMetaInterface> createOutputFieldsDefinition( Mapping mapping, HBaseService hBaseService ) {\n    return createOutputFieldsDefinition( outputFieldsDefinition, mapping, hBaseService );\n  }\n\n  static List<HBaseValueMetaInterface> createOutputFieldsDefinition(\n      List<OutputFieldDefinition> outputFieldsDefinition, Mapping m_mapping, HBaseService hBaseService ) {\n    HBaseValueMetaInterfaceFactory valueMetaInterfaceFactory = hBaseService.getHBaseValueMetaInterfaceFactory();\n    ByteConversionUtil byteConversionUtil = hBaseService.getByteConversionUtil();\n\n    List<HBaseValueMetaInterface> outputFields = new ArrayList<>();\n    Map<String, HBaseValueMetaInterface> columns = m_mapping.getMappedColumns();\n    for ( OutputFieldDefinition fieldDefinition : outputFieldsDefinition ) {\n      HBaseValueMetaInterface valueMeta =\n          valueMetaInterfaceFactory.createHBaseValueMetaInterface( fieldDefinition.getFamily(), fieldDefinition\n                  .getColumnName(), fieldDefinition.getAlias(), ValueMeta.getType( fieldDefinition.getHbaseType() ), -1,\n              -1 );\n      valueMeta.setKey( fieldDefinition.isKey() );\n      valueMeta.setConversionMask( fieldDefinition.getFormat() );\n      HBaseValueMetaInterface mappedColumn = columns.get( 
fieldDefinition.getAlias() );\n      if ( mappedColumn != null && mappedColumn.getIndex() != null ) {\n        Object[] indexVal = mappedColumn.getIndex();\n        String indexString = byteConversionUtil.objectIndexValuesToString( indexVal );\n\n        Object[] vals = byteConversionUtil.stringIndexListToObjects( indexString );\n        valueMeta.setIndex( vals );\n        valueMeta.setStorageType( ValueMetaInterface.STORAGE_TYPE_INDEXED );\n      }\n      outputFields.add( valueMeta );\n    }\n    return outputFields;\n  }\n\n  @VisibleForTesting\n  List<ColumnFilter> createColumnFiltersFromDefinition( ColumnFilterFactory c ) {\n    return createColumnFiltersFromDefinition( filtersDefinition, c );\n  }\n\n  static List<ColumnFilter> createColumnFiltersFromDefinition( List<FilterDefinition> filtersDefinition,\n                                                               ColumnFilterFactory columnFilterFactory ) {\n    List<ColumnFilter> filters = new ArrayList<>();\n    for ( FilterDefinition filterDefinition : filtersDefinition ) {\n      ColumnFilter columnFilter = columnFilterFactory.createFilter( filterDefinition.getAlias() );\n      columnFilter.setFieldType( filterDefinition.getFieldType() );\n      columnFilter.setComparisonOperator( filterDefinition.getComparisonType() );\n      columnFilter.setConstant( filterDefinition.getConstant() );\n      columnFilter.setSignedComparison( filterDefinition.isSignedComparison() );\n      columnFilter.setFormat( filterDefinition.getFormat() );\n      filters.add( columnFilter );\n    }\n    return filters;\n  }\n\n  @Override\n  public String getXML() {\n    try {\n      applyInjection( new Variables() );\n    } catch ( KettleException e ) {\n      logError( \"Error occurred while injecting metadata. 
Transformation meta could be incorrect!\", e );\n    }\n    StringBuilder retval = new StringBuilder();\n\n    namedClusterLoadSaveUtil\n        .getXml( retval, namedClusterService, namedCluster, MetaStoreConst.getDefaultMetastore(), getLog() );\n\n    if ( parentStepMeta != null && parentStepMeta.getParentTransMeta() != null ) {\n      parentStepMeta.getParentTransMeta().getNamedClusterEmbedManager().addClusterToMeta( namedCluster.getName() );\n    }\n\n    if ( !Const.isEmpty( m_coreConfigURL ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"core_config_url\", m_coreConfigURL ) );\n    }\n    if ( !Const.isEmpty( m_defaultConfigURL ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"default_config_url\", m_defaultConfigURL ) );\n    }\n    if ( !Const.isEmpty( m_sourceTableName ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"source_table_name\", m_sourceTableName ) );\n    }\n    if ( !Const.isEmpty( m_sourceMappingName ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"source_mapping_name\", m_sourceMappingName ) );\n    }\n    if ( !Const.isEmpty( m_keyStart ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"key_start\", m_keyStart ) );\n    }\n    if ( !Const.isEmpty( m_keyStop ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"key_stop\", m_keyStop ) );\n    }\n    if ( !Const.isEmpty( m_scannerCacheSize ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"scanner_cache_size\", m_scannerCacheSize ) );\n    }\n\n    if ( m_outputFields != null && m_outputFields.size() > 0 ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.openTag( \"output_fields\" ) );\n\n      for ( HBaseValueMetaInterface vm : m_outputFields ) {\n        vm.getXml( retval );\n      }\n\n      retval.append( \"\\n    \" ).append( XMLHandler.closeTag( \"output_fields\" ) );\n    }\n\n    if ( m_filters != null 
&& m_filters.size() > 0 ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.openTag( \"column_filters\" ) );\n\n      for ( ColumnFilter f : m_filters ) {\n        f.appendXML( retval );\n      }\n      retval.append( \"\\n    \" ).append( XMLHandler.closeTag( \"column_filters\" ) );\n    }\n\n    retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"match_any_filter\", m_matchAnyFilter ) );\n\n    if ( m_mapping != null ) {\n      retval.append( m_mapping.getXML() );\n    }\n\n    return retval.toString();\n  }\n\n  @Override\n  public void loadXML( Node stepnode, List<DatabaseMeta> databases, IMetaStore metaStore )\n      throws KettleXMLException {\n\n    if ( metaStore == null ) {\n      metaStore = getMetastoreLocator().getMetastore();\n    }\n\n    this.namedCluster =\n        namedClusterLoadSaveUtil.loadClusterConfig( namedClusterService, null, repository, metaStore, stepnode, getLog() );\n\n    HBaseService hBaseService = null;\n    try {\n      hBaseService = getService();\n    } catch ( Exception e ) {\n      getLog().logError( e.getMessage() );\n    }\n\n    m_coreConfigURL = XMLHandler.getTagValue( stepnode, \"core_config_url\" );\n    m_defaultConfigURL = XMLHandler.getTagValue( stepnode, \"default_config_url\" );\n    m_sourceTableName = HbaseUtil.expandLegacyTableNameOnLoad( XMLHandler.getTagValue( stepnode, \"source_table_name\" ) );\n    m_sourceMappingName = XMLHandler.getTagValue( stepnode, \"source_mapping_name\" );\n    m_keyStart = XMLHandler.getTagValue( stepnode, \"key_start\" );\n    m_keyStop = XMLHandler.getTagValue( stepnode, \"key_stop\" );\n    m_scannerCacheSize = XMLHandler.getTagValue( stepnode, \"scanner_cache_size\" );\n    String m = XMLHandler.getTagValue( stepnode, \"match_any_filter\" );\n    if ( !Const.isEmpty( m ) ) {\n      m_matchAnyFilter = m.equalsIgnoreCase( \"Y\" );\n    }\n\n    if ( hBaseService != null ) {\n      HBaseValueMetaInterfaceFactory 
valueMetaInterfaceFactory = hBaseService.getHBaseValueMetaInterfaceFactory();\n      m_outputFields = valueMetaInterfaceFactory.createListFromNode( stepnode );\n\n      ColumnFilterFactory columnFilterFactory = hBaseService.getColumnFilterFactory();\n      MappingFactory mappingFactory = hBaseService.getMappingFactory();\n\n      Node filters = XMLHandler.getSubNode( stepnode, \"column_filters\" );\n      if ( filters != null && XMLHandler.countNodes( filters, \"filter\" ) > 0 ) {\n        int nrFilters = XMLHandler.countNodes( filters, \"filter\" );\n        m_filters = new ArrayList<>();\n\n        for ( int i = 0; i < nrFilters; i++ ) {\n          Node filterNode = XMLHandler.getSubNodeByNr( filters, \"filter\", i );\n          m_filters.add( columnFilterFactory.createFilter( filterNode ) );\n        }\n      }\n\n      Mapping tempMapping = mappingFactory.createMapping();\n      if ( tempMapping.loadXML( stepnode ) ) {\n        m_mapping = tempMapping;\n      } else {\n        m_mapping = null;\n      }\n    } else {\n      Mapping tempMapping = new AELHBaseMappingImpl();\n      if ( tempMapping.loadXML( stepnode ) ) {\n        m_mapping = tempMapping;\n      } else {\n        getLog().logError( \"There is no meta data to inflate meta object\" );\n      }\n\n      Node fields = XMLHandler.getSubNode( stepnode, \"output_fields\" );\n\n      if ( fields != null ) {\n        int nrfields = XMLHandler.countNodes( fields, \"field\" );\n        // Assign to the field; declaring a local variable here would shadow it and discard the parsed fields\n        m_outputFields = new ArrayList<>( nrfields );\n\n        for ( int i = 0; i < nrfields; i++ ) {\n          m_outputFields.add( createFromNode( XMLHandler.getSubNodeByNr( fields, \"field\", i ) ) );\n        }\n      }\n    }\n  }\n\n  private HBaseValueMetaInterface createFromNode( Node fieldNode ) {\n    String isKey = XMLHandler.getTagValue( fieldNode, \"key\" ).trim();\n    String alias = XMLHandler.getTagValue( fieldNode, \"alias\" ).trim();\n    String columnFamily = \"\";\n    
String columnName = alias;\n    if ( !isKey.equalsIgnoreCase( \"Y\" ) ) {\n      if ( XMLHandler.getTagValue( fieldNode, \"family\" ) != null ) {\n        columnFamily = XMLHandler.getTagValue( fieldNode, \"family\" ).trim();\n      }\n\n      if ( XMLHandler.getTagValue( fieldNode, \"column\" ) != null ) {\n        columnName = XMLHandler.getTagValue( fieldNode, \"column\" ).trim();\n      }\n    }\n\n    String typeS = XMLHandler.getTagValue( fieldNode, \"type\" ).trim();\n    String tableName = XMLHandler.getTagValue( fieldNode, \"table_name\" );\n    String mappingName = XMLHandler.getTagValue( fieldNode, \"mapping_name\" );\n\n    AELHBaseValueMetaImpl vm = new AELHBaseValueMetaImpl( isKey.equalsIgnoreCase( \"Y\" ), alias, columnName, columnFamily, tableName, mappingName );\n    vm.setHBaseTypeFromString( typeS );\n    return vm;\n  }\n\n  @Override\n  public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_transformation, ObjectId id_step ) throws KettleException {\n\n    if ( metaStore == null ) {\n      metaStore = getMetastoreLocator().getMetastore();\n    }\n\n    namedClusterLoadSaveUtil.saveRep( rep, metaStore, id_transformation, id_step, namedClusterService, namedCluster, getLog() );\n\n    if ( !Const.isEmpty( m_coreConfigURL ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"core_config_url\", m_coreConfigURL );\n    }\n    if ( !Const.isEmpty( m_defaultConfigURL ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"default_config_url\", m_defaultConfigURL );\n    }\n    if ( !Const.isEmpty( m_sourceTableName ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"source_table_name\", m_sourceTableName );\n    }\n    if ( !Const.isEmpty( m_sourceMappingName ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"source_mapping_name\", m_sourceMappingName );\n    }\n    if ( !Const.isEmpty( m_keyStart ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, 
\"key_start\", m_keyStart );\n    }\n    if ( !Const.isEmpty( m_keyStop ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"key_stop\", m_keyStop );\n    }\n    if ( !Const.isEmpty( m_scannerCacheSize ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"scanner_cache_size\", m_scannerCacheSize );\n    }\n\n    if ( m_outputFields != null && m_outputFields.size() > 0 ) {\n\n      for ( int i = 0; i < m_outputFields.size(); i++ ) {\n        m_outputFields.get( i ).saveRep( rep, id_transformation, id_step, i );\n      }\n    }\n\n    if ( m_filters != null && m_filters.size() > 0 ) {\n      for ( int i = 0; i < m_filters.size(); i++ ) {\n        ColumnFilter f = m_filters.get( i );\n        f.saveRep( rep, id_transformation, id_step, i );\n      }\n    }\n\n    rep.saveStepAttribute( id_transformation, id_step, 0, \"match_any_filter\", m_matchAnyFilter );\n\n    if ( m_mapping != null ) {\n      m_mapping.saveRep( rep, id_transformation, id_step );\n    }\n  }\n\n  @Override\n  public void readRep( Repository rep, IMetaStore metaStore, ObjectId id_step, List<DatabaseMeta> databases )\n      throws KettleException {\n\n    if ( metaStore == null ) {\n      metaStore = getMetastoreLocator().getMetastore();\n    }\n\n    this.namedCluster = namedClusterLoadSaveUtil.loadClusterConfig( namedClusterService, id_step, rep, metaStore, null, getLog() );\n\n    HBaseService hBaseService = null;\n    try {\n      hBaseService = getService();\n    } catch ( Exception e ) {\n      getLog().logError( e.getMessage() );\n    }\n\n    m_coreConfigURL = rep.getStepAttributeString( id_step, 0, \"core_config_url\" );\n    m_defaultConfigURL = rep.getStepAttributeString( id_step, 0, \"default_config_url\" );\n    m_sourceTableName = HbaseUtil.expandLegacyTableNameOnLoad( rep.getStepAttributeString( id_step, 0, \"source_table_name\" ) );\n    m_sourceMappingName = rep.getStepAttributeString( id_step, 0, \"source_mapping_name\" );\n    m_keyStart = 
rep.getStepAttributeString( id_step, 0, \"key_start\" );\n    m_keyStop = rep.getStepAttributeString( id_step, 0, \"key_stop\" );\n    m_matchAnyFilter = rep.getStepAttributeBoolean( id_step, 0, \"match_any_filter\" );\n    m_scannerCacheSize = rep.getStepAttributeString( id_step, 0, \"scanner_cache_size\" );\n\n    if ( hBaseService != null ) {\n      HBaseValueMetaInterfaceFactory valueMetaInterfaceFactory = hBaseService.getHBaseValueMetaInterfaceFactory();\n      ColumnFilterFactory columnFilterFactory = hBaseService.getColumnFilterFactory();\n      MappingFactory mappingFactory = hBaseService.getMappingFactory();\n\n      m_outputFields = valueMetaInterfaceFactory.createListFromRepository( rep, id_step );\n\n      int nrFilters = rep.countNrStepAttributes( id_step, \"cf_comparison_opp\" );\n      if ( nrFilters > 0 ) {\n        m_filters = new ArrayList<>();\n\n        for ( int i = 0; i < nrFilters; i++ ) {\n          m_filters.add( columnFilterFactory.createFilter( rep, i, id_step ) );\n        }\n      }\n\n      Mapping tempMapping = mappingFactory.createMapping();\n      if ( tempMapping.readRep( rep, id_step ) ) {\n        m_mapping = tempMapping;\n      } else {\n        m_mapping = null;\n      }\n    }\n  }\n\n  @Override\n  public void check( List<CheckResultInterface> remarks, TransMeta transMeta, StepMeta stepMeta, RowMetaInterface prev,\n                     String[] input, String[] output, RowMetaInterface info, VariableSpace variableSpace, Repository repository, IMetaStore metaStore ) {\n\n    if ( metaStore == null ) {\n      metaStore = getMetastoreLocator().getMetastore();\n    }\n\n    RowMeta r = new RowMeta();\n    try {\n      getFields( transMeta.getBowl(), r, \"testName\", null, null, null, repository, metaStore );\n\n      CheckResult cr =\n          new CheckResult( CheckResult.TYPE_RESULT_OK, \"Step can connect to HBase. 
Named mapping exists\", stepMeta );\n      remarks.add( cr );\n    } catch ( Exception ex ) {\n      CheckResult cr = new CheckResult( CheckResult.TYPE_RESULT_ERROR, ex.getMessage(), stepMeta );\n      remarks.add( cr );\n    }\n  }\n\n  public StepInterface getStep( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr,\n                                TransMeta transMeta, Trans trans ) {\n\n    return new HBaseInput( stepMeta, stepDataInterface, copyNr, transMeta, trans, namedClusterServiceLocator );\n  }\n\n  public StepDataInterface getStepData() {\n\n    return new HBaseInputData();\n  }\n\n  private void setupCachedMapping( VariableSpace space ) throws KettleStepException {\n    HBaseService hBaseService = null;\n    try {\n      hBaseService = getService();\n    } catch ( ClusterInitializationException e ) {\n      throw new KettleStepException( e );\n    }\n    if ( Const.isEmpty( m_coreConfigURL ) && Const.isEmpty( namedCluster.getZooKeeperHost() ) ) {\n      throw new KettleStepException( \"No output fields available (missing \" + \"connection details)!\" );\n    }\n\n    if ( m_mapping == null && ( Const.isEmpty( m_sourceTableName ) || Const.isEmpty( m_sourceMappingName ) ) ) {\n      throw new KettleStepException( \"No output fields available (missing table \" + \"mapping details)!\" );\n    }\n\n    if ( m_cachedMapping == null ) {\n      // cache the mapping information\n      if ( m_mapping != null ) {\n        m_cachedMapping = m_mapping;\n      } else {\n        String coreConf = null;\n        String defaultConf = null;\n\n        try {\n          if ( !Const.isEmpty( m_coreConfigURL ) ) {\n            coreConf = space.environmentSubstitute( m_coreConfigURL );\n          }\n          if ( !Const.isEmpty( ( m_defaultConfigURL ) ) ) {\n            defaultConf = space.environmentSubstitute( m_defaultConfigURL );\n          }\n\n        } catch ( Exception ex ) {\n          throw new KettleStepException( ex.getMessage(), ex );\n        
}\n\n        List<String> forLogging = new ArrayList<String>();\n\n        try ( HBaseConnection conf = hBaseService.getHBaseConnection( space, coreConf, defaultConf, getLog() ) ) {\n          MappingAdmin mappingAdmin = null;\n\n          for ( String m : forLogging ) {\n            logBasic( m );\n          }\n\n          mappingAdmin = new MappingAdmin( conf );\n\n          m_cachedMapping = mappingAdmin.getMapping( space.environmentSubstitute( m_sourceTableName ),\n            space.environmentSubstitute( m_sourceMappingName ) );\n        } catch ( Exception ex ) {\n          throw new KettleStepException( ex.getMessage(), ex );\n        }\n      }\n    }\n  }\n\n  @Override\n  public void getFields( Bowl bowl, RowMetaInterface rowMeta, String origin, RowMetaInterface[] info, StepMeta nextStep,\n                         VariableSpace space, Repository repository, IMetaStore metaStore ) throws KettleStepException {\n\n    rowMeta.clear(); // start afresh - eats the input\n\n    if ( m_outputFields != null && m_outputFields.size() > 0 ) {\n      // we have some stored field information - use this\n      for ( HBaseValueMetaInterface vm : m_outputFields ) {\n\n        vm.setOrigin( origin );\n        rowMeta.addValueMeta( vm );\n      }\n    } else {\n      // want all fields from the mapping - connect and get the details\n      setupCachedMapping( space );\n\n      int kettleType;\n      if ( m_cachedMapping.getKeyType() == Mapping.KeyType.DATE\n          || m_cachedMapping.getKeyType() == Mapping.KeyType.UNSIGNED_DATE ) {\n        kettleType = ValueMetaInterface.TYPE_DATE;\n      } else if ( m_cachedMapping.getKeyType() == Mapping.KeyType.STRING ) {\n        kettleType = ValueMetaInterface.TYPE_STRING;\n      } else if ( m_cachedMapping.getKeyType() == Mapping.KeyType.BINARY ) {\n        kettleType = ValueMetaInterface.TYPE_BINARY;\n      } else {\n        kettleType = ValueMetaInterface.TYPE_INTEGER;\n      }\n\n      ValueMetaInterface keyMeta = new ValueMeta( 
m_cachedMapping.getKeyName(), kettleType );\n\n      keyMeta.setOrigin( origin );\n      rowMeta.addValueMeta( keyMeta );\n      // }\n\n      // Add the rest of the fields in the mapping\n      Map<String, HBaseValueMetaInterface> mappedColumnsByAlias = m_cachedMapping.getMappedColumns();\n      Set<String> aliasSet = mappedColumnsByAlias.keySet();\n      for ( String alias : aliasSet ) {\n        HBaseValueMetaInterface columnMeta = mappedColumnsByAlias.get( alias );\n        columnMeta.setOrigin( origin );\n        rowMeta.addValueMeta( columnMeta );\n      }\n    }\n  }\n\n  public NamedCluster getNamedCluster() {\n    return namedCluster;\n  }\n\n  public void setNamedCluster( NamedCluster namedCluster ) {\n    this.namedCluster = namedCluster;\n  }\n\n  public NamedClusterService getNamedClusterService() {\n    return namedClusterService;\n  }\n\n  public NamedClusterServiceLocator getNamedClusterServiceLocator() {\n    return namedClusterServiceLocator;\n  }\n\n  public RuntimeTestActionService getRuntimeTestActionService() {\n    return runtimeTestActionService;\n  }\n\n  public RuntimeTester getRuntimeTester() {\n    return runtimeTester;\n  }\n\n  public List<OutputFieldDefinition> getOutputFieldsDefinition() {\n    return outputFieldsDefinition;\n  }\n\n  public void setOutputFieldsDefinition( List<OutputFieldDefinition> outputFieldsDefinition ) {\n    this.outputFieldsDefinition = outputFieldsDefinition;\n  }\n\n  public List<FilterDefinition> getFiltersDefinition() {\n    return filtersDefinition;\n  }\n\n  public void setFiltersDefinition( List<FilterDefinition> filtersDefinition ) {\n    this.filtersDefinition = filtersDefinition;\n  }\n\n  public MappingDefinition getMappingDefinition() {\n    return mappingDefinition;\n  }\n\n  public void setMappingDefinition( MappingDefinition mappingDefinition ) {\n    this.mappingDefinition = mappingDefinition;\n  }\n\n  protected HBaseService getService() throws ClusterInitializationException {\n    
HBaseService service = null;\n    try {\n      String embeddedMetastoreProviderKey =\n        parentStepMeta == null || parentStepMeta.getParentTransMeta() == null ? null\n          : parentStepMeta.getParentTransMeta().getEmbeddedMetastoreProviderKey();\n      service = namedClusterServiceLocator.getService( this.namedCluster, HBaseService.class,\n        embeddedMetastoreProviderKey );\n      this.serviceStatus = ServiceStatus.OK;\n    } catch ( Exception e ) {\n      this.serviceStatus = ServiceStatus.notOk( e );\n      logError( Messages.getString( \"HBaseInput.Error.ServiceStatus\" ) );\n      throw e;\n    }\n    return service;\n  }\n\n  public ServiceStatus getServiceStatus() {\n    if ( this.serviceStatus == null ) {\n      this.serviceStatus = ServiceStatus.OK;\n    }\n    return this.serviceStatus;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/input/Messages.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.input;\n\nimport org.pentaho.di.i18n.BaseMessages;\n\npublic class Messages {\n  public static final Class<Messages> PKG = Messages.class;\n\n  public static String getString( String key ) {\n    return BaseMessages.getString( PKG, key );\n  }\n\n  public static String getString( String key, String param1 ) {\n    return BaseMessages.getString( PKG, key, param1 );\n  }\n\n  public static String getString( String key, String param1, String param2 ) {\n    return BaseMessages.getString( PKG, key, param1, param2 );\n  }\n\n  public static String getString( String key, String param1, String param2, String param3 ) {\n    return BaseMessages.getString( PKG, key, param1, param2, param3 );\n  }\n\n  public static String getString( String key, String param1, String param2, String param3, String param4 ) {\n    return BaseMessages.getString( PKG, key, param1, param2, param3, param4 );\n  }\n\n  public static String getString(\n      String key, String param1, String param2, String param3, String param4, String param5 ) {\n    return BaseMessages.getString( PKG, key, param1, param2, param3, param4, param5 );\n  }\n\n  public static String getString( String key, String param1, String param2, String param3, String param4,\n      String param5, String param6 ) {\n    return BaseMessages.getString( PKG, key, param1, param2, param3, param4, param5, param6 );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/input/OutputFieldDefinition.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.input;\n\nimport org.pentaho.di.core.injection.Injection;\n\npublic class OutputFieldDefinition {\n\n  @Injection( name = \"OUTPUT_FIELD_ALIAS\", group = \"OUTPUT_FIELDS\" )\n  private String alias;\n\n  @Injection( name = \"OUTPUT_FIELD_KEY\", group = \"OUTPUT_FIELDS\" )\n  private boolean key;\n\n  @Injection( name = \"OUTPUT_FIELD_COLUMN_NAME\", group = \"OUTPUT_FIELDS\" )\n  private String columnName;\n\n  @Injection( name = \"OUTPUT_FIELD_FAMILY\", group = \"OUTPUT_FIELDS\" )\n  private String family;\n\n  @Injection( name = \"OUTPUT_FIELD_TYPE\", group = \"OUTPUT_FIELDS\" )\n  private String hbaseType;\n\n  @Injection( name = \"OUTPUT_FIELD_FORMAT\", group = \"OUTPUT_FIELDS\" )\n  private String format;\n\n  public boolean isKey() {\n    return key;\n  }\n\n  public void setKey( boolean key ) {\n    this.key = key;\n  }\n\n  public String getAlias() {\n    return alias;\n  }\n\n  public void setAlias( String alias ) {\n    this.alias = alias;\n  }\n\n  public String getColumnName() {\n    return columnName;\n  }\n\n  public void setColumnName( String columnName ) {\n    this.columnName = columnName;\n  }\n\n  public String getFamily() {\n    return family;\n  }\n\n  public void setFamily( String family ) {\n    this.family = family;\n  }\n\n  public String getHbaseType() {\n    return hbaseType;\n  }\n\n  public void setHbaseType( String hbaseType ) {\n    this.hbaseType = hbaseType;\n  }\n\n  public String getFormat() {\n    return format;\n  }\n\n  public void setFormat( String format ) {\n    
this.format = format;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/mapping/ConfigurationProducer.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.mapping;\n\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseConnection;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\n\nimport java.io.IOException;\n\n/**\n * Interface to something that can produce a connection to HBase\n * \n * @author Mark Hall (mhall{[at]}penthao{[dot]}com)\n * @version $Revision$\n * \n */\npublic interface ConfigurationProducer {\n  HBaseService getHBaseService() throws ClusterInitializationException;\n\n  /**\n   * Get a configuration object encapsulating connection information for HBase\n   * \n   * @return a HBaseConnection object for interacting with the currently configured connection to HBase\n   * @throws Exception\n   *           if the connection can't be supplied for some reason\n   */\n  HBaseConnection getHBaseConnection() throws ClusterInitializationException, IOException;\n\n  String getCurrentConfiguration();\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/mapping/FieldProducer.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.mapping;\n\nimport org.pentaho.di.core.row.RowMetaInterface;\n\n/**\n * Interface to something that can provide meta data on the fields that it is receiving\n * \n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n * @version $Revision$\n * \n */\npublic interface FieldProducer {\n\n  /**\n   * Get the incoming fields\n   * \n   * @return the incoming fields\n   */\n  RowMetaInterface getIncomingFields();\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/mapping/HBaseRowToKettleTuple.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.mapping;\n\nimport org.pentaho.hadoop.shim.api.hbase.ByteConversionUtil;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterfaceFactory;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowDataUtil;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.NavigableMap;\n\n/**\n * Class for decoding HBase rows to a <key, family, column, value, time stamp> Kettle row format.\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n * @version $Revision$\n */\npublic class HBaseRowToKettleTuple {\n\n  /**\n   * Holds a set of tuples (Kettle rows) - one for each column from an HBase row\n   */\n  protected List<Object[]> mDecodedTuples;\n\n  /**\n   * Index in the Kettle row format of the key column\n   */\n  protected int mKeyIndex = -1;\n\n  /**\n   * Index in the Kettle row format of the family column\n   */\n  protected int mFamilyIndex = -1;\n\n  /**\n   * Index in the Kettle row format of the column name column\n   */\n  protected int mColNameIndex = -1;\n\n  /**\n   * Index in the Kettle row format of the column value column\n   */\n  protected int mValueIndex = -1;\n\n  /**\n   * Index in the Kettle row format of the time stamp column\n   */\n  
protected int mTimestampIndex = -1;\n\n  /**\n   * List of (optional) byte array encoded user-specified column families to extract column values for\n   */\n  protected List<byte[]> mUserSpecifiedFamilies;\n\n  /**\n   * List of (optional) human-readable user-specified column families to extract column values for\n   */\n  protected List<String> mUserSpecifiedFamiliesHumanReadable;\n\n  protected List<HBaseValueMetaInterface> mTupleColsFromAliasMap;\n\n  protected ByteConversionUtil mBytesUtil;\n\n  public HBaseRowToKettleTuple( ByteConversionUtil bytesUtil ) {\n    if ( bytesUtil == null ) {\n      throw new NullPointerException();\n    }\n    mBytesUtil = bytesUtil;\n  }\n\n  public void reset() {\n    mDecodedTuples = null;\n\n    mKeyIndex = -1;\n    mFamilyIndex = -1;\n    mColNameIndex = -1;\n    mValueIndex = -1;\n    mTimestampIndex = -1;\n    mUserSpecifiedFamilies = null;\n    mUserSpecifiedFamiliesHumanReadable = null;\n\n    mTupleColsFromAliasMap = null;\n  }\n\n  /**\n   * Convert an HBase row to (potentially) multiple Kettle rows in tuple format.\n   *\n   * @param mapping                the mapping information to use (must be a \"tuple\" mapping)\n   * @param tupleColsMappedByAlias the meta data for each of the tuple columns the user has opted to have output\n   * @param outputRowMeta          the outgoing Kettle row format\n   * @return a list of Kettle rows in tuple format\n   * @throws KettleException if a problem occurs\n   */\n  public List<Object[]> hbaseRowToKettleTupleMode( HBaseValueMetaInterfaceFactory hBaseValueMetaInterfaceFactory,\n                                                   Object result, Mapping mapping,\n                                                   Map<String, HBaseValueMetaInterface> tupleColsMappedByAlias,\n                                                   RowMetaInterface outputRowMeta ) throws KettleException {\n\n    if ( mDecodedTuples == null ) {\n      mTupleColsFromAliasMap = new ArrayList<>();\n      // add 
the key first - type (or name for that matter)\n      // is not important as this is just a dummy placeholder\n      // here so that indexes into mTupleColsFromAliasMap align with the output\n      // row meta\n      // format\n      HBaseValueMetaInterface keyMeta = hBaseValueMetaInterfaceFactory\n        .createHBaseValueMetaInterface( null, mapping.getKeyName(), \"dummy\", ValueMetaInterface.TYPE_INTEGER, 0, 0 );\n      mTupleColsFromAliasMap.add( keyMeta );\n\n      for ( Map.Entry<String, HBaseValueMetaInterface> entry : tupleColsMappedByAlias.entrySet() ) {\n        mTupleColsFromAliasMap.add( entry.getValue() );\n      }\n    }\n\n    return hbaseRowToKettleTupleMode( result, mapping, mTupleColsFromAliasMap, outputRowMeta );\n  }\n\n  /**\n   * Convert an HBase row to (potentially) multiple Kettle rows in tuple format.\n   *\n   * @param mapping       the mapping information to use (must be a \"tuple\" mapping)\n   * @param tupleCols     the meta data for each of the tuple columns the user has opted to have output\n   * @param outputRowMeta the outgoing Kettle row format\n   * @return a list of Kettle rows in tuple format\n   * @throws KettleException if a problem occurs\n   */\n  public List<Object[]> hbaseRowToKettleTupleMode( Object result, Mapping mapping,\n                                                   List<HBaseValueMetaInterface> tupleCols,\n                                                   RowMetaInterface outputRowMeta ) throws KettleException {\n\n    if ( mDecodedTuples == null ) {\n      mDecodedTuples = new ArrayList<>();\n      mKeyIndex = outputRowMeta.indexOfValue( mapping.getKeyName() );\n      mFamilyIndex = outputRowMeta.indexOfValue( Mapping.TupleMapping.FAMILY.toString() );\n      mColNameIndex = outputRowMeta.indexOfValue( Mapping.TupleMapping.COLUMN.toString() );\n      mValueIndex = outputRowMeta.indexOfValue( Mapping.TupleMapping.VALUE.toString() );\n      mTimestampIndex = 
outputRowMeta.indexOfValue( Mapping.TupleMapping.TIMESTAMP.toString() );\n\n      if ( !Const.isEmpty( mapping.getTupleFamilies() ) ) {\n        String[] familiesS = mapping.getTupleFamiliesSplit();\n        mUserSpecifiedFamilies = new ArrayList<>();\n        mUserSpecifiedFamiliesHumanReadable = new ArrayList<>();\n\n        for ( String family : familiesS ) {\n          mUserSpecifiedFamiliesHumanReadable.add( family );\n          mUserSpecifiedFamilies.add( mBytesUtil.toBytes( family.trim() ) );\n        }\n      }\n    } else {\n      mDecodedTuples.clear();\n    }\n\n    byte[] rawKey = null;\n    try {\n      rawKey = (byte[]) result.getClass().getMethod( \"getRow\" ).invoke( result );\n    } catch ( Exception ex ) {\n      throw new KettleException( ex );\n    }\n    Object decodedKey = mapping.decodeKeyValue( rawKey );\n\n    NavigableMap<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>> rowData = null;\n    try {\n      rowData =\n        (NavigableMap<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>>) result.getClass().getMethod( \"getMap\" )\n          .invoke( result );\n    } catch ( Exception ex ) {\n      throw new KettleException( ex );\n    }\n\n    if ( !Const.isEmpty( mapping.getTupleFamilies() ) ) {\n      int i = 0;\n      for ( byte[] family : mUserSpecifiedFamilies ) {\n        NavigableMap<byte[], NavigableMap<Long, byte[]>> colMap = rowData.get( family );\n        for ( Map.Entry<byte[], NavigableMap<Long, byte[]>> colMapEntry : colMap.entrySet() ) {\n          NavigableMap<Long, byte[]> valuesByTimestamp = colMapEntry.getValue();\n\n          Object[] newTuple = RowDataUtil.allocateRowData( outputRowMeta.size() );\n\n          // row key\n          if ( mKeyIndex != -1 ) {\n            newTuple[ mKeyIndex ] = decodedKey;\n          }\n\n          // get value of most recent column value\n          Map.Entry<Long, byte[]> mostRecentColVal = valuesByTimestamp.lastEntry();\n\n          // store the timestamp\n          if ( 
mTimestampIndex != -1 ) {\n            newTuple[ mTimestampIndex ] = mostRecentColVal.getKey();\n          }\n\n          // column name\n          if ( mColNameIndex != -1 ) {\n            HBaseValueMetaInterface colNameMeta = tupleCols.get( mColNameIndex );\n            Object decodedColName = colNameMeta.decodeColumnValue( colMapEntry.getKey() );\n            newTuple[ mColNameIndex ] = decodedColName;\n          }\n\n          // column value\n          if ( mValueIndex != -1 ) {\n            HBaseValueMetaInterface colValueMeta = tupleCols.get( mValueIndex );\n            Object decodedValue =\n              colValueMeta.decodeColumnValue( mostRecentColVal.getValue() );\n            newTuple[ mValueIndex ] = decodedValue;\n          }\n\n          // column family\n          if ( mFamilyIndex != -1 ) {\n            newTuple[ mFamilyIndex ] = mUserSpecifiedFamiliesHumanReadable.get( i );\n          }\n\n          mDecodedTuples.add( newTuple );\n        }\n        i++;\n      }\n    } else {\n      // process all column families\n      for ( Map.Entry<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>> rowDataEntry : rowData.entrySet() ) {\n\n        // column family\n        Object decodedFamily = null;\n        if ( mFamilyIndex != -1 ) {\n          HBaseValueMetaInterface colFamMeta = tupleCols.get( mFamilyIndex );\n          decodedFamily = colFamMeta.decodeColumnValue( rowDataEntry.getKey() );\n        }\n\n        NavigableMap<byte[], NavigableMap<Long, byte[]>> colMap = rowDataEntry.getValue();\n        for ( Map.Entry<byte[], NavigableMap<Long, byte[]>> colMapEntry : colMap.entrySet() ) {\n          NavigableMap<Long, byte[]> valuesByTimestamp = colMapEntry.getValue();\n\n          Object[] newTuple = RowDataUtil.allocateRowData( outputRowMeta.size() );\n\n          // row key\n          if ( mKeyIndex != -1 ) {\n            newTuple[ mKeyIndex ] = decodedKey;\n          }\n\n          // get value of most recent column value\n          
Map.Entry<Long, byte[]> mostRecentColVal = valuesByTimestamp.lastEntry();\n\n          // store the timestamp\n          if ( mTimestampIndex != -1 ) {\n            newTuple[ mTimestampIndex ] = mostRecentColVal.getKey();\n          }\n\n          // column name\n          if ( mColNameIndex != -1 ) {\n            HBaseValueMetaInterface colNameMeta = tupleCols.get( mColNameIndex );\n            Object decodedColName = colNameMeta.decodeColumnValue( colMapEntry.getKey() );\n            newTuple[ mColNameIndex ] = decodedColName;\n          }\n\n          // column value\n          if ( mValueIndex != -1 ) {\n            HBaseValueMetaInterface colValueMeta = tupleCols.get( mValueIndex );\n            Object decodedValue = colValueMeta.decodeColumnValue( mostRecentColVal.getValue() );\n            newTuple[ mValueIndex ] = decodedValue;\n          }\n\n          // column family\n          if ( mFamilyIndex != -1 ) {\n            newTuple[ mFamilyIndex ] = decodedFamily;\n          }\n\n          mDecodedTuples.add( newTuple );\n        }\n      }\n    }\n\n    return mDecodedTuples;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/mapping/MappingAdmin.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.mapping;\n\nimport org.pentaho.big.data.kettle.plugins.hbase.HbaseUtil;\nimport org.pentaho.hadoop.shim.api.hbase.ByteConversionUtil;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseConnection;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.MappingFactory;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterfaceFactory;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBasePut;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBaseTable;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBaseTableWriteOperationManager;\nimport org.pentaho.hadoop.shim.api.hbase.Result;\nimport org.pentaho.hadoop.shim.api.hbase.table.ResultScanner;\nimport org.pentaho.hadoop.shim.api.hbase.table.ResultScannerBuilder;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\n\nimport java.io.Closeable;\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.Calendar;\nimport java.util.Collections;\nimport java.util.Date;\nimport java.util.GregorianCalendar;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.NavigableMap;\nimport java.util.Random;\nimport java.util.Set;\nimport java.util.TreeMap;\nimport java.util.TreeSet;\n\n/**\n * Class for managing a mapping table in HBase. 
Has routines for creating the mapping table, writing and reading\n * mappings to/from the table and creating a test table for debugging purposes. Also has a rough and ready command line\n * interface. For more information on the structure of a table mapping see org.pentaho.hbase.mapping.Mapping.\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n */\npublic class MappingAdmin implements Closeable {\n\n  /**\n   * Configuration object for the connection\n   */\n  // protected Configuration m_connection;\n\n  private final HBaseConnection hBaseConnection;\n\n  /** Name of the mapping table (might make this configurable at some stage) */\n  protected String m_mappingTableName = \"pentaho_mappings\";\n\n  /** family name to hold the mapped column meta data in a mapping */\n  public static final String COLUMNS_FAMILY_NAME = \"columns\";\n\n  /**\n   * family name to hold the key meta data in a mapping. This meta data will be the same for any mapping defined on the\n   * same table\n   */\n  public static final String KEY_FAMILY_NAME = \"key\";\n\n  /**\n   * Constructor. 
No connection information configured.\n   */\n  //  public MappingAdmin() {\n  //    try {\n  //      HadoopConfiguration active =\n  //          HadoopConfigurationBootstrap.getHadoopConfigurationProvider().getActiveConfiguration();\n  //      HBaseShim hbaseShim = active.getHBaseShim();\n  //      m_bytesUtil = hbaseShim.getHBaseConnection().getBytesUtil();\n  //    } catch ( Exception ex ) {\n  //      // catastrophic failure if we can't obtain a concrete implementation\n  //      throw new RuntimeException( ex );\n  //    }\n  //  }\n\n\n  public MappingAdmin( HBaseConnection hBaseConnection ) {\n    this.hBaseConnection = hBaseConnection;\n  }\n\n  /**\n   * Set the name of the mapping table.\n   *\n   * @param tableName\n   *          the name to use for the mapping table.\n   */\n  public void setMappingTableName( String tableName ) {\n    m_mappingTableName = tableName;\n  }\n\n  /**\n   * Get the name of the mapping table\n   *\n   * @return the name of the mapping table\n   */\n  public String getMappingTableName() {\n    return m_mappingTableName;\n  }\n\n  /**\n   * Creates a test mapping (in standard format) called \"MarksTestMapping\" for a test table called \"MarksTestTable\"\n   *\n   * @throws Exception\n   *           if a problem occurs\n   */\n  public void createTestMapping() throws Exception {\n    String keyName = \"MyKey\";\n    String tableName = \"MarksTestTable\";\n    String mappingName = \"MarksTestMapping\";\n\n    MappingFactory mappingFactory = hBaseConnection.getMappingFactory();\n    HBaseValueMetaInterfaceFactory valueMetaInterfaceFactory = hBaseConnection.getHBaseValueMetaInterfaceFactory();\n\n    Mapping.KeyType keyType = Mapping.KeyType.LONG;\n    Mapping testMapping = mappingFactory.createMapping( tableName, mappingName, keyName, keyType );\n\n    String family1 = \"Family1\";\n    String colA = \"first_string_column\";\n    HBaseValueMetaInterface vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family1, colA, 
colA, ValueMetaInterface.TYPE_STRING, -1, -1 );\n    vm.setTableName( tableName );\n    vm.setMappingName( mappingName );\n    testMapping.addMappedColumn( vm, false );\n\n    String colB = \"first_unsigned_int_column\";\n    vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family1, colB, colB, ValueMetaInterface.TYPE_INTEGER, -1, -1 );\n    vm.setIsLongOrDouble( false );\n    vm.setTableName( tableName );\n    vm.setMappingName( mappingName );\n    testMapping.addMappedColumn( vm, false );\n\n    String family2 = \"Family2\";\n    String colC = \"first_indexed_column\";\n    vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family2, colC, colC, ValueMetaInterface.TYPE_STRING, -1, -1 );\n    vm.setTableName( tableName );\n    vm.setMappingName( mappingName );\n    vm.setStorageType( ValueMetaInterface.STORAGE_TYPE_INDEXED );\n    Object[] vals = { \"nomVal1\", \"nomVal2\", \"nomVal3\" };\n    vm.setIndex( vals );\n    testMapping.addMappedColumn( vm, false );\n\n    String colD = \"first_binary_column\";\n    vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family1, colD, colD, ValueMetaInterface.TYPE_BINARY, -1, -1 );\n    vm.setTableName( tableName );\n    vm.setMappingName( mappingName );\n    testMapping.addMappedColumn( vm, false );\n\n    String colE = \"first_boolean_column\";\n    vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family1, colE, colE, ValueMetaInterface.TYPE_BOOLEAN, -1, -1 );\n    vm.setTableName( tableName );\n    vm.setMappingName( mappingName );\n    testMapping.addMappedColumn( vm, false );\n\n    String colF = \"first_signed_date_column\";\n    vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family1, colF, colF, ValueMetaInterface.TYPE_DATE, -1, -1 );\n    vm.setTableName( tableName );\n    vm.setMappingName( mappingName );\n    testMapping.addMappedColumn( vm, false );\n\n    String colG = \"first_signed_double_column\";\n    vm = 
valueMetaInterfaceFactory.createHBaseValueMetaInterface( family2, colG, colG, ValueMetaInterface.TYPE_NUMBER, -1, -1 );\n    vm.setTableName( tableName );\n    vm.setMappingName( mappingName );\n    testMapping.addMappedColumn( vm, false );\n\n    String colH = \"first_signed_float_column\";\n    vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family2, colH, colH, ValueMetaInterface.TYPE_NUMBER, -1, -1 );\n    vm.setIsLongOrDouble( false );\n    vm.setTableName( tableName );\n    vm.setMappingName( mappingName );\n    testMapping.addMappedColumn( vm, false );\n\n    String colI = \"first_signed_int_column\";\n    vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family2, colI, colI, ValueMetaInterface.TYPE_INTEGER, -1, -1 );\n    vm.setIsLongOrDouble( false );\n    vm.setTableName( tableName );\n    vm.setMappingName( mappingName );\n    testMapping.addMappedColumn( vm, false );\n\n    String colJ = \"first_signed_long_column\";\n    vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family2, colJ, colJ, ValueMetaInterface.TYPE_INTEGER, -1, -1 );\n    vm.setTableName( tableName );\n    vm.setMappingName( mappingName );\n    testMapping.addMappedColumn( vm, false );\n\n    String colK = \"first_unsigned_date_column\";\n    vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family2, colK, colK, ValueMetaInterface.TYPE_DATE, -1, -1 );\n    vm.setTableName( tableName );\n    vm.setMappingName( mappingName );\n    testMapping.addMappedColumn( vm, false );\n\n    String colL = \"first_unsigned_double_column\";\n    vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family2, colL, colL, ValueMetaInterface.TYPE_NUMBER, -1, -1 );\n    vm.setTableName( tableName );\n    vm.setMappingName( mappingName );\n    testMapping.addMappedColumn( vm, false );\n\n    String colM = \"first_unsigned_float_column\";\n    vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family2, colM, colM, 
ValueMetaInterface.TYPE_NUMBER, -1, -1 );\n    vm.setIsLongOrDouble( false );\n    vm.setTableName( tableName );\n    vm.setMappingName( mappingName );\n    testMapping.addMappedColumn( vm, false );\n\n    String colN = \"first_unsigned_long_column\";\n    vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family2, colN, colN, ValueMetaInterface.TYPE_INTEGER, -1, -1 );\n    vm.setTableName( tableName );\n    vm.setMappingName( mappingName );\n    testMapping.addMappedColumn( vm, false );\n\n    putMapping( testMapping, false );\n  }\n\n  /**\n   * Creates a test mapping (in tuple format) called \"MarksTestTupleMapping\" for a test table called\n   * \"MarksTestTupleTable\"\n   *\n   * @throws Exception\n   *           if a problem occurs\n   */\n  public void createTestTupleMapping() throws Exception {\n    String keyName = \"KEY\";\n    String tableName = \"MarksTestTupleTable\";\n    String mappingName = \"MarksTestTupleMapping\";\n\n    MappingFactory mappingFactory = hBaseConnection.getMappingFactory();\n    HBaseValueMetaInterfaceFactory valueMetaInterfaceFactory = hBaseConnection.getHBaseValueMetaInterfaceFactory();\n\n    Mapping.KeyType keyType = Mapping.KeyType.UNSIGNED_LONG;\n    Mapping testMapping = mappingFactory.createMapping( tableName, mappingName, keyName, keyType );\n    testMapping.setTupleMapping( true );\n    String family = \"\";\n    String colName = \"\";\n\n    HBaseValueMetaInterface vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family, colName, \"Family\", ValueMetaInterface.TYPE_STRING, -1, -1 );\n    testMapping.addMappedColumn( vm, true );\n    vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family, colName, \"Column\", ValueMetaInterface.TYPE_STRING, -1, -1 );\n    testMapping.addMappedColumn( vm, true );\n    vm = valueMetaInterfaceFactory.createHBaseValueMetaInterface( family, colName, \"Value\", ValueMetaInterface.TYPE_STRING, -1, -1 );\n    testMapping.addMappedColumn( vm, true );\n    vm = 
valueMetaInterfaceFactory.createHBaseValueMetaInterface( family, colName, \"Timestamp\", ValueMetaInterface.TYPE_INTEGER, -1, -1 );\n    vm.setIsLongOrDouble( true );\n    testMapping.addMappedColumn( vm, true );\n\n    putMapping( testMapping, false );\n  }\n\n  /**\n   * Creates a test table called \"MarksTestTupleTable\"\n   *\n   * @throws Exception\n   *           if a problem occurs\n   */\n  public void createTupleTestTable() throws Exception {\n    // create a test table in the same format as the test tuple mapping\n    if ( hBaseConnection == null ) {\n      throw new IOException( \"No connection exists yet!\" );\n    }\n    ByteConversionUtil byteConversionUtil = hBaseConnection.getByteConversionUtil();\n\n    HBaseTable marksTestTupleTable = hBaseConnection.getTable( \"MarksTestTupleTable\" );\n    if ( marksTestTupleTable.exists() ) {\n      // drop/delete the table and re-create\n      marksTestTupleTable.disable();\n      marksTestTupleTable.delete();\n    }\n\n    List<String> colFamilies = new ArrayList<String>();\n    colFamilies.add( \"Family1\" );\n    colFamilies.add( \"Family2\" );\n    marksTestTupleTable.create( colFamilies, null );\n    HBaseTableWriteOperationManager writeOperationManager =\n      marksTestTupleTable.createWriteOperationManager( (long) 1024 * 1024 * 12 );\n\n    for ( long key = 1; key < 500; key++ ) {\n      HBasePut hBasePut = writeOperationManager.createPut( byteConversionUtil.encodeKeyValue( key, Mapping.KeyType.UNSIGNED_LONG ) );\n      hBasePut.setWriteToWAL( false );\n\n      // 20 columns every second row (all columns are string)\n      for ( int i = 0; i < 10 * ( ( key % 2 ) + 1 ); i++ ) {\n        if ( i < 10 ) {\n          hBasePut.addColumn( \"Family1\", \"string_col\" + i, false, byteConversionUtil.toBytes( \"StringValue_\" + key ) );\n        } else {\n          hBasePut.addColumn( \"Family2\", \"string_col\" + i, false, byteConversionUtil.toBytes( \"StringValue_\" + key ) );\n        }\n\n        
}\n\n      hBasePut.execute();\n    }\n    writeOperationManager.flushCommits();\n    writeOperationManager.close();\n  }\n\n  /**\n   * Creates a test table called \"MarksTestTable\"\n   *\n   * @throws Exception\n   *           if a problem occurs\n   */\n  public void createTestTable() throws Exception {\n\n    // create a test table in the same format as the test mapping\n    ByteConversionUtil byteConversionUtil = hBaseConnection.getByteConversionUtil();\n\n    HBaseTable marksTestTable = hBaseConnection.getTable( \"MarksTestTable\" );\n    if ( marksTestTable.exists() ) {\n      // drop/delete the table and re-create\n      marksTestTable.disable();\n      marksTestTable.delete();\n    }\n\n    List<String> colFamilies = new ArrayList<String>();\n    colFamilies.add( \"Family1\" );\n    colFamilies.add( \"Family2\" );\n    marksTestTable.create( colFamilies, null );\n    HBaseTableWriteOperationManager writeOperationManager =\n      marksTestTable.createWriteOperationManager( (long) 1024 * 1024 * 12 );\n\n    // insert test rows of random stuff\n    Random r = new Random();\n    String[] nomVals = { \"nomVal1\", \"nomVal2\", \"nomVal3\" };\n    Date date = new Date();\n    Calendar c = new GregorianCalendar();\n    c.setTime( date );\n    Calendar c2 = new GregorianCalendar();\n    c2.set( 1970, 2, 1 );\n    for ( long key = -500; key < 20000; key++ ) {\n      HBasePut hBasePut = writeOperationManager.createPut( byteConversionUtil.encodeKeyValue( key, Mapping.KeyType.LONG ) );\n      hBasePut.setWriteToWAL( false );\n\n      // unsigned (positive) integer column\n\n      hBasePut.addColumn( \"Family1\", \"first_unsigned_int_column\", false, byteConversionUtil.toBytes( ( key < 0\n        ? 
(int) -key : key ) / 10 ) );\n\n      // String column\n      hBasePut\n        .addColumn( \"Family1\", \"first_string_column\", false, byteConversionUtil.toBytes( \"StringValue_\" + key ) );\n\n      // have some null values - every 10th row has no value for the indexed\n      // column\n      if ( key % 10L > 0 ) {\n        int index = r.nextInt( 3 );\n        String nomVal = nomVals[ index ];\n        hBasePut.addColumn( \"Family2\", \"first_indexed_column\", false, byteConversionUtil.toBytes( nomVal ) );\n      }\n\n      // signed integer column\n      double d = r.nextDouble();\n      int signedInt = r.nextInt( 100 );\n      if ( d < 0.5 ) {\n        signedInt = -signedInt;\n      }\n      hBasePut.addColumn( \"Family2\", \"first_signed_int_column\", false, byteConversionUtil.toBytes( signedInt ) );\n\n      // unsigned (positive) float column\n      float f = r.nextFloat() * 1000.0f;\n      hBasePut.addColumn( \"Family2\", \"first_unsigned_float_column\", false, byteConversionUtil.toBytes( f ) );\n\n      // signed float column\n      if ( d > 0.5 ) {\n        f = -f;\n      }\n      hBasePut.addColumn( \"Family2\", \"first_signed_float_column\", false, byteConversionUtil.toBytes( f ) );\n\n      // unsigned double column\n      double dd = d * 10000 * r.nextDouble();\n      hBasePut.addColumn( \"Family2\", \"first_unsigned_double_column\", false, byteConversionUtil.toBytes( dd ) );\n\n      // signed double\n      if ( d > 0.5 ) {\n        dd = -dd;\n      }\n      hBasePut.addColumn( \"Family2\", \"first_signed_double_column\", false, byteConversionUtil.toBytes( dd ) );\n\n      // unsigned long\n      long l = r.nextInt( 300 );\n      hBasePut.addColumn( \"Family2\", \"first_unsigned_long_column\", false, byteConversionUtil.toBytes( l ) );\n\n      if ( d < 0.5 ) {\n        l = -l;\n      }\n      hBasePut.addColumn( \"Family2\", \"first_signed_long_column\", false, byteConversionUtil.toBytes( l ) );\n\n      // unsigned date (vals >= 1st Jan 1970)\n     
 c.add( Calendar.DAY_OF_YEAR, 1 );\n\n      long longd = c.getTimeInMillis();\n      hBasePut.addColumn( \"Family1\", \"first_unsigned_date_column\", false, byteConversionUtil.toBytes( longd ) );\n\n      // signed date (vals < 1st Jan 1970)\n      c2.add( Calendar.DAY_OF_YEAR, -1 );\n      longd = c2.getTimeInMillis();\n\n      hBasePut.addColumn( \"Family1\", \"first_signed_date_column\", false, byteConversionUtil.toBytes( longd ) );\n\n      // boolean column\n      String bVal = \"\";\n      if ( d < 0.5 ) {\n        bVal = \"N\";\n      } else {\n        bVal = \"Y\";\n      }\n      hBasePut.addColumn( \"Family1\", \"first_boolean_column\", false, byteConversionUtil.toBytes( bVal ) );\n\n      // serialized objects\n      byte[] serialized = byteConversionUtil.encodeObject( Double.valueOf( d ) );\n\n      hBasePut.addColumn( \"Family1\", \"first_serialized_column\", false, serialized );\n\n      // binary (raw bytes)\n      byte[] rawStuff = byteConversionUtil.toBytes( 5034555 );\n      hBasePut.addColumn( \"Family1\", \"first_binary_column\", false, rawStuff );\n\n      hBasePut.execute();\n    }\n\n    writeOperationManager.flushCommits();\n    writeOperationManager.close();\n  }\n\n  /**\n   * Create the mapping table\n   *\n   * @param tableName\n   *          the fully qualified table name (with namespace) to create the mapping table for\n   * @throws Exception\n   *           if there is no connection specified or the mapping table already exists.\n   */\n  public void createMappingTable( String tableName ) throws Exception {\n    HBaseTable hBaseTable = hBaseConnection.getTable( getMappingTableName( tableName ) );\n    if ( hBaseTable.exists() ) {\n      throw new IOException( \"Mapping table already exists!\" );\n    }\n\n    List<String> colFamNames = new ArrayList<String>();\n    colFamNames.add( COLUMNS_FAMILY_NAME );\n    colFamNames.add( KEY_FAMILY_NAME );\n\n    hBaseTable.create( colFamNames, null );\n  }\n\n  /**\n   * Check to see if the specified mapping name exists for 
the specified table\n   *\n   * @param tableName\n   *          the name of the table\n   * @param mappingName\n   *          the name of the mapping\n   * @return true if the specified mapping exists for the specified table\n   * @throws Exception\n   *           if a problem occurs\n   */\n  public boolean mappingExists( String tableName, String mappingName ) throws Exception {\n    try ( HBaseTable hBaseTable = hBaseConnection.getTable( getMappingTableName( tableName ) ) ) {\n      if ( hBaseTable.exists() ) {\n        return hBaseTable.keyExists( hBaseConnection.getByteConversionUtil()\n          .compoundKey( HbaseUtil.parseQualifierFromTableName( tableName ), mappingName ) );\n      }\n      return false;\n    }\n  }\n\n  /**\n   * Get a list of fully qualified table names that have mappings in the given namespace. If the namespace is null then\n   * all namespaces are cycled through. The list will be empty if there are no mappings defined yet.\n   *\n   * @param nameSpace\n   *          the namespace to look in, or null to look in all namespaces\n   * @return a list of tables that have mappings.\n   * @throws Exception\n   *           if something goes wrong\n   */\n  public Set<String> getMappedTables( String nameSpace ) throws Exception {\n    TreeSet<String> tableNames = new TreeSet<>();\n    if ( nameSpace != null ) {\n      addMappedTables( tableNames, nameSpace );\n    } else {\n      List<String> namespaces = hBaseConnection.listNamespaces();\n      for ( String nextNamespace: namespaces ) {\n        addMappedTables( tableNames, nextNamespace );\n      }\n    }\n    return tableNames;\n  }\n\n  private void addMappedTables( Set<String> tableNames, String nameSpace ) throws Exception {\n    ByteConversionUtil byteConversionUtil = hBaseConnection.getByteConversionUtil();\n    try ( HBaseTable hBaseTable = hBaseConnection.getTable( nameSpace + \":\" + m_mappingTableName ) ) {\n      if ( hBaseTable.exists() ) {\n        ResultScannerBuilder scannerBuilder = 
hBaseTable.createScannerBuilder( null, null );\n        scannerBuilder.setCaching( 10 );\n\n        try ( ResultScanner resultScanner = scannerBuilder.build() ) {\n          Result next;\n          while ( ( next = resultScanner.next() ) != null ) {\n            byte[] rawKey = next.getRow();\n\n            // extract the table name\n            String tableName =\n              nameSpace + \":\" + HbaseUtil.parseQualifierFromTableName( byteConversionUtil.splitKey( rawKey )[ 0 ] );\n            tableNames.add( tableName );\n          }\n        }\n      }\n    }\n  }\n\n  /**\n   * Get a list of mappings for the supplied table name. List will be empty if there are no mappings defined for the\n   * table.\n   *\n   * @param tableName\n   *          the table name\n   * @return a list of mappings\n   * @throws Exception\n   *           if something goes wrong.\n   */\n  public List<String> getMappingNames( String tableName ) throws Exception {\n    tableName = HbaseUtil.expandTableName( tableName );\n    ByteConversionUtil byteConversionUtil = hBaseConnection.getByteConversionUtil();\n    List<String> mappingsForTable = new ArrayList<String>();\n    try ( HBaseTable hBaseTable = hBaseConnection.getTable( getMappingTableName( tableName ) ) ) {\n      if ( hBaseTable.exists() ) {\n        ResultScannerBuilder scannerBuilder = hBaseTable.createScannerBuilder( null, null );\n        scannerBuilder.setCaching( 10 );\n\n        try ( ResultScanner resultScanner = scannerBuilder.build() ) {\n          Result next;\n          String qualifier = HbaseUtil.parseQualifierFromTableName( tableName );\n          while ( ( next = resultScanner.next() ) != null ) {\n            byte[] rowKey = next.getRow();\n            String[] splitKey = byteConversionUtil.splitKey( rowKey );\n            String tableN = splitKey[ 0 ];\n\n            if ( qualifier.equals( tableN ) ) {\n              // extract out the mapping name\n              mappingsForTable.add( splitKey[ 1 ] );\n           
 }\n          }\n        }\n      }\n      return mappingsForTable;\n    }\n  }\n\n  /**\n   * Delete a mapping from the mapping table\n   *\n   * @param tableName\n   *          name of the table in question\n   * @param mappingName\n   *          name of the mapping in question\n   * @return true if the named mapping for the named table was deleted successfully; false if the mapping table does not\n   * exist or the named mapping for the named table does not exist in the mapping table\n   * @throws Exception\n   *           if a problem occurs during deletion\n   */\n  public boolean deleteMapping( String tableName, String mappingName ) throws Exception {\n    ByteConversionUtil byteConversionUtil = hBaseConnection.getByteConversionUtil();\n    try ( HBaseTable hBaseTable = hBaseConnection.getTable( getMappingTableName( tableName ) ) ) {\n      try ( HBaseTableWriteOperationManager hBaseTableWriteOperationManager = hBaseTable\n        .createWriteOperationManager( null ) ) {\n\n        if ( !hBaseTable.exists() ) {\n          // create the mapping table\n          createMappingTable( tableName );\n          return false; // no mapping table so nothing to delete!\n        }\n\n        if ( hBaseTable.disabled() ) {\n          hBaseTable.enable();\n        }\n\n        boolean mappingExists = mappingExists( tableName, mappingName );\n        if ( !mappingExists ) {\n          return false; // mapping doesn't seem to exist\n        }\n\n        hBaseTableWriteOperationManager.createDelete(\n          byteConversionUtil.compoundKey( HbaseUtil.parseQualifierFromTableName( tableName ), mappingName ) )\n          .execute();\n        return true;\n      }\n    }\n  }\n\n  /**\n   * Delete a mapping from the mapping table\n   *\n   * @param theMapping\n   *          the mapping to delete\n   * @return true if the mapping was deleted successfully; false if the mapping table does not exist or the supplied\n   * mapping does not exist in the mapping table\n   * @throws 
Exception\n   *           if a problem occurs during deletion\n   */\n  public boolean deleteMapping( Mapping theMapping ) throws Exception {\n    String tableName = theMapping.getTableName();\n    String mappingName = theMapping.getMappingName();\n\n    return deleteMapping( tableName, mappingName );\n  }\n\n  /**\n   * Add (put) a mapping into the mapping table for the table that the mapping refers to, optionally overwriting any\n   * existing mapping with the same name.\n   *\n   * @param theMapping\n   *          the mapping to add\n   * @param overwrite\n   *          true to overwrite an existing mapping with the same name\n   * @throws Exception\n   *           if the mapping already exists (and overwrite is false) or a problem occurs\n   */\n  public void putMapping( Mapping theMapping, boolean overwrite ) throws Exception {\n    String tableName = theMapping.getTableName();\n    String mappingName = theMapping.getMappingName();\n    Map<String, HBaseValueMetaInterface> mapping = theMapping.getMappedColumns();\n    String keyName = theMapping.getKeyName();\n    Mapping.KeyType keyType = theMapping.getKeyType();\n    boolean isTupleMapping = theMapping.isTupleMapping();\n    String tupleFamilies = theMapping.getTupleFamilies();\n\n    ByteConversionUtil byteConversionUtil = hBaseConnection.getByteConversionUtil();\n    try ( HBaseTable hBaseTable = hBaseConnection.getTable( getMappingTableName( tableName ) ) ) {\n      if ( !hBaseTable.exists() ) {\n        // create the mapping table\n        createMappingTable( tableName );\n      }\n\n      if ( hBaseTable.disabled() ) {\n        hBaseTable.enable();\n      }\n\n      boolean mappingExists = mappingExists( tableName, mappingName );\n      if ( mappingExists && !overwrite ) {\n        throw new IOException(\n          \"The mapping \\\"\" + mappingName + \"\\\" already exists \" + \"for table \\\"\" + tableName + \"\\\"\" );\n      }\n\n      if ( mappingExists ) {\n        deleteMapping( tableName, mappingName );\n      }\n\n      HBaseTableWriteOperationManager writeOperationManager = hBaseTable.createWriteOperationManager( null );\n      HBasePut hBasePut = writeOperationManager\n        .createPut( byteConversionUtil.compoundKey( HbaseUtil.parseQualifierFromTableName( tableName ), mappingName ) );\n      hBasePut.setWriteToWAL( true );\n\n      String family = COLUMNS_FAMILY_NAME;\n      Set<String> aliases = mapping.keySet();\n      for ( 
String alias : aliases ) {\n        HBaseValueMetaInterface vm = mapping.get( alias );\n        String valueType = ValueMetaInterface.typeCodes[ vm.getType() ];\n\n        // make sure that we save the correct type name so that unsigned filtering\n        // works correctly!\n        if ( vm.isInteger() && vm.getIsLongOrDouble() ) {\n          valueType = \"Long\";\n        }\n\n        if ( vm.isNumber() ) {\n          if ( vm.getIsLongOrDouble() ) {\n            valueType = \"Double\";\n          } else {\n            valueType = \"Float\";\n          }\n        }\n\n        // check for nominal/indexed\n        if ( vm.getStorageType() == ValueMetaInterface.STORAGE_TYPE_INDEXED && vm.isString() ) {\n          Object[] labels = vm.getIndex();\n          StringBuffer vals = new StringBuffer();\n          vals.append( \"{\" );\n\n          for ( int i = 0; i < labels.length; i++ ) {\n            if ( i != labels.length - 1 ) {\n              vals.append( labels[ i ].toString().trim() ).append( \",\" );\n            } else {\n              vals.append( labels[ i ].toString().trim() ).append( \"}\" );\n            }\n          }\n          valueType = vals.toString();\n        }\n\n        // add this mapped column in\n        hBasePut\n          .addColumn( family, hBasePut.createColumnName( vm.getColumnFamily(), vm.getColumnName(), alias ), false,\n            byteConversionUtil.toBytes( valueType ) );\n      }\n\n      // now do the key\n      family = KEY_FAMILY_NAME;\n      List<String> qualifier = new ArrayList<>( Collections.singletonList( keyName ) );\n\n      // indicate that this is a tuple mapping by appending SEPARATOR to the name\n      // of the key + any specified column families to extract from\n      if ( isTupleMapping ) {\n        if ( Const.isEmpty( tupleFamilies ) ) {\n          qualifier.add( \"\" );\n        } else {\n          qualifier.add( tupleFamilies );\n        }\n      }\n      String valueType = keyType.toString();\n\n      hBasePut\n  
       .addColumn( family, hBasePut.createColumnName( qualifier.toArray( new String[ qualifier.size() ] ) ), false,\n          byteConversionUtil.toBytes( valueType ) );\n\n      // add the row\n      hBasePut.execute();\n      writeOperationManager.flushCommits();\n    }\n  }\n\n  /**\n   * Returns a textual description of a mapping\n   *\n   * @param tableName\n   *          the table name\n   * @param mappingName\n   *          the mapping name\n   * @return a string describing the specified mapping on the specified table\n   * @throws Exception\n   *           if a problem occurs\n   */\n  public String describeMapping( String tableName, String mappingName ) throws Exception {\n\n    return describeMapping( getMapping( tableName, mappingName ) );\n  }\n\n  /**\n   * Returns a textual description of a mapping\n   *\n   * @param aMapping\n   *          the mapping\n   * @return a textual description of the supplied mapping object\n   * @throws IOException\n   *           if a problem occurs\n   */\n  public String describeMapping( Mapping aMapping ) throws IOException {\n\n    return aMapping.toString();\n  }\n\n  /**\n   * Get a mapping for the specified table under the specified mapping name\n   *\n   * @param tableName\n   *          the name of the table\n   * @param mappingName\n   *          the name of the mapping to get for the table\n   * @return a mapping for the supplied table\n   * @throws Exception\n   *           if a mapping by the given name does not exist for the given table\n   */\n  public Mapping getMapping( String tableName, String mappingName ) throws Exception {\n    ByteConversionUtil byteConversionUtil = hBaseConnection.getByteConversionUtil();\n    MappingFactory mappingFactory = hBaseConnection.getMappingFactory();\n    HBaseValueMetaInterfaceFactory valueMetaInterfaceFactory = hBaseConnection.getHBaseValueMetaInterfaceFactory();\n    try ( HBaseTable hBaseTable = hBaseConnection.getTable( getMappingTableName( tableName ) ) ) {\n      
if ( !hBaseTable.exists() ) {\n\n        // create the mapping table\n        createMappingTable( tableName );\n\n        throw new IOException( \"Mapping \\\"\" + tableName + \",\" + mappingName + \"\\\" does not exist!\" );\n      }\n\n      byte[] compoundKey =\n        byteConversionUtil.compoundKey( HbaseUtil.parseQualifierFromTableName( tableName ), mappingName );\n      ResultScannerBuilder scannerBuilder = hBaseTable.createScannerBuilder( compoundKey, compoundKey );\n      scannerBuilder.setCaching( 10 );\n\n      Result result;\n      try ( ResultScanner resultScanner = scannerBuilder.build() ) {\n        result = resultScanner.next();\n      }\n      if ( result == null ) {\n        throw new IOException( \"Mapping \\\"\" + tableName + \",\" + mappingName + \"\\\" does not exist!\" );\n      }\n\n      NavigableMap<byte[], byte[]> colsInKeyFamily = result.getFamilyMap( KEY_FAMILY_NAME );\n\n      Set<byte[]> keyCols = colsInKeyFamily.keySet();\n      // should only be one key defined!!\n      if ( keyCols.size() != 1 ) {\n        throw new IOException( \"Mapping \\\"\" + tableName + \",\" + mappingName + \"\\\" has more than one key defined!\" );\n      }\n\n      byte[] keyNameB = keyCols.iterator().next();\n      String decodedKeyName = byteConversionUtil.toString( keyNameB );\n      byte[] keyTypeB = colsInKeyFamily.get( keyNameB );\n      String decodedKeyType = byteConversionUtil.toString( keyTypeB );\n      Mapping.KeyType keyType = null;\n\n      for ( Mapping.KeyType t : Mapping.KeyType.values() ) {\n        if ( decodedKeyType.equalsIgnoreCase( t.toString() ) ) {\n          keyType = t;\n          break;\n        }\n      }\n\n      if ( keyType == null ) {\n        throw new IOException( \"Unrecognized type for the key column in \\\"\" + byteConversionUtil.toString( compoundKey ) + \"\\\"\" );\n      }\n\n      String tupleFamilies = \"\";\n      boolean isTupleMapping = false;\n      if ( decodedKeyName.indexOf( ',' ) > 0 ) {\n\n        isTupleMapping = true;\n\n        if ( decodedKeyName.indexOf( ',' ) != 
decodedKeyName.length() - 1 ) {\n          tupleFamilies = decodedKeyName.substring( decodedKeyName.indexOf( ',' ) + 1, decodedKeyName.length() );\n        }\n        decodedKeyName = decodedKeyName.substring( 0, decodedKeyName.indexOf( ',' ) );\n      }\n\n      Mapping resultMapping = mappingFactory.createMapping( tableName, mappingName, decodedKeyName, keyType );\n      resultMapping.setTupleMapping( isTupleMapping );\n      if ( !Const.isEmpty( tupleFamilies ) ) {\n        resultMapping.setTupleFamilies( tupleFamilies );\n      }\n\n      Map<String, HBaseValueMetaInterface> resultCols = new TreeMap<String, HBaseValueMetaInterface>();\n\n      // now process the mapping\n      NavigableMap<byte[], byte[]> colsInMapping = result.getFamilyMap( COLUMNS_FAMILY_NAME );\n\n      Set<byte[]> colNames = colsInMapping.keySet();\n\n      for ( byte[] b : colNames ) {\n        String decodedName = byteConversionUtil.toString( b );\n        byte[] c = colsInMapping.get( b );\n        if ( c == null ) {\n          throw new IOException( \"No type declaration for column \\\"\" + decodedName + \"\\\"\" );\n        }\n\n        String decodedType = byteConversionUtil.toString( c );\n\n        HBaseValueMetaInterface newMeta = null;\n        if ( decodedType.equalsIgnoreCase( \"Float\" ) ) {\n          newMeta = valueMetaInterfaceFactory\n            .createHBaseValueMetaInterface( decodedName, ValueMetaInterface.TYPE_NUMBER, -1, -1 );\n\n          // While passing through Kettle this will be represented\n          // as a double\n          newMeta.setIsLongOrDouble( false );\n        } else if ( decodedType.equalsIgnoreCase( \"Double\" ) ) {\n          newMeta = valueMetaInterfaceFactory\n            .createHBaseValueMetaInterface( decodedName, ValueMetaInterface.TYPE_NUMBER, -1, -1 );\n        } else if ( decodedType.equalsIgnoreCase( \"String\" ) ) {\n          newMeta = valueMetaInterfaceFactory\n            .createHBaseValueMetaInterface( decodedName, 
ValueMetaInterface.TYPE_STRING, -1, -1 );\n        } else if ( decodedType.toLowerCase().startsWith( \"date\" ) ) {\n          newMeta = valueMetaInterfaceFactory\n            .createHBaseValueMetaInterface( decodedName, ValueMetaInterface.TYPE_DATE, -1, -1 );\n        } else if ( decodedType.equalsIgnoreCase( \"Boolean\" ) ) {\n          newMeta = valueMetaInterfaceFactory\n            .createHBaseValueMetaInterface( decodedName, ValueMetaInterface.TYPE_BOOLEAN, -1, -1 );\n        } else if ( decodedType.equalsIgnoreCase( \"Integer\" ) ) {\n          newMeta = valueMetaInterfaceFactory\n            .createHBaseValueMetaInterface( decodedName, ValueMetaInterface.TYPE_INTEGER, -1, -1 );\n\n          // Integer in the mapping is really an integer (not a long\n          // as Kettle uses internally)\n          newMeta.setIsLongOrDouble( false );\n        } else if ( decodedType.equalsIgnoreCase( \"Long\" ) ) {\n          newMeta = valueMetaInterfaceFactory\n            .createHBaseValueMetaInterface( decodedName, ValueMetaInterface.TYPE_INTEGER, -1, -1 );\n        } else if ( decodedType.equalsIgnoreCase( \"BigNumber\" ) ) {\n          newMeta = valueMetaInterfaceFactory\n            .createHBaseValueMetaInterface( decodedName, ValueMetaInterface.TYPE_BIGNUMBER, -1, -1 );\n        } else if ( decodedType.equalsIgnoreCase( \"Serializable\" ) ) {\n          newMeta = valueMetaInterfaceFactory\n            .createHBaseValueMetaInterface( decodedName, ValueMetaInterface.TYPE_SERIALIZABLE, -1, -1 );\n        } else if ( decodedType.equalsIgnoreCase( \"Binary\" ) ) {\n          newMeta = valueMetaInterfaceFactory\n            .createHBaseValueMetaInterface( decodedName, ValueMetaInterface.TYPE_BINARY, -1, -1 );\n        } else if ( decodedType.startsWith( \"{\" ) && decodedType.endsWith( \"}\" ) ) {\n          newMeta = valueMetaInterfaceFactory\n            .createHBaseValueMetaInterface( decodedName, ValueMetaInterface.TYPE_STRING, -1, -1 );\n\n          Object[] labels = 
null;\n          try {\n            labels = byteConversionUtil.stringIndexListToObjects( decodedType );\n          } catch ( IllegalArgumentException ex ) {\n            throw new IOException( \"Indexed/nominal type must have at least one \" + \"label declared\" );\n          }\n          newMeta.setIndex( labels );\n          newMeta.setStorageType( ValueMetaInterface.STORAGE_TYPE_INDEXED );\n        } else {\n          throw new IOException( \"Unknown column type : \\\"\" + decodedType + \"\\\"\" );\n        }\n\n        newMeta.setTableName( tableName );\n        newMeta.setMappingName( mappingName );\n        // check that this one doesn't have the same name as the key!\n        String alias = newMeta.getAlias();\n        if ( !Mapping.TupleMapping.KEY.toString().equalsIgnoreCase( alias ) ) {\n          if ( resultMapping.getKeyName().equals( alias ) ) {\n            throw new IOException( \"Error in mapping. Column \\\"\" + newMeta.getAlias()\n              + \"\\\" has the same name as the table key (\" + resultMapping.getKeyName() + \")\" );\n          } else {\n            resultCols.put( newMeta.getAlias(), newMeta );\n          }\n        }\n      }\n\n      resultMapping.setMappedColumns( resultCols );\n      return resultMapping;\n    }\n  }\n\n  @Override public void close() throws IOException {\n    hBaseConnection.close();\n  }\n\n  public HBaseConnection getConnection() {\n    return hBaseConnection;\n  }\n\n  public static String getTableNameFromVariable( BaseStepMeta stepMeta, String mappedTableName ) {\n    TransMeta parentTransMeta = stepMeta.getParentStepMeta().getParentTransMeta();\n    return parentTransMeta.environmentSubstitute( mappedTableName );\n  }\n\n  private String getMappingTableName( String tableName ) {\n    return HbaseUtil.expandTableName( HbaseUtil.parseNamespaceFromTableName( tableName ), m_mappingTableName );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/mapping/MappingEditor.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.mapping;\n\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.HashSet;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.TreeSet;\n\nimport org.eclipse.jface.dialogs.MessageDialog;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.CCombo;\nimport org.eclipse.swt.events.DisposeEvent;\nimport org.eclipse.swt.events.DisposeListener;\nimport org.eclipse.swt.events.FocusEvent;\nimport org.eclipse.swt.events.FocusListener;\nimport org.eclipse.swt.events.KeyAdapter;\nimport org.eclipse.swt.events.KeyEvent;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.graphics.Cursor;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.Table;\nimport org.eclipse.swt.widgets.TableItem;\nimport org.pentaho.big.data.kettle.plugins.hbase.HbaseUtil;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport 
org.pentaho.big.data.kettle.plugins.hbase.HBaseConnectionException;\nimport org.pentaho.big.data.kettle.plugins.hbase.input.HBaseInput;\nimport org.pentaho.big.data.kettle.plugins.hbase.input.Messages;\nimport org.pentaho.big.data.plugins.common.ui.NamedClusterWidgetImpl;\nimport org.pentaho.hadoop.shim.api.hbase.ByteConversionUtil;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseConnection;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterfaceFactory;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBaseTable;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.core.widget.ColumnInfo;\nimport org.pentaho.di.ui.core.widget.ComboValuesSelectionListener;\nimport org.pentaho.di.ui.core.widget.TableView;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\n/**\n * A re-usable composite for creating and editing table mappings for HBase. Also has the (optional) ability to create\n * the table if the table for which the mapping is being created does not exist. When creating a new table, the name\n * supplied may be optionally suffixed with some parameters for compression and bloom filter type. If no parameters are\n * supplied then the HBase defaults of no compression and no bloom filter(s) are used. 
The table name may be suffixed\n * with @[NONE | GZ | LZO][@[NONE | ROW | ROWCOL]] for compression and bloom filter type respectively. Note that LZO\n * compression requires LZO libraries to be installed on the HBase nodes.\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n */\npublic class MappingEditor extends Composite implements ConfigurationProducer {\n\n  private static final Class<?> PKG = MappingEditor.class;\n\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n  protected Shell m_shell;\n  protected Composite m_parent;\n\n  protected boolean m_allowTableCreate;\n\n  protected NamedClusterWidgetImpl namedClusterWidget;\n\n  // table name line\n  protected CCombo m_existingTableNamesCombo;\n  protected Button m_getTableNames;\n  protected boolean m_familiesInvalidated;\n\n  // mapping name line\n  protected CCombo m_existingMappingNamesCombo;\n\n  // fields view\n  protected TableView m_fieldsView;\n  protected ColumnInfo m_keyCI;\n  protected ColumnInfo m_familyCI;\n  protected ColumnInfo m_typeCI;\n\n  protected Button m_saveBut;\n  protected Button m_deleteBut;\n\n  protected Button m_getFieldsBut;\n\n  protected Button m_keyValueTupleBut;\n\n  protected MappingAdmin m_admin;\n\n  protected ConfigurationProducer m_configProducer;\n  protected FieldProducer m_incomingFieldsProducer;\n\n  /**\n   * default family name to use when creating a new table using incoming fields\n   */\n  protected static final String DEFAULT_FAMILY = \"Family1\";\n\n  protected String m_currentConfiguration = \"\";\n  protected boolean m_connectionProblem;\n\n  protected TransMeta m_transMeta;\n\n  public MappingEditor( Shell shell, Composite parent, ConfigurationProducer configProducer,\n                        FieldProducer fieldProducer, int tableViewStyle, boolean allowTableCreate, PropsUI props,\n                        TransMeta transMeta, NamedClusterService namedClusterService,\n                        RuntimeTestActionService 
runtimeTestActionService, RuntimeTester runtimeTester,
                        NamedClusterServiceLocator namedClusterServiceLocator ) {
    super( parent, SWT.NONE );
    this.namedClusterServiceLocator = namedClusterServiceLocator;
    m_shell = shell;
    m_parent = parent;
    m_transMeta = transMeta;
    boolean showConnectWidgets = false;
    m_configProducer = configProducer;
    if ( m_configProducer != null ) {
      m_currentConfiguration = m_configProducer.getCurrentConfiguration();
    } else {
      showConnectWidgets = true;
      m_configProducer = this;
    }

    m_incomingFieldsProducer = fieldProducer;

    m_allowTableCreate = allowTableCreate;
    int middle = props.getMiddlePct();
    int margin = Const.MARGIN;

    FormLayout controlLayout = new FormLayout();

    controlLayout.marginWidth = 3;
    controlLayout.marginHeight = 3;

    setLayout( controlLayout );
    props.setLook( this );

    if ( showConnectWidgets ) {
      Label namedClusterLabel = new Label( this, SWT.RIGHT );
      namedClusterLabel.setText( Messages.getString( "MappingDialog.NamedCluster.Label" ) );
      props.setLook( namedClusterLabel );
      FormData fd = new FormData();
      fd.left = new FormAttachment( 0, 0 );
      fd.top = new FormAttachment( 0, 10 );
      fd.right = new FormAttachment( middle, -margin );
      namedClusterLabel.setLayoutData( fd );

      namedClusterWidget =
        new NamedClusterWidgetImpl( this, false, namedClusterService, runtimeTestActionService, runtimeTester, false );
      namedClusterWidget.initiate();
      props.setLook( namedClusterWidget );
      fd = new FormData();
      fd.left = new FormAttachment( middle, 0 );
      fd.top = new FormAttachment( 0, margin );
      fd.right = new FormAttachment( 100, 0 );
      namedClusterWidget.setLayoutData( fd );

      m_currentConfiguration = m_configProducer.getCurrentConfiguration();
    }

    parent.addDisposeListener(
      new DisposeListener() {
        @Override
        public void widgetDisposed( DisposeEvent de ) {
          try {
            resetConnection();
          } catch ( Exception e ) {
            // we have to swallow it.
          }
        }
      } );

    // table names
    Label tableNameLab = new Label( this, SWT.RIGHT );
    tableNameLab.setText( Messages.getString( "MappingDialog.TableName.Label" ) );
    props.setLook( tableNameLab );
    FormData fd = new FormData();
    fd.left = new FormAttachment( 0, 0 );
    if ( showConnectWidgets ) {
      fd.top = new FormAttachment( namedClusterWidget, margin );
    } else {
      fd.top = new FormAttachment( 0, margin );
    }
    fd.right = new FormAttachment( middle, -margin );
    tableNameLab.setLayoutData( fd );

    m_getTableNames = new Button( this, SWT.PUSH | SWT.CENTER );
    props.setLook( m_getTableNames );
    m_getTableNames.setText( Messages.getString( "MappingDialog.TableName.GetTableNames" ) );
    fd = new FormData();
    fd.right = new FormAttachment( 100, 0 );
    if ( showConnectWidgets ) {
      fd.top = new FormAttachment( namedClusterWidget, 0 );
    } else {
      fd.top = new FormAttachment( 0, 0 );
    }
    m_getTableNames.setLayoutData( fd );

    m_getTableNames.addSelectionListener( new SelectionAdapter() {
      @Override
      public void widgetSelected( SelectionEvent e ) {
        populateTableCombo( true );
        if ( m_existingTableNamesCombo.getItemCount() > 0 ) {
          m_existingTableNamesCombo.setListVisible( true );
        }
      }
    } );

    m_existingTableNamesCombo = new CCombo( this, SWT.BORDER );
    props.setLook( m_existingTableNamesCombo );
    fd = new FormData();
    fd.left = new FormAttachment( middle, 0 );
    fd.right = new FormAttachment( m_getTableNames, -margin );
    if ( showConnectWidgets ) {
      fd.top = new FormAttachment( namedClusterWidget, margin );
    } else {
      fd.top =
        new FormAttachment( 0, margin );
    }
    m_existingTableNamesCombo.setLayoutData( fd );

    // Must be editable to change the namespace once populated (see HBase row decoder). If m_allowTableCreate is false
    // then saving the map is disabled, so it is not important what text exists here.
    m_existingTableNamesCombo.setEditable( true );

    // mapping names
    Label mappingNameLab = new Label( this, SWT.RIGHT );
    mappingNameLab.setText( Messages.getString( "MappingDialog.MappingName.Label" ) );
    props.setLook( mappingNameLab );
    fd = new FormData();
    fd.left = new FormAttachment( 0, 0 );
    fd.top = new FormAttachment( m_getTableNames, margin );
    fd.right = new FormAttachment( middle, -margin );
    mappingNameLab.setLayoutData( fd );

    m_existingMappingNamesCombo = new CCombo( this, SWT.BORDER );
    props.setLook( m_existingMappingNamesCombo );
    fd = new FormData();
    fd.left = new FormAttachment( middle, 0 );
    fd.top = new FormAttachment( m_getTableNames, margin );
    fd.right = new FormAttachment( 100, 0 );
    m_existingMappingNamesCombo.setLayoutData( fd );

    m_existingTableNamesCombo.addSelectionListener( new SelectionAdapter() {
      @Override
      public void widgetSelected( SelectionEvent e ) {
        m_familiesInvalidated = true;
        populateMappingComboAndFamilyStuff();
      }

      @Override
      public void widgetDefaultSelected( SelectionEvent e ) {
        m_familiesInvalidated = true;
        populateMappingComboAndFamilyStuff();
      }
    } );

    m_existingTableNamesCombo.addKeyListener( new KeyAdapter() {
      @Override
      public void keyPressed( KeyEvent e ) {
        m_familiesInvalidated = true;
      }
    } );

    m_existingTableNamesCombo.addFocusListener( new FocusListener() {
      public void focusGained( FocusEvent e ) {
        // populateTableCombo(false);
      }

      public void focusLost( FocusEvent e ) {
        m_familiesInvalidated = true;
        populateMappingComboAndFamilyStuff();
      }
    } );

    m_existingMappingNamesCombo.addSelectionListener( new SelectionAdapter() {
      @Override
      public void widgetSelected( SelectionEvent e ) {
        loadTableViewFromMapping();
      }

      @Override
      public void widgetDefaultSelected( SelectionEvent e ) {
        loadTableViewFromMapping();
      }
    } );

    // fields
    ColumnInfo[] colinf =
      new ColumnInfo[] {
        new ColumnInfo( Messages.getString( "HBaseInputDialog.Fields.FIELD_ALIAS" ), ColumnInfo.COLUMN_TYPE_TEXT,
          false ),
        new ColumnInfo( Messages.getString( "HBaseInputDialog.Fields.FIELD_KEY" ), ColumnInfo.COLUMN_TYPE_CCOMBO,
          false ),
        new ColumnInfo( Messages.getString( "HBaseInputDialog.Fields.FIELD_FAMILY" ), ColumnInfo.COLUMN_TYPE_CCOMBO,
          false ),
        new ColumnInfo( Messages.getString( "HBaseInputDialog.Fields.FIELD_NAME" ), ColumnInfo.COLUMN_TYPE_TEXT,
          false ),
        new ColumnInfo( Messages.getString( "HBaseInputDialog.Fields.FIELD_TYPE" ), ColumnInfo.COLUMN_TYPE_CCOMBO,
          false ),
        new ColumnInfo( Messages.getString( "HBaseInputDialog.Fields.FIELD_INDEXED" ), ColumnInfo.COLUMN_TYPE_TEXT,
          false ), };

    m_keyCI = colinf[ 1 ];
    m_keyCI.setComboValues( new String[] { "N", "Y" } );
    m_familyCI = colinf[ 2 ];
    m_familyCI.setComboValues( new String[] { "" } );
    m_typeCI = colinf[ 4 ];
    // default types for non-key fields
    m_typeCI.setComboValues( new String[] { "String", "Integer", "Long", "Float", "Double", "Date", "BigNumber",
      "Serializable", "Binary" } );

    m_keyCI.setComboValuesSelectionListener( new ComboValuesSelectionListener() {
      public String[] getComboValues( TableItem tableItem, int rowNr, int colNr ) {

        tableItem.setText( 5, "" );
        return m_keyCI.getComboValues();
      }
    } );

    m_typeCI.setComboValuesSelectionListener( new ComboValuesSelectionListener() {
      public String[] getComboValues( TableItem tableItem, int rowNr, int colNr ) {
        String[] comboValues = null;

        String keyOrNot = tableItem.getText( 2 );
        if ( Utils.isEmpty( keyOrNot ) || keyOrNot.equalsIgnoreCase( "N" ) ) {
          comboValues =
            new String[] { "String", "Integer", "Long", "Float", "Double", "Boolean", "Date", "BigNumber",
              "Serializable", "Binary" };
        } else {
          comboValues =
            new String[] { "String", "Integer", "UnsignedInteger", "Long", "UnsignedLong", "Date", "UnsignedDate",
              "Binary" };
        }

        return comboValues;
      }
    } );

    m_saveBut = new Button( this, SWT.PUSH | SWT.CENTER );
    props.setLook( m_saveBut );
    m_saveBut.setText( Messages.getString( "MappingDialog.SaveMapping" ) );
    m_saveBut.setToolTipText( Messages.getString( "MappingDialog.SaveMapping.TipText" ) );
    fd = new FormData();
    fd.left = new FormAttachment( 0, margin );
    fd.bottom = new FormAttachment( 100, -margin * 2 );
    m_saveBut.setLayoutData( fd );

    m_saveBut.addSelectionListener( new SelectionAdapter() {
      @Override
      public void widgetSelected( SelectionEvent e ) {
        saveMapping();
      }
    } );

    m_deleteBut = new Button( this, SWT.PUSH | SWT.CENTER );
    props.setLook( m_deleteBut );
    m_deleteBut.setText( Messages.getString( "MappingDialog.DeleteMapping" ) );
    fd = new FormData();
    fd.left = new FormAttachment( m_saveBut, margin );
    fd.bottom = new FormAttachment( 100, -margin * 2 );
    m_deleteBut.setLayoutData( fd );
    m_deleteBut.addSelectionListener( new SelectionAdapter() {
      @Override
      public void widgetSelected( SelectionEvent e ) {
        deleteMapping();
      }
    } );

    m_keyValueTupleBut = new Button( this, SWT.PUSH | SWT.CENTER );
    props.setLook( m_keyValueTupleBut );
    m_keyValueTupleBut.setText( Messages.getString( "MappingDialog.KeyValueTemplate" ) );
    m_keyValueTupleBut.setToolTipText( Messages.getString( "MappingDialog.KeyValueTemplate.TipText" ) );
    fd = new FormData();
    fd.right = new FormAttachment( 100, 0 );
    fd.bottom = new FormAttachment( 100, -margin * 2 );
    m_keyValueTupleBut.setLayoutData( fd );

    m_keyValueTupleBut.addSelectionListener( new SelectionAdapter() {
      @Override
      public void widgetSelected( SelectionEvent e ) {
        populateTableWithTupleTemplate( m_allowTableCreate );
      }
    } );

    if ( m_allowTableCreate ) {

      m_getFieldsBut = new Button( this, SWT.PUSH | SWT.CENTER );
      props.setLook( m_getFieldsBut );
      m_getFieldsBut.setText( Messages.getString( "MappingDialog.GetIncomingFields" ) );
      fd = new FormData();
      fd.right = new FormAttachment( m_keyValueTupleBut, -margin );
      fd.bottom = new FormAttachment( 100, -margin * 2 );
      m_getFieldsBut.setLayoutData( fd );

      m_getFieldsBut.addSelectionListener( new SelectionAdapter() {
        @Override
        public void widgetSelected( SelectionEvent e ) {
          populateTableWithIncomingFields();
        }
      } );

    }

    m_fieldsView = new TableView( transMeta, this, tableViewStyle, colinf, 1, null, props );

    fd = new FormData();
    fd.top = new FormAttachment( m_existingMappingNamesCombo, margin * 2 );
    fd.bottom = new FormAttachment( m_saveBut, -margin * 2 );
    fd.left = new FormAttachment( 0, 0 );
    fd.right = new FormAttachment( 100, 0 );
    m_fieldsView.setLayoutData( fd );
  }

  private void populateTableWithTupleTemplate( boolean fromOutputStep ) {
    Table table = m_fieldsView.table;

    Set<String> existingRowAliases = new HashSet<String>();
    for ( int i = 0; i <
        table.getItemCount(); i++ ) {
      TableItem tableItem = table.getItem( i );
      String alias = tableItem.getText( 1 );
      if ( !Utils.isEmpty( alias ) ) {
        existingRowAliases.add( alias );
      }
    }

    int choice = 0;
    if ( existingRowAliases.size() > 0 ) {
      // Ask what we should do with existing mapping data
      MessageDialog md =
        new MessageDialog( m_shell, Messages.getString( "MappingDialog.GetFieldsChoice.Title" ), null, Messages
          .getString( "MappingDialog.GetFieldsChoice.Message", "" + existingRowAliases.size(), "" + ( fromOutputStep
            ? /* 6 */ 5 : 5 ) ), MessageDialog.WARNING, new String[] { Messages.getString(
          "MappingOutputDialog.ClearAndAdd" ), Messages.getString( "MappingOutputDialog.Cancel" ), }, 0 );
      MessageDialog.setDefaultImage( GUIResource.getInstance().getImageSpoon() );
      int idx = md.open();
      choice = idx & 0xFF;
    }

    if ( choice == 1 || choice == 255 /* 255 = escape pressed */ ) {
      return; // Cancel
    }

    m_fieldsView.clearAll();
    TableItem item = new TableItem( table, SWT.NONE );
    item.setText( 1, "KEY" );
    item.setText( 2, "Y" );
    item = new TableItem( table, SWT.NONE );
    item.setText( 1, "Family" );
    item.setText( 2, "N" );
    item.setText( 5, "String" );
    item = new TableItem( table, SWT.NONE );
    item.setText( 1, "Column" );
    item.setText( 2, "N" );
    item = new TableItem( table, SWT.NONE );
    item.setText( 1, "Value" );
    item.setText( 2, "N" );
    item = new TableItem( table, SWT.NONE );
    item.setText( 1, "Timestamp" );
    item.setText( 2, "N" );
    item.setText( 5, "Long" );

    /*
     * Disabled from GUI for now, since visibility/ACL processing
     * requires an additional co-processor on HBase
     *
    if ( fromOutputStep ) {
      item = new TableItem( table, SWT.NONE );
      item.setText( 1, "Visibility" );
      item.setText( 2, "N" );
      item.setText( 5, "String" );
    }
    */

    m_fieldsView.removeEmptyRows();
    m_fieldsView.setRowNums();
    m_fieldsView.optWidth( true );
  }

  private void populateTableWithIncomingFields() {
    if ( m_incomingFieldsProducer != null ) {
      RowMetaInterface incomingRowMeta = m_incomingFieldsProducer.getIncomingFields();

      Table table = m_fieldsView.table;
      if ( incomingRowMeta != null ) {
        Set<String> existingRowAliases = new HashSet<String>();
        for ( int i = 0; i < table.getItemCount(); i++ ) {
          TableItem tableItem = table.getItem( i );
          String alias = tableItem.getText( 1 );
          if ( !Utils.isEmpty( alias ) ) {
            existingRowAliases.add( alias );
          }
        }

        int choice = 0;
        if ( existingRowAliases.size() > 0 ) {
          // Ask what we should do with existing mapping data
          MessageDialog md =
            new MessageDialog( m_shell, Messages.getString( "MappingDialog.GetFieldsChoice.Title" ), null, Messages
              .getString( "MappingDialog.GetFieldsChoice.Message", "" + existingRowAliases.size(), ""
                + incomingRowMeta.size() ), MessageDialog.WARNING, new String[] { Messages.getString(
              "MappingDialog.AddNew" ), Messages.getString( "MappingOutputDialog.Add" ), Messages.getString(
              "MappingOutputDialog.ClearAndAdd" ), Messages.getString(
              "MappingOutputDialog.Cancel" ), }, 0 );
          MessageDialog.setDefaultImage( GUIResource.getInstance().getImageSpoon() );
          int idx = md.open();
          choice = idx & 0xFF;
        }

        if ( choice == 3 || choice == 255 /* 255 = escape pressed */ ) {
          return; // Cancel
        }

        if ( choice == 2 ) {
          m_fieldsView.clearAll();
        }

        ByteConversionUtil byteConversionUtil = null;
        try {
          byteConversionUtil = m_configProducer.getHBaseService().getByteConversionUtil();
        } catch ( Exception e ) {
          throw new RuntimeException( e );
        }
        for ( int i = 0; i < incomingRowMeta.size(); i++ ) {
          ValueMetaInterface vm = incomingRowMeta.getValueMeta( i );
          boolean addIt = true;

          if ( choice == 0 ) {
            // only add if it's not already in the table
            if ( existingRowAliases.contains( vm.getName() ) ) {
              addIt = false;
            }
          }

          if ( addIt ) {
            TableItem item = new TableItem( m_fieldsView.table, SWT.NONE );
            item.setText( 1, vm.getName() );
            item.setText( 2, "N" );

            if ( m_familyCI.getComboValues()[ 0 ].length() > 0 ) {
              // use the existing first column family name as the default
              item.setText( 3, m_familyCI.getComboValues()[ 0 ] );
            } else {
              // default
              item.setText( 3, DEFAULT_FAMILY );
            }

            item.setText( 4, vm.getName() );
            item.setText( 5, vm.getTypeDesc() );
            if ( vm.getType() == ValueMetaInterface.TYPE_INTEGER ) {
              item.setText( 5, "Long" );
            }
            if ( vm.getType() == ValueMetaInterface.TYPE_NUMBER ) {
              item.setText( 5, "Double" );
            }
            if ( vm.getStorageType() == ValueMetaInterface.STORAGE_TYPE_INDEXED ) {
              Object[] indexValus = vm.getIndex();
              String indexValsS = byteConversionUtil.objectIndexValuesToString( indexValus );
              item.setText( 6, indexValsS );
            }
          }
        }

        m_fieldsView.removeEmptyRows();
        m_fieldsView.setRowNums();
        m_fieldsView.optWidth( true );
      }
    }
  }

  private void populateTableCombo( boolean force ) {
    if ( namedClusterWidget != null &&
        namedClusterWidget.getSelectedNamedCluster() == null ) {
      MessageDialog.openError( m_shell, BaseMessages.getString( PKG,
        "MappingDialog.Error.Title.NamedClusterNotSelected" ), BaseMessages.getString( PKG,
        "MappingDialog.Error.Message.NamedClusterNotSelected.Msg" ) );
      return;
    }

    if ( m_configProducer == null ) {
      return;
    }

    if ( m_connectionProblem ) {
      if ( !m_currentConfiguration.equals( m_configProducer.getCurrentConfiguration() ) ) {
        // try again - perhaps the user has corrected connection information
        m_connectionProblem = false;
        m_currentConfiguration = m_configProducer.getCurrentConfiguration();
      }
    }

    if ( ( m_existingTableNamesCombo.getItemCount() == 0 || force ) && !m_connectionProblem ) {
      String existingName = m_existingTableNamesCombo.getText();
      String namespace = HbaseUtil.parseNamespaceFromTableName( existingName, null );

      m_existingTableNamesCombo.removeAll();
      Cursor busy = new Cursor( this.getDisplay(), SWT.CURSOR_WAIT );
      try {
        this.setCursor( busy );
        resetConnection();
        m_admin = MappingUtils.getMappingAdmin( m_configProducer );

        TreeSet<String> tableNames = new TreeSet<>();
        if ( namespace != null ) {
          addTables( tableNames, namespace );
        } else {
          List<String> namespaces = m_admin.getConnection().listNamespaces();
          for ( String nextNamespace : namespaces ) {
            addTables( tableNames, nextNamespace );
          }
        }

        for ( String currentTableName : tableNames ) {
          m_existingTableNamesCombo.add( currentTableName );
        }
        // restore any previous value
        if ( !Utils.isEmpty( existingName ) ) {
          m_existingTableNamesCombo.setText( existingName );
        }
      } catch ( Exception e ) {
        m_connectionProblem = true;
        showConnectionErrorDialog( e );
      } finally {
        this.setCursor( null );
        busy.dispose();
      }
    }
  }

  private void addTables( Set<String> tableNames, String namespace ) throws Exception {
    HBaseConnection hBaseConnection = m_admin.getConnection();
    List<String> tables = hBaseConnection.listTableNamesByNamespace( namespace );
    for ( String currentTableName : tables ) {
      tableNames.add( HbaseUtil.expandTableName( currentTableName ) );
    }
  }

  private void resetConnection() throws IOException {
    if ( m_admin != null ) {
      m_admin.close();
    }
    m_admin = null;
  }

  private boolean notInitializedMappingAdmin() {
    return m_admin == null;
  }

  private void showConnectionErrorDialog( Exception ex ) {
    new ErrorDialog( m_shell, Messages.getString( "MappingDialog.Error.Title.UnableToConnect" ), Messages.getString(
      "MappingDialog.Error.Message.UnableToConnect" ) + "\n\n", ex );
  }

  private void deleteMapping() {
    if ( namedClusterWidget != null && namedClusterWidget.getSelectedNamedCluster() == null ) {
      MessageDialog.openError( m_shell, BaseMessages.getString( PKG,
        "MappingDialog.Error.Title.NamedClusterNotSelected" ), BaseMessages.getString( PKG,
        "MappingDialog.Error.Message.NamedClusterNotSelected.Msg" ) );
      return;
    }
    String tableName = "";
    if ( !Utils.isEmpty( m_existingTableNamesCombo.getText().trim() ) ) {
      tableName = m_existingTableNamesCombo.getText().trim();

      if ( tableName.indexOf( '@' ) > 0 ) {
        tableName = tableName.substring( 0, tableName.indexOf( '@' ) );
      }
    }
    if ( Utils.isEmpty( tableName ) || Utils.isEmpty( m_existingMappingNamesCombo.getText().trim() ) ) {
      MessageDialog.openError( m_shell, Messages.getString( "MappingDialog.Error.Title.MissingTableMappingName" ),
        Messages.getString( "MappingDialog.Error.Message.MissingTableMappingName" ) );
      return;
    }

    try {
      boolean ok =
        MessageDialog.openConfirm( m_shell, Messages.getString( "MappingDialog.Info.Title.ConfirmDelete" ), Messages
          .getString( "MappingDialog.Info.Message.ConfirmDelete", m_existingMappingNamesCombo.getText().trim(),
            tableName ) );

      if ( ok ) {
        if ( notInitializedMappingAdmin() ) {
          try {
            m_admin = MappingUtils.getMappingAdmin( m_configProducer );
          } catch ( HBaseConnectionException e ) {
            showConnectionErrorDialog( e );
            return;
          }
        }

        boolean result =
          m_admin.deleteMapping( m_existingTableNamesCombo.getText().trim(), m_existingMappingNamesCombo.getText()
            .trim() );
        if ( result ) {
          MessageDialog.openConfirm( m_shell, Messages.getString( "MappingDialog.Info.Title.MappingDeleted" ), Messages
            .getString( "MappingDialog.Info.Message.MappingDeleted", m_existingMappingNamesCombo.getText().trim(),
              tableName ) );

          // make sure that the list of mappings for the selected table gets
          // updated.
          populateMappingComboAndFamilyStuff();
        } else {
          MessageDialog.openError( m_shell, Messages.getString( "MappingDialog.Error.Title.DeleteMapping" ), Messages
            .getString( "MappingDialog.Error.Message.DeleteMapping", m_existingMappingNamesCombo.getText().trim(),
              tableName ) );
        }
      }
      return;
    } catch ( Exception ex ) {
      MessageDialog.openError( m_shell, Messages.getString( "MappingDialog.Error.Title.DeleteMapping" ), Messages
        .getString( "MappingDialog.Error.Message.DeleteMappingIO", m_existingMappingNamesCombo.getText().trim(),
          tableName, ex.getMessage() ) );
    }
  }

  public Mapping getMapping( boolean performChecksAndShowGUIErrorDialog, List<String> problems ) throws Exception {
    return getMapping(
      performChecksAndShowGUIErrorDialog, problems, false );
  }

  /**
   * Parameter includeKeyToColumns should be true only if the key needs to be included in mapColumns and mapAliases.
   */
  public Mapping getMapping( boolean performChecksAndShowGUIErrorDialog, List<String> problems,
                             Boolean includeKeyToColumns ) {
    String tableName = "";
    if ( !Utils.isEmpty( m_existingTableNamesCombo.getText().trim() ) ) {
      tableName = m_existingTableNamesCombo.getText().trim();

      if ( tableName.indexOf( '@' ) > 0 ) {
        tableName = tableName.substring( 0, tableName.indexOf( '@' ) );
      }
    }

    // an empty table name or mapping name does not force an abort
    if ( performChecksAndShowGUIErrorDialog && ( Utils.isEmpty( m_existingMappingNamesCombo.getText().trim() )
      || Utils.isEmpty( tableName ) ) ) {
      MessageDialog.openError( m_shell, Messages.getString( "MappingDialog.Error.Title.MissingTableMappingName" ),
        Messages.getString( "MappingDialog.Error.Message.MissingTableMappingName" ) );
      if ( problems != null ) {
        problems.add( Messages.getString( "MappingDialog.Error.Message.MissingTableMappingName" ) );
      }
      return null;
    }

    // do we have any non-empty rows in the table?
    if ( m_fieldsView.nrNonEmpty() == 0 && performChecksAndShowGUIErrorDialog ) {
      MessageDialog.openError( m_shell, Messages.getString( "MappingDialog.Error.Title.NoFieldsDefined" ), Messages
        .getString( "MappingDialog.Error.Message.NoFieldsDefined" ) );
      if ( problems != null ) {
        problems.add( Messages.getString( "MappingDialog.Error.Message.NoFieldsDefined" ) );
      }
      return null;
    }
    // do we have a key defined in the table?
    HBaseService hBaseService = null;
    try {
      hBaseService = m_configProducer.getHBaseService();
      if ( hBaseService == null ) { // Backlog-32244; if we don't have a service, don't bother trying to map it
        return null;
      }
    } catch ( Exception e ) {
      if ( problems != null ) {
        problems.add( e.getMessage() );
      }
      return null;
    }
    Mapping theMapping =
      hBaseService.getMappingFactory().createMapping( tableName, m_existingMappingNamesCombo.getText().trim() );
    boolean keyDefined = false;
    boolean moreThanOneKey = false;
    List<String> missingFamilies = new ArrayList<String>();
    List<String> missingColumnNames = new ArrayList<String>();
    List<String> missingTypes = new ArrayList<String>();

    int nrNonEmpty = m_fieldsView.nrNonEmpty();

    // is the mapping a tuple mapping?
    boolean isTupleMapping = false;
    int tupleIdCount = 0;
    if ( nrNonEmpty >= 5 && nrNonEmpty <= 6 ) {
      for ( int i = 0; i < nrNonEmpty; i++ ) {
        if ( m_fieldsView.getNonEmpty( i ).getText( 1 ).equals( Mapping.TupleMapping.KEY.toString() ) || m_fieldsView
          .getNonEmpty( i ).getText( 1 ).equals( Mapping.TupleMapping.FAMILY.toString() ) || m_fieldsView.getNonEmpty(
          i ).getText( 1 ).equals( Mapping.TupleMapping.COLUMN.toString() ) || m_fieldsView.getNonEmpty( i )
          .getText( 1 ).equals( Mapping.TupleMapping.VALUE.toString() ) || m_fieldsView.getNonEmpty( i )
          .getText( 1 ).equals( Mapping.TupleMapping.TIMESTAMP.toString() ) || m_fieldsView.getNonEmpty(
          i ).getText( 1 ).equals( MappingUtils.TUPLE_MAPPING_VISIBILITY ) ) {
          tupleIdCount++;
        }
      }
    }

    if ( tupleIdCount == 5 || tupleIdCount == 6 ) {
      isTupleMapping = true;
      theMapping.setTupleMapping( true );
    }

    for ( int i = 0; i < nrNonEmpty; i++ ) {
      TableItem item = m_fieldsView.getNonEmpty( i );
      boolean isKey = false;
      String alias = null;
      if ( !Utils.isEmpty( item.getText( 1 ) ) ) {
        alias = item.getText( 1 ).trim();
      }
      if ( !Utils.isEmpty( item.getText( 2 ) ) ) {
        isKey = item.getText( 2 ).trim().equalsIgnoreCase( "Y" );

        if ( isKey && keyDefined ) {
          // more than one key, break here
          moreThanOneKey = true;
          break;
        }
        if ( isKey ) {
          keyDefined = true;
        }
      }
      // String family = null;
      String family = "";
      if ( !Utils.isEmpty( item.getText( 3 ) ) ) {
        family = item.getText( 3 );
      } else {
        if ( !isKey && !isTupleMapping ) {
          missingFamilies.add( item.getText( 0 ) );
        }
      }
      // String colName = null;
      String colName = "";
      if ( !Utils.isEmpty( item.getText( 4 ) ) ) {
        colName = item.getText( 4 );
      } else {
        if ( !isKey && !isTupleMapping ) {
          missingColumnNames.add( item.getText( 0 ) );
        }
      }
      String type = null;
      if ( !Utils.isEmpty( item.getText( 5 ) ) ) {
        type = item.getText( 5 );
      } else {
        missingTypes.add( item.getText( 0 ) );
      }
      String indexedVals = null;
      if ( !Utils.isEmpty( item.getText( 6 ) ) ) {
        indexedVals = item.getText( 6 );
      }

      HBaseValueMetaInterfaceFactory valueMetaInterfaceFactory = hBaseService.getHBaseValueMetaInterfaceFactory();
      // only add if we have all the data and it's all correct
      if ( isKey && !moreThanOneKey ) {
        if ( Utils.isEmpty( alias ) ) {
          // pop up an error dialog - the key must have an alias because it does not
          // belong to a column family or have a column name
          if ( performChecksAndShowGUIErrorDialog ) {
            MessageDialog.openError( m_shell, Messages.getString( "MappingDialog.Error.Title.NoAliasForKey" ), Messages
              .getString( "MappingDialog.Error.Message.NoAliasForKey" ) );
          }
          if ( problems != null ) {
            problems.add( Messages.getString( "MappingDialog.Error.Message.NoAliasForKey" ) );
          }
          return null;
        }

        if ( Utils.isEmpty( type ) ) {
          // pop up an error dialog - must have a type for the key
          if ( performChecksAndShowGUIErrorDialog ) {
            MessageDialog.openError( m_shell, Messages.getString( "MappingDialog.Error.Title.NoTypeForKey" ), Messages
              .getString( "MappingDialog.Error.Message.NoTypeForKey" ) );
          }
          if ( problems != null ) {
            problems.add( Messages.getString( "MappingDialog.Error.Message.NoTypeForKey" ) );
          }
          return null;
        }

        if ( moreThanOneKey ) {
          // pop up an error and then return
          if ( performChecksAndShowGUIErrorDialog ) {
            MessageDialog.openError( m_shell, Messages.getString( "MappingDialog.Error.Title.MoreThanOneKey" ), Messages
              .getString( "MappingDialog.Error.Message.MoreThanOneKey" ) );
          }
          if ( problems != null ) {
            problems.add( Messages.getString( "MappingDialog.Error.Message.MoreThanOneKey" ) );
          }
          return null;
        }

        if ( isTupleMapping ) {
          theMapping.setKeyName( alias );
          theMapping.setTupleFamilies( family );
        } else {
          theMapping.setKeyName( alias );
        }
        HBaseValueMetaInterface vm =
          valueMetaInterfaceFactory.createHBaseValueMetaInterface( null, null, alias, 0, -1, -1 );
        vm.setKey( true );
        try {
          theMapping.setKeyTypeAsString( type );
          vm.setType( HBaseInput.getKettleTypeByKeyType( theMapping.getKeyType() ) );
          if ( includeKeyToColumns ) {
            theMapping.addMappedColumn( vm, isTupleMapping );
          }
        } catch ( Exception ex ) {
          // Ignore
        }
      } else {
        ByteConversionUtil byteConversionUtil = hBaseService.getByteConversionUtil();
        // don't bother adding if there are any errors
        if
          ( missingFamilies.size() == 0 && missingColumnNames.size() == 0 && missingTypes.size() == 0 ) {
          // Set the alias name to the column name if no alias value is detected
          if ( Utils.isEmpty( alias ) ) {
            alias = colName;
            item.setText( 1, colName );
          }
          HBaseValueMetaInterface vm =
            valueMetaInterfaceFactory.createHBaseValueMetaInterface( family, colName, alias, 0, -1, -1 );
          try {
            vm.setHBaseTypeFromString( type );
          } catch ( IllegalArgumentException e ) {
            // TODO pop up an error dialog for this one
            return null;
          }
          if ( vm.isString() && indexedVals != null && indexedVals.length() > 0 ) {
            Object[] vals = byteConversionUtil.stringIndexListToObjects( indexedVals );
            vm.setIndex( vals );
            vm.setStorageType( ValueMetaInterface.STORAGE_TYPE_INDEXED );
          }

          try {
            theMapping.addMappedColumn( vm, isTupleMapping );
          } catch ( Exception ex ) {
            // pop up an error if this family:column is already in the mapping
            // and then return
            if ( performChecksAndShowGUIErrorDialog ) {
              MessageDialog.openError( m_shell, Messages.getString( "MappingDialog.Error.Title.DuplicateColumn" ),
                Messages.getString( "MappingDialog.Error.Message1.DuplicateColumn" ) + family + "," + colName
                  + Messages.getString( "MappingDialog.Error.Message2.DuplicateColumn" ) );
            }
            if ( problems != null ) {
              problems.add( Messages.getString( "MappingDialog.Error.Message1.DuplicateColumn" ) + family + ","
                + colName + Messages.getString( "MappingDialog.Error.Message2.DuplicateColumn" ) );
            }

            return null;
          }
        }
      }
    }

    // now check for any errors in our lists
    if ( !keyDefined ) {
      if ( performChecksAndShowGUIErrorDialog ) {
        MessageDialog.openError( m_shell, Messages.getString( "MappingDialog.Error.Title.NoKeyDefined" ), Messages
          .getString( "MappingDialog.Error.Message.NoKeyDefined" ) );
      }
      if ( problems != null ) {
        problems.add( Messages.getString( "MappingDialog.Error.Message.NoKeyDefined" ) );
      }
      return null;
    }

    if ( missingFamilies.size() > 0 || missingColumnNames.size() > 0 || missingTypes.size() > 0 ) {
      StringBuffer buff = new StringBuffer();
      buff.append( Messages.getString( "MappingDialog.Error.Message.IssuesPreventingSaving" ) + ":\n\n" );
      if ( missingFamilies.size() > 0 ) {
        buff.append( Messages.getString( "MappingDialog.Error.Message.FamilyIssue" ) + ":\n" );
        buff.append( missingFamilies.toString() ).append( "\n\n" );
      }
      if ( missingColumnNames.size() > 0 ) {
        buff.append( Messages.getString( "MappingDialog.Error.Message.ColumnIssue" ) + ":\n" );
        buff.append( missingColumnNames.toString() ).append( "\n\n" );
      }
      if ( missingTypes.size() > 0 ) {
        buff.append( Messages.getString( "MappingDialog.Error.Message.TypeIssue" ) + ":\n" );
        buff.append( missingTypes.toString() ).append( "\n\n" );
      }

      if ( performChecksAndShowGUIErrorDialog ) {
        MessageDialog.openError( m_shell, Messages.getString( "MappingDialog.Error.Title.IssuesPreventingSaving" ), buff
          .toString() );
      }
      if ( problems != null ) {
        problems.add( buff.toString() );
      }
      return null;
    }

    return theMapping;
  }

  private void saveMapping() {
    if ( namedClusterWidget != null && namedClusterWidget.getSelectedNamedCluster() == null ) {
      MessageDialog.openError( m_shell, BaseMessages.getString( PKG,
\"MappingDialog.Error.Title.NamedClusterNotSelected\" ), BaseMessages.getString( PKG,\n        \"MappingDialog.Error.Message.NamedClusterNotSelected.Msg\" ) );\n      return;\n    }\n\n    Mapping theMapping = getMapping( true, null, false );\n    if ( theMapping == null ) {\n      // some problem with the mapping (user will have been informed via dialog)\n      return;\n    }\n\n    if ( notInitializedMappingAdmin() ) {\n      try {\n        m_admin = MappingUtils.getMappingAdmin( m_configProducer );\n      } catch ( HBaseConnectionException e ) {\n        showConnectionErrorDialog( e );\n        return;\n      }\n    }\n\n    String tableName = theMapping.getTableName();\n\n    if ( m_allowTableCreate ) {\n      // check for existence of the table. If table doesn't exist\n      // prompt for creation\n      HBaseConnection hbAdmin = m_admin.getConnection();\n\n      try {\n        if ( !hbAdmin.getTable( tableName ).exists() ) {\n          boolean result =\n            MessageDialog.openConfirm( m_shell, \"Create table\", \"Table \\\"\" + tableName\n              + \"\\\" does not exist. 
Create it?\" );\n\n          if ( !result ) {\n            return;\n          }\n\n          if ( theMapping.getMappedColumns().size() == 0 ) {\n            MessageDialog.openError( m_shell, \"No columns defined\",\n              \"An HBase table requires at least one column family to be defined.\" );\n            return;\n          }\n\n          // collect up all the column families so that we can create the table\n          Set<String> cols = theMapping.getMappedColumns().keySet();\n          Set<String> families = new TreeSet<String>();\n          for ( String col : cols ) {\n            String family = theMapping.getMappedColumns().get( col ).getColumnFamily();\n            families.add( family );\n          }\n\n          // do we have additional parameters supplied in the table name field\n          String compression = null;\n          String bloomFilter = null;\n          String[] opts = m_existingTableNamesCombo.getText().trim().split( \"@\" );\n          if ( opts.length > 1 ) {\n            compression = opts[ 1 ];\n            if ( opts.length == 3 ) {\n              bloomFilter = opts[ 2 ];\n            }\n          }\n\n          Properties creationProps = new Properties();\n          if ( compression != null ) {\n            creationProps.setProperty( HBaseConnection.COL_DESCRIPTOR_COMPRESSION_KEY, compression );\n          }\n          if ( bloomFilter != null ) {\n            creationProps.setProperty( HBaseConnection.COL_DESCRIPTOR_BLOOM_FILTER_KEY, bloomFilter );\n          }\n          List<String> familyList = new ArrayList<String>( families );\n\n          // create the table\n          hbAdmin.getTable( tableName ).create( familyList, creationProps );\n\n          // refresh the table combo\n          populateTableCombo( true );\n        }\n      
} catch ( Exception ex ) {\n        new ErrorDialog( m_shell, Messages.getString( \"MappingDialog.Error.Title.ErrorCreatingTable\" ), Messages\n          .getString( \"MappingDialog.Error.Message.ErrorCreatingTable\" ) + \" \\\"\" + m_existingTableNamesCombo.getText()\n          .trim() + \"\\\"\", ex );\n        return;\n      }\n    }\n\n    try {\n      // now check to see if the mapping exists\n      if ( m_admin.mappingExists( tableName, m_existingMappingNamesCombo.getText().trim() ) ) {\n        // prompt for overwrite\n        boolean result =\n          MessageDialog.openConfirm( m_shell, Messages.getString( \"MappingDialog.Info.Title.MappingExists\" ), Messages\n            .getString( \"MappingDialog.Info.Message1.MappingExists\" ) + m_existingMappingNamesCombo.getText().trim()\n            + Messages.getString( \"MappingDialog.Info.Message2.MappingExists\" ) + tableName + Messages.getString(\n            \"MappingDialog.Info.Message3.MappingExists\" ) );\n        if ( !result ) {\n          return;\n        }\n      }\n      // finally add the mapping.\n      m_admin.putMapping( theMapping, true );\n      MessageDialog.openConfirm( m_shell, Messages.getString( \"MappingDialog.Info.Title.MappingSaved\" ), Messages\n        .getString( \"MappingDialog.Info.Message1.MappingSaved\" ) + m_existingMappingNamesCombo.getText().trim()\n        + Messages.getString( \"MappingDialog.Info.Message2.MappingSaved\" ) + tableName + Messages.getString(\n        \"MappingDialog.Info.Message3.MappingSaved\" ) );\n    } catch ( Exception ex ) {\n      // inform the user via popup\n      new ErrorDialog( m_shell, Messages.getString( \"MappingDialog.Error.Title.ErrorSaving\" ), Messages.getString(\n        \"MappingDialog.Error.Message.ErrorSaving\" ), ex );\n    }\n  }\n\n  public void setMapping( Mapping mapping ) {\n    if ( mapping == null ) {\n      return;\n    }\n    if ( !Utils.isEmpty( mapping.getTableName() ) ) {\n      m_existingTableNamesCombo.setText( 
mapping.getTableName() );\n    }\n\n    if ( !Utils.isEmpty( mapping.getMappingName() ) ) {\n      m_existingMappingNamesCombo.setText( mapping.getMappingName() );\n    }\n\n    m_fieldsView.clearAll();\n    // do the key first\n    TableItem keyItem = new TableItem( m_fieldsView.table, SWT.NONE );\n    if ( !Utils.isEmpty( mapping.getKeyName() ) ) {\n      keyItem.setText( 1, mapping.getKeyName() );\n    }\n    keyItem.setText( 2, \"Y\" );\n    if ( mapping.getKeyType() != null && !Utils.isEmpty( mapping.getKeyType().toString() ) ) {\n      keyItem.setText( 5, mapping.getKeyType().toString() );\n    }\n    if ( mapping.isTupleMapping() && !Utils.isEmpty( mapping.getTupleFamilies() ) ) {\n      keyItem.setText( 3, mapping.getTupleFamilies() );\n    }\n\n    // the rest of the fields in the mapping\n    Map<String, HBaseValueMetaInterface> mappedFields = mapping.getMappedColumns();\n    for ( String alias : mappedFields.keySet() ) {\n      HBaseValueMetaInterface vm = mappedFields.get( alias );\n      TableItem item = new TableItem( m_fieldsView.table, SWT.NONE );\n      item.setText( 1, alias );\n      item.setText( 2, \"N\" );\n      item.setText( 3, vm.getColumnFamily() );\n      item.setText( 4, vm.getColumnName() );\n\n      if ( vm.isInteger() ) {\n        if ( vm.getIsLongOrDouble() ) {\n          item.setText( 5, \"Long\" );\n        } else {\n          item.setText( 5, \"Integer\" );\n        }\n      } else if ( vm.isNumber() ) {\n        if ( vm.getIsLongOrDouble() ) {\n          item.setText( 5, \"Double\" );\n        } else {\n          item.setText( 5, \"Float\" );\n        }\n      } else {\n        item.setText( 5, vm.getTypeDesc() );\n      }\n\n      if ( vm.getStorageType() == ValueMetaInterface.STORAGE_TYPE_INDEXED ) {\n        item.setText( 6, m_admin.getConnection().getByteConversionUtil().objectIndexValuesToString( vm\n          .getIndex() ) );\n      }\n    }\n\n    m_fieldsView.removeEmptyRows();\n    m_fieldsView.setRowNums();\n    
m_fieldsView.optWidth( true );\n  }\n\n  private void loadTableViewFromMapping() {\n    String tableName = \"\";\n    if ( !Utils.isEmpty( m_existingTableNamesCombo.getText().trim() ) ) {\n      tableName = m_existingTableNamesCombo.getText().trim();\n\n      if ( tableName.indexOf( '@' ) > 0 ) {\n        tableName = tableName.substring( 0, tableName.indexOf( '@' ) );\n      }\n    }\n\n    try {\n      if ( m_admin.mappingExists( tableName, m_existingMappingNamesCombo.getText().trim() ) ) {\n\n        Mapping mapping = m_admin.getMapping( tableName, m_existingMappingNamesCombo.getText().trim() );\n\n        setMapping( mapping );\n      }\n\n    } catch ( Exception ex ) {\n      // inform the user via popup\n      new ErrorDialog( m_shell, Messages.getString( \"MappingDialog.Error.Title.ErrorLoadingMapping\" ), Messages\n        .getString( \"MappingDialog.Error.Message.ErrorLoadingMapping\" ), ex );\n    }\n  }\n\n  private void populateMappingComboAndFamilyStuff() {\n    String tableName = \"\";\n    if ( !Utils.isEmpty( m_existingTableNamesCombo.getText().trim() ) ) {\n      tableName = m_existingTableNamesCombo.getText().trim();\n\n      if ( tableName.indexOf( '@' ) > 0 ) {\n        tableName = tableName.substring( 0, tableName.indexOf( '@' ) );\n      }\n    }\n\n    // defaults if we fail to connect, table doesn't exist etc..\n    m_familyCI.setComboValues( new String[] { \"\" } );\n    m_existingMappingNamesCombo.removeAll();\n\n    if ( m_admin != null && !Utils.isEmpty( tableName ) ) {\n      try {\n\n        // first get the existing mapping names (if any)\n        List<String> mappingNames = m_admin.getMappingNames( tableName );\n        for ( String m : mappingNames ) {\n          m_existingMappingNamesCombo.add( m );\n        }\n\n        // now get family information for this table\n        HBaseConnection hbAdmin = m_admin.getConnection();\n\n        HBaseTable hBaseTable = hbAdmin.getTable( tableName );\n        if ( 
!HbaseUtil.parseQualifierFromTableName( tableName ).isEmpty() && hBaseTable.exists() ) {\n          List<String> colFams = hBaseTable.getColumnFamilies();\n          String[] familyNames = colFams.toArray( new String[ 0 ] );\n          m_familyCI.setComboValues( familyNames );\n        } else {\n          m_familyCI.setComboValues( new String[] { \"\" } );\n        }\n\n        m_familiesInvalidated = false;\n        return;\n\n      } catch ( Exception e ) {\n        showConnectionErrorDialog( e );\n      }\n    }\n  }\n\n  @Override\n  public HBaseService getHBaseService() throws ClusterInitializationException {\n    NamedCluster nc = namedClusterWidget.getSelectedNamedCluster();\n    return namedClusterServiceLocator.getService( nc, HBaseService.class );\n  }\n\n  public HBaseConnection getHBaseConnection() throws IOException, ClusterInitializationException {\n    return getHBaseService().getHBaseConnection( m_transMeta, null, null, null );\n  }\n\n  public String getCurrentConfiguration() {\n    String host = \"\";\n    String port = \"\";\n\n    NamedCluster nc = namedClusterWidget.getSelectedNamedCluster();\n\n    if ( nc != null ) {\n      host = m_transMeta.environmentSubstitute( nc.getZooKeeperHost() );\n      port = m_transMeta.environmentSubstitute( nc.getZooKeeperPort() );\n    }\n    return host + \":\" + port;\n  }\n\n  @Override\n  public void dispose() {\n    super.dispose();\n  }\n\n  /**\n   * @param name the name of the named cluster to select\n   */\n  public void setSelectedNamedCluster( String name ) {\n    namedClusterWidget.setSelectedNamedCluster( name );\n  }\n\n  /**\n   * @return the currently selected named cluster, or null if none is selected\n   */\n  public NamedCluster getSelectedNamedCluster() {\n    return namedClusterWidget.getSelectedNamedCluster();\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/mapping/MappingUtils.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hbase.mapping;\n\nimport java.io.IOException;\nimport java.util.HashSet;\nimport java.util.List;\nimport java.util.Set;\n\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.big.data.kettle.plugins.hbase.HBaseConnectionException;\nimport org.pentaho.big.data.kettle.plugins.hbase.MappingDefinition;\nimport org.pentaho.big.data.kettle.plugins.hbase.MappingDefinition.MappingColumn;\nimport org.pentaho.big.data.kettle.plugins.hbase.input.HBaseInput;\nimport org.pentaho.big.data.kettle.plugins.hbase.input.Messages;\nimport org.pentaho.hadoop.shim.api.hbase.ByteConversionUtil;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseConnection;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterfaceFactory;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.variables.VariableSpace;\n\npublic class MappingUtils {\n\n  public static final int TUPLE_COLUMNS_COUNT = 5;\n\n  public static final int UNDEFINED_VALUE = -1;\n\n  private static final Set<String> TUPLE_COLUMNS = new HashSet<String>();\n\n  public static final String TUPLE_MAPPING_VISIBILITY = \"Visibility\";\n\n  static {\n    TUPLE_COLUMNS.add( Mapping.TupleMapping.KEY.toString() );\n    TUPLE_COLUMNS.add( 
Mapping.TupleMapping.FAMILY.toString() );\n    TUPLE_COLUMNS.add( Mapping.TupleMapping.COLUMN.toString() );\n    TUPLE_COLUMNS.add( Mapping.TupleMapping.VALUE.toString() );\n    TUPLE_COLUMNS.add( Mapping.TupleMapping.TIMESTAMP.toString() );\n  }\n\n  public static MappingAdmin getMappingAdmin( ConfigurationProducer cProducer ) throws HBaseConnectionException {\n    HBaseConnection hbConnection = null;\n    try {\n      hbConnection = cProducer.getHBaseConnection();\n      hbConnection.checkHBaseAvailable();\n      return new MappingAdmin( hbConnection );\n    } catch ( ClusterInitializationException | IOException e ) {\n      throw new HBaseConnectionException( Messages.getString( \"MappingDialog.Error.Message.UnableToConnect\" ), e );\n    }\n  }\n\n  public static MappingAdmin getMappingAdmin( HBaseService hBaseService, VariableSpace variableSpace, String siteConfig,\n      String defaultConfig ) throws IOException {\n    HBaseConnection hBaseConnection = hBaseService.getHBaseConnection( variableSpace, siteConfig, defaultConfig, null );\n    return new MappingAdmin( hBaseConnection );\n  }\n\n  public static Mapping getMapping( MappingDefinition mappingDefinition, HBaseService hBaseService )\n    throws KettleException {\n    final String tableName = mappingDefinition.getTableName();\n    // an empty table name or mapping name prevents the mapping from being built\n    if ( Const.isEmpty( tableName ) || Const.isEmpty( mappingDefinition.getMappingName() ) ) {\n      throw new KettleException( Messages.getString( \"MappingDialog.Error.Message.MissingTableMappingName\" ) );\n    }\n    // do we have any non-empty mapping definition?\n    if ( mappingDefinition.getMappingColumns() == null || mappingDefinition.getMappingColumns().isEmpty() ) {\n      throw new KettleException( Messages.getString( \"MappingDialog.Error.Message.NoFieldsDefined\" ) );\n    }\n\n    Mapping theMapping =\n        hBaseService.getMappingFactory().createMapping( tableName, mappingDefinition.getMappingName() 
);\n    // is the mapping a tuple mapping?\n    final boolean isTupleMapping = isTupleMapping( mappingDefinition );\n    if ( isTupleMapping ) {\n      theMapping.setTupleMapping( true );\n    }\n\n    List<MappingColumn> mappingColumns = mappingDefinition.getMappingColumns();\n    // TODO: use a more specific identifier than the row number\n    int columnNumber = 0;\n    boolean keyDefined = false;\n    for ( MappingColumn column : mappingColumns ) {\n      columnNumber++;\n      final String alias = column.getAlias();\n      final boolean isKey = column.isKey();\n      if ( isKey ) {\n        if ( keyDefined ) {\n          throw new KettleException( Messages.getString( \"MappingDialog.Error.Message.MoreThanOneKey\" ) );\n        }\n        keyDefined = true;\n      }\n\n      String family = null;\n      if ( !Const.isEmpty( column.getColumnFamily() ) ) {\n        family = column.getColumnFamily();\n      } else if ( !isKey && !isTupleMapping ) {\n        throw new KettleException( Messages.getString( \"MappingDialog.Error.Message.FamilyIssue\" ) + \": \"\n            + columnNumber );\n      }\n      String colName = null;\n      if ( !Const.isEmpty( column.getColumnName() ) ) {\n        colName = column.getColumnName();\n      } else if ( !isKey && !isTupleMapping ) {\n        throw new KettleException( Messages.getString( \"MappingDialog.Error.Message.ColumnIssue\" ) + \": \"\n            + columnNumber );\n      }\n      String type = null;\n      if ( !Const.isEmpty( column.getType() ) ) {\n        type = column.getType();\n      } else {\n        throw new KettleException( Messages.getString( \"MappingDialog.Error.Message.TypeIssue\" ) + \": \"\n            + columnNumber );\n      }\n\n      HBaseValueMetaInterfaceFactory valueMetaInterfaceFactory = hBaseService.getHBaseValueMetaInterfaceFactory();\n      if ( isKey ) {\n        if ( Const.isEmpty( alias ) ) {\n          throw new KettleException( Messages.getString( 
\"MappingDialog.Error.Message.NoAliasForKey\" ) );\n        }\n\n        if ( isTupleMapping ) {\n          theMapping.setKeyName( alias );\n          theMapping.setTupleFamilies( family );\n        } else {\n          theMapping.setKeyName( alias );\n        }\n        HBaseValueMetaInterface valueMeta =\n            valueMetaInterfaceFactory.createHBaseValueMetaInterface( null, null, alias, 0, UNDEFINED_VALUE,\n                UNDEFINED_VALUE );\n        valueMeta.setKey( true );\n        try {\n          theMapping.setKeyTypeAsString( type );\n          valueMeta.setType( HBaseInput.getKettleTypeByKeyType( theMapping.getKeyType() ) );\n        } catch ( Exception ex ) {\n          // Ignore\n        }\n      } else {\n        try {\n          HBaseValueMetaInterface valueMeta =\n              buildNonKeyValueMeta( alias, family, colName, type, column.getIndexedValues(), hBaseService );\n          theMapping.addMappedColumn( valueMeta, isTupleMapping );\n        } catch ( Exception ex ) {\n          String message =\n              Messages.getString( \"MappingDialog.Error.Message1.DuplicateColumn\" ) + family + \",\" + colName + Messages\n                  .getString( \"MappingDialog.Error.Message2.DuplicateColumn\" );\n          throw new KettleException( message );\n        }\n      }\n    }\n\n    if ( !keyDefined ) {\n      throw new KettleException( Messages.getString( \"MappingDialog.Error.Message.NoKeyDefined\" ) );\n    }\n    return theMapping;\n  }\n\n  public static HBaseValueMetaInterface buildNonKeyValueMeta( String alias, String family, String columnName,\n      String type, String indexedVals, HBaseService hBaseService ) throws KettleException {\n    HBaseValueMetaInterfaceFactory valueMetaInterfaceFactory = hBaseService.getHBaseValueMetaInterfaceFactory();\n    HBaseValueMetaInterface valueMeta =\n        valueMetaInterfaceFactory.createHBaseValueMetaInterface( family, columnName, alias, 0, UNDEFINED_VALUE,\n            UNDEFINED_VALUE );\n    try 
{\n      valueMeta.setHBaseTypeFromString( type );\n      if ( valueMeta.isString() && !Const.isEmpty( indexedVals ) ) {\n        ByteConversionUtil byteConversionUtil = hBaseService.getByteConversionUtil();\n        Object[] vals = byteConversionUtil.stringIndexListToObjects( indexedVals );\n        valueMeta.setIndex( vals );\n        valueMeta.setStorageType( ValueMetaInterface.STORAGE_TYPE_INDEXED );\n      }\n      return valueMeta;\n    } catch ( IllegalArgumentException e ) {\n      throw new KettleException( e );\n    }\n  }\n\n  public static boolean isTupleMapping( MappingDefinition mappingDefinition ) {\n    List<MappingColumn> mappingColumns = mappingDefinition.getMappingColumns();\n    int mappingSize = mappingColumns.size();\n    if ( !( mappingSize == TUPLE_COLUMNS_COUNT || mappingSize == TUPLE_COLUMNS_COUNT + 1 ) ) {\n      return false;\n    }\n    int tupleIdCount = 0;\n    for ( MappingColumn column : mappingColumns ) {\n      if ( isTupleMappingColumn( column.getAlias() ) ) {\n        tupleIdCount++;\n      }\n    }\n    return tupleIdCount == TUPLE_COLUMNS_COUNT || tupleIdCount == TUPLE_COLUMNS_COUNT + 1;\n  }\n\n  public static boolean isTupleMappingColumn( String columnName ) {\n    // null-safe comparison: the column alias may be absent\n    return TUPLE_COLUMNS.contains( columnName ) || TUPLE_MAPPING_VISIBILITY.equals( columnName );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/output/HBaseOutput.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.output;\n\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Map;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.MappingAdmin;\nimport org.pentaho.big.data.kettle.plugins.hbase.output.KettleRowToHBaseTuple.FieldException;\nimport org.pentaho.hadoop.shim.api.hbase.ByteConversionUtil;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseConnection;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBaseDelete;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBasePut;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBaseTable;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBaseTableWriteOperationManager;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStep;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\n\n/**\n * Class providing an output step for 
writing data to an HBase table according to meta data column/type mapping info\n * stored in a separate HBase table called \"pentaho_mappings\". See org.pentaho.hbase.mapping.Mapping for details on the\n * meta data format.\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n */\npublic class HBaseOutput extends BaseStep implements StepInterface {\n\n  protected HBaseOutputMeta m_meta;\n  protected HBaseOutputData m_data;\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n  private HBaseService hBaseService;\n  private HBaseTableWriteOperationManager targetTableWriteOperationManager;\n\n  public HBaseOutput( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta,\n      Trans trans, NamedClusterServiceLocator namedClusterServiceLocator ) {\n\n    super( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n  }\n\n  /** Configuration object for connecting to HBase */\n  protected HBaseConnection m_hbAdmin;\n\n  /** Byte utilities */\n  protected ByteConversionUtil m_bytesUtil;\n\n  /** The mapping admin object for interacting with mapping information */\n  protected MappingAdmin m_mappingAdmin;\n\n  /** The mapping information to use in order to encode values for HBase columns */\n  protected Mapping m_tableMapping;\n\n  /** Information from the mapping */\n  protected Map<String, HBaseValueMetaInterface> m_columnsMappedByAlias;\n\n  /** The target table; non-null once it has been connected to successfully */\n  protected HBaseTable targetTable;\n\n  /** Index of the key in the incoming fields */\n  protected int m_incomingKeyIndex;\n\n  /** The ValueMetaInterface of the incoming key field */\n  protected ValueMetaInterface m_incomingKeyValueMeta;\n\n  /** Object used when a tuple is supplied as the incoming fields */\n  protected KettleRowToHBaseTuple tupleRowConverter;\n\n  @Override\n  public boolean processRow( StepMetaInterface smi, 
StepDataInterface sdi ) throws KettleException {\n\n    Object[] r = getRow();\n\n    if ( r == null ) {\n      // no more input\n\n      // clean up/close connections etc.\n      // target table will be null if we haven't seen any input\n      if ( targetTable != null ) {\n        if ( targetTableWriteOperationManager != null ) {\n          try {\n            if ( !targetTableWriteOperationManager.isAutoFlush() ) {\n              logBasic( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.FlushingWriteBuffer\" ) );\n              targetTableWriteOperationManager.flushCommits();\n            }\n          } catch ( Exception ex ) {\n            throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG,\n                \"HBaseOutput.Error.ProblemFlushingBufferedData\", ex.getMessage() ), ex );\n          } finally {\n            try {\n              targetTableWriteOperationManager.close();\n            } catch ( IOException e ) {\n              // Ignore\n            }\n          }\n        }\n        try {\n          targetTable.close();\n        } catch ( IOException e ) {\n          // Ignore\n        }\n\n        try {\n          logBasic( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.ClosingConnectionToTable\" ) );\n          targetTable = null;\n          m_hbAdmin.close();\n        } catch ( Exception ex ) {\n          throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG,\n              \"HBaseOutput.Error.ProblemWhenClosingConnection\", ex.getMessage() ), ex );\n        }\n      }\n\n      setOutputDone();\n      return false;\n    }\n\n    if ( first ) {\n      first = false;\n      m_meta = (HBaseOutputMeta) smi;\n      m_data = (HBaseOutputData) sdi;\n\n      // Get the connection to HBase\n      try {\n        logBasic( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.ConnectingToHBase\" ) );\n\n        List<String> connectionMessages = new ArrayList<String>();\n        hBaseService = 
namedClusterServiceLocator.getService( m_meta.getNamedCluster(), HBaseService.class );\n        m_hbAdmin =\n            hBaseService.getHBaseConnection( this, environmentSubstitute( m_meta.getCoreConfigURL() ),\n                environmentSubstitute( m_meta.getDefaultConfigURL() ), log );\n        m_bytesUtil = hBaseService.getByteConversionUtil();\n\n        if ( connectionMessages.size() > 0 ) {\n          for ( String m : connectionMessages ) {\n            logBasic( m );\n          }\n        }\n      } catch ( Exception ex ) {\n        throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG,\n            \"HBaseOutput.Error.UnableToObtainConnection\", ex.getMessage() ), ex );\n      }\n      try {\n        m_mappingAdmin = new MappingAdmin( m_hbAdmin );\n      } catch ( Exception ex ) {\n        throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG,\n            \"HBaseOutput.Error.UnableToObtainConnection\", ex.getMessage() ), ex );\n      }\n\n      // check on the existence and readiness of the target table\n      String targetName = environmentSubstitute( m_meta.getTargetTableName() );\n      if ( Utils.isEmpty( targetName ) ) {\n        throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG,\n            \"HBaseOutput.Error.NoTargetTableSpecified\" ) );\n      }\n      try {\n        targetTable = m_hbAdmin.getTable( targetName );\n        if ( !targetTable.exists() ) {\n          throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG,\n              \"HBaseOutput.Error.TargetTableDoesNotExist\", targetName ) );\n        }\n\n        if ( targetTable.disabled() || !targetTable.available() ) {\n          throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG,\n              \"HBaseOutput.Error.TargetTableIsNotAvailable\", targetName ) );\n        }\n      } catch ( Exception ex ) {\n        throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG,\n            
\"HBaseOutput.Error.ProblemWhenCheckingAvailReadiness\", targetName, ex.getMessage() ), ex );\n      }\n\n      // Get mapping details for the target table\n\n      if ( m_meta.getMapping() != null && Utils.isEmpty( m_meta.getTargetMappingName() ) ) {\n        m_tableMapping = m_meta.getMapping();\n      } else {\n        try {\n          logBasic( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.RetrievingMappingDetails\" ) );\n\n          m_tableMapping =\n              m_mappingAdmin.getMapping( environmentSubstitute( m_meta.getTargetTableName() ), environmentSubstitute(\n                  m_meta.getTargetMappingName() ) );\n        } catch ( Exception ex ) {\n          throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG,\n              \"HBaseOutput.Error.ProblemGettingMappingInfo\", ex.getMessage() ), ex );\n        }\n      }\n      m_columnsMappedByAlias = m_tableMapping.getMappedColumns();\n\n      if ( !m_meta.m_deleteRowKey && m_tableMapping.isTupleMapping() ) {\n        /*\n         * We are not executing a delete and the mapping is a tuple mapping\n         * Deletes need to go through the other branch of code to decode the incoming key field index\n         */\n        try {\n          tupleRowConverter = new KettleRowToHBaseTuple( getInputRowMeta(), m_tableMapping, m_columnsMappedByAlias );\n        } catch ( Exception e ) {\n          throw new KettleException( e );\n        }\n\n      } else {\n\n        // check that all incoming fields are in the mapping.\n        // fewer fields than the mapping defines is OK as long as we have\n        // the key as an incoming field. Can either use strict type checking\n        // or use an error stream for rows where type-conversion to the mapping\n        // types fail. Probably should use an error stream - e.g. 
could get rows\n        // with negative numeric key value where mapping specifies an unsigned key\n        boolean incomingKey = false;\n        RowMetaInterface inMeta = getInputRowMeta();\n        for ( int i = 0; i < inMeta.size(); i++ ) {\n          ValueMetaInterface vm = inMeta.getValueMeta( i );\n          String inName = vm.getName();\n\n          if ( m_tableMapping.getKeyName().equals( inName ) ) {\n            incomingKey = true;\n            m_incomingKeyIndex = i;\n            m_incomingKeyValueMeta = vm;\n            // should we check the type?\n          } else {\n            HBaseValueMetaInterface hvm = m_columnsMappedByAlias.get( inName.trim() );\n            if ( hvm == null && !m_meta.getDeleteRowKey() ) {\n              throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG,\n                  \"HBaseOutput.Error.CantFindIncomingField\", inName, m_tableMapping.getMappingName() ) );\n            }\n          }\n        }\n\n        if ( !incomingKey ) {\n          throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG,\n              \"HBaseOutput.Error.TableKeyNotPresentInIncomingFields\", m_tableMapping.getKeyName(), m_tableMapping\n                  .getMappingName() ) );\n        }\n\n      }\n\n      try {\n        logBasic( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.ConnectingToTargetTable\" ) );\n\n        // set a write buffer size (and disable auto flush)\n        Long writeBufferSize = null;\n        if ( !Utils.isEmpty( m_meta.getWriteBufferSize() ) ) {\n          writeBufferSize = Long.parseLong( environmentSubstitute( m_meta.getWriteBufferSize() ) );\n\n          logBasic( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.SettingWriteBuffer\", writeBufferSize ) );\n\n          if ( m_meta.getDisableWriteToWAL() ) {\n            logBasic( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.DisablingWriteToWAL\" ) );\n          }\n        }\n        
targetTableWriteOperationManager = targetTable.createWriteOperationManager( writeBufferSize );\n      } catch ( Exception e ) {\n        throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG,\n            \"HBaseOutput.Error.ProblemConnectingToTargetTable\", e.getMessage() ), e );\n      }\n\n      // output (downstream) is the same as input\n      m_data.setOutputRowMeta( getInputRowMeta() );\n    }\n\n\n    if ( m_meta.getDeleteRowKey() ) {\n\n      try {\n\n        if ( m_incomingKeyValueMeta.isNull( r[m_incomingKeyIndex] ) ) {\n          throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.Error.IncomingRowHasNullKeyValue\" ) );\n        }\n\n        byte[] encodedKeyBytes = m_bytesUtil.encodeKeyValue( r[m_incomingKeyIndex], m_incomingKeyValueMeta, m_tableMapping.getKeyType() );\n        HBaseDelete hBaseDelete = targetTableWriteOperationManager.createDelete( encodedKeyBytes );\n        hBaseDelete.execute();\n\n      } catch ( Exception ex ) {\n\n        if ( getStepMeta().isDoingErrorHandling() ) {\n          String errorDescriptions = \"\";\n          if ( !Utils.isEmpty( ex.getMessage() ) ) {\n            errorDescriptions = ex.getMessage();\n          } else {\n            errorDescriptions = BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.Error.ErrorCreatingDelete\" );\n          }\n          putError( getInputRowMeta(), r, 1, errorDescriptions, m_tableMapping.getKeyName(), \"HBaseOutput004\" );\n\n          return true;\n        } else {\n          throw new KettleException( ex );\n        }\n      }\n\n    } else {\n      // Put the data\n      HBasePut hBasePut;\n\n      if ( tupleRowConverter != null ) {\n\n        try {\n\n          hBasePut =\n              tupleRowConverter.createTuplePut( targetTableWriteOperationManager, m_bytesUtil, r, !m_meta\n                  .getDisableWriteToWAL() );\n        } catch ( Exception ex ) {\n\n          if ( getStepMeta().isDoingErrorHandling() ) {\n   
         String errorDescriptions = \"\";\n            String errorFields = \"Unknown\";\n            if ( ex instanceof FieldException ) {\n              errorFields =  ( (FieldException) ex ).getFieldString();\n              errorDescriptions = BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.Error.MissingFieldData\", errorFields );\n            } else if ( !Utils.isEmpty( ex.getMessage() ) ) {\n              errorDescriptions = ex.getMessage();\n            } else {\n              errorDescriptions = BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.Error.ErrorCreatingPut\" );\n            }\n            putError( getInputRowMeta(), r, 1, errorDescriptions, errorFields, \"HBaseOutput003\" );\n\n            return true;\n          } else {\n            throw new KettleException( ex );\n          }\n\n        }\n\n      } else {\n\n        try {\n          // key must not be null\n          hBasePut =\n              HBaseOutputData.initializeNewPut( getInputRowMeta(), m_incomingKeyIndex, r, m_tableMapping, m_bytesUtil,\n                  targetTableWriteOperationManager, !m_meta.getDisableWriteToWAL() );\n          if ( hBasePut == null ) {\n            String errorDescriptions =\n                BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.Error.IncomingRowHasNullKeyValue\" );\n            if ( getStepMeta().isDoingErrorHandling() ) {\n              String errorFields = m_tableMapping.getKeyName();\n              putError( getInputRowMeta(), r, 1, errorDescriptions, errorFields, \"HBaseOutput001\" );\n\n              return true;\n            } else {\n              throw new KettleException( errorDescriptions );\n            }\n          }\n        } catch ( Exception ex ) {\n          throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG,\n              \"HBaseOutput.Error.UnableToSetTargetTable\" ), ex );\n        }\n\n        // now encode the rest of the fields. 
Nulls do not get inserted of course\n        HBaseOutputData.addColumnsToPut( getInputRowMeta(), r, m_incomingKeyIndex, m_columnsMappedByAlias, hBasePut,\n            m_bytesUtil );\n      }\n\n      try {\n        hBasePut.execute();\n      } catch ( Exception e ) {\n        String errorDescriptions =\n            BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.Error.ProblemInsertingRowIntoHBase\", e\n                .getMessage() );\n        if ( getStepMeta().isDoingErrorHandling() ) {\n          String errorFields = \"Unknown\";\n          putError( getInputRowMeta(), r, 1, errorDescriptions, errorFields, \"HBaseOutput002\" );\n        } else {\n          throw new KettleException( errorDescriptions, e );\n        }\n      }\n    }\n\n    // pass on the data to any downstream steps\n    putRow( m_data.getOutputRowMeta(), r );\n\n    if ( log.isRowLevel() ) {\n      log.logRowlevel( toString(), \"Read row #\" + getLinesRead() + \" : \" + r );\n    }\n\n    if ( checkFeedback( getLinesRead() ) ) {\n      logBasic( \"Linenr \" + getLinesRead() );\n    }\n\n    return true;\n  }\n\n  @Override\n  public boolean init( StepMetaInterface smi, StepDataInterface sdi ) {\n    if ( super.init( smi, sdi ) ) {\n      HBaseOutputMeta meta = (HBaseOutputMeta) smi;\n      try {\n        // Set the embedded NamedCluster MetaStore provider key so that it can be passed to VFS\n        if ( getTransMeta().getNamedClusterEmbedManager() != null ) {\n          getTransMeta().getNamedClusterEmbedManager().passEmbeddedMetastoreKey( getTransMeta(),\n            getTransMeta().getEmbeddedMetastoreProviderKey() );\n        }\n        meta.applyInjection( this );\n        return true;\n      } catch ( KettleException e ) {\n        logError( \"Error while injecting properties\", e );\n      }\n    }\n    return false;\n  }\n\n  @Override\n  public void setStopped( boolean stopped ) {\n    if ( isStopped() && stopped ) {\n      return;\n    }\n    super.setStopped( stopped 
);\n\n    if ( stopped ) {\n      if ( targetTable != null ) {\n        try {\n          if ( !targetTableWriteOperationManager.isAutoFlush() ) {\n            logBasic( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.FlushingWriteBuffer\" ) );\n            targetTableWriteOperationManager.flushCommits();\n          }\n        } catch ( Exception ex ) {\n          logError( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.Error.ProblemFlushingBufferedData\", ex\n              .getMessage() ), ex );\n        }\n      }\n      if ( m_hbAdmin != null ) {\n        try {\n          logBasic( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.ClosingConnectionToTable\" ) );\n          m_hbAdmin.close();\n        } catch ( Exception ex ) {\n          logError( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.Error.ProblemWhenClosingConnection\", ex\n              .getMessage() ), ex );\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/output/HBaseOutputData.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.output;\n\nimport org.pentaho.hadoop.shim.api.hbase.ByteConversionUtil;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBasePut;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBaseTableWriteOperationManager;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.step.BaseStepData;\nimport org.pentaho.di.trans.step.StepDataInterface;\n\nimport java.net.MalformedURLException;\nimport java.net.URL;\nimport java.util.Map;\n\n/**\n * Class providing an output step for writing data to an HBase table according to meta data column/type mapping info\n * stored in a separate HBase table called \"pentaho_mappings\". 
See org.pentaho.hbase.mapping.Mapping for details on the\n * meta data format.\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n */\npublic class HBaseOutputData extends BaseStepData implements StepDataInterface {\n\n  /** The output data format */\n  protected RowMetaInterface m_outputRowMeta;\n\n  public RowMetaInterface getOutputRowMeta() {\n    return m_outputRowMeta;\n  }\n\n  public void setOutputRowMeta( RowMetaInterface rmi ) {\n    m_outputRowMeta = rmi;\n  }\n\n  /**\n   * Sets up a new target table put operation using the supplied write operation manager\n   *\n   * @param inRowMeta\n   *          the incoming kettle row meta data\n   * @param keyIndex\n   *          the index of the key in the incoming row structure\n   * @param kettleRow\n   *          the current incoming kettle row\n   * @param tableMapping\n   *          the HBase table mapping to use\n   * @param bu\n   *          the byte util shim to use for conversion to and from byte arrays\n   * @param hBaseTableWriteOperationManager\n   *          the write operation manager for the target table\n   * @param writeToWAL\n   *          true if the write ahead log should be written to\n   * @return the new put operation, or null if the key is null (missing) for the current incoming kettle row\n   * @throws Exception\n   *           if a problem occurs when initializing the new put operation\n   */\n  public static HBasePut initializeNewPut( RowMetaInterface inRowMeta, int keyIndex, Object[] kettleRow,\n      Mapping tableMapping, ByteConversionUtil bu, HBaseTableWriteOperationManager hBaseTableWriteOperationManager,\n      boolean writeToWAL ) throws Exception {\n    ValueMetaInterface keyvm = inRowMeta.getValueMeta( keyIndex );\n\n    if ( keyvm.isNull( kettleRow[keyIndex] ) ) {\n      return null;\n    }\n\n    byte[] encodedKey = bu.encodeKeyValue( kettleRow[keyIndex], keyvm, tableMapping.getKeyType() );\n\n    HBasePut hBaseTablePut = hBaseTableWriteOperationManager.createPut( encodedKey );\n    hBaseTablePut.setWriteToWAL( writeToWAL );\n    return hBaseTablePut;\n  }\n\n  
/**\n   * Adds those incoming kettle field values that are defined in the table mapping for the current row to the target\n   * table put operation\n   *\n   * @param inRowMeta\n   *          the incoming kettle row meta data\n   * @param kettleRow\n   *          the current incoming kettle row\n   * @param keyIndex\n   *          the index of the key in the incoming row structure\n   * @param columnsMappedByAlias\n   *          the columns in the table mapping\n   * @param hBasePut\n   *          the put operation to add the column values to\n   * @param bu\n   *          the byte util shim to use for conversion to and from byte arrays\n   * @throws KettleException\n   *           if a problem occurs when adding a column to the put operation\n   */\n  public static void addColumnsToPut( RowMetaInterface inRowMeta, Object[] kettleRow, int keyIndex,\n      Map<String, HBaseValueMetaInterface> columnsMappedByAlias, HBasePut hBasePut, ByteConversionUtil bu )\n    throws KettleException {\n\n    for ( int i = 0; i < inRowMeta.size(); i++ ) {\n      ValueMetaInterface current = inRowMeta.getValueMeta( i );\n      if ( i != keyIndex && !current.isNull( kettleRow[i] ) ) {\n        HBaseValueMetaInterface hbaseColMeta = columnsMappedByAlias.get( current.getName() );\n        String columnFamily = hbaseColMeta.getColumnFamily();\n        String columnName = hbaseColMeta.getColumnName();\n\n        boolean binaryColName = false;\n        if ( columnName.startsWith( \"@@@binary@@@\" ) ) {\n          // assume hex encoded column name\n          columnName = columnName.replace( \"@@@binary@@@\", \"\" );\n          binaryColName = true;\n        }\n        byte[] encoded = hbaseColMeta.encodeColumnValue( kettleRow[i], current );\n\n        try {\n          hBasePut.addColumn( columnFamily, columnName, binaryColName, encoded );\n        } catch ( Exception ex ) {\n          throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG,\n              
\"HBaseOutput.Error.UnableToAddColumnToTargetTablePut\" ), ex );\n        }\n      }\n    }\n  }\n\n  public static URL stringToURL( String pathOrURL ) throws MalformedURLException {\n    URL result = null;\n\n    if ( !Const.isEmpty( pathOrURL ) ) {\n      if ( pathOrURL.toLowerCase().startsWith( \"http://\" ) || pathOrURL.toLowerCase().startsWith( \"file://\" ) ) {\n        result = new URL( pathOrURL );\n      } else {\n        String c = \"file://\" + pathOrURL;\n        result = new URL( c );\n      }\n    }\n\n    return result;\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/output/HBaseOutputDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.output;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.eclipse.jface.dialogs.MessageDialog;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.CCombo;\nimport org.eclipse.swt.custom.CTabFolder;\nimport org.eclipse.swt.custom.CTabItem;\nimport org.eclipse.swt.events.ModifyEvent;\nimport org.eclipse.swt.events.ModifyListener;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.events.ShellAdapter;\nimport org.eclipse.swt.events.ShellEvent;\nimport org.eclipse.swt.graphics.Cursor;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Event;\nimport org.eclipse.swt.widgets.FileDialog;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.big.data.kettle.plugins.hbase.HbaseUtil;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport 
org.pentaho.big.data.kettle.plugins.hbase.ServiceStatus;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.ConfigurationProducer;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.FieldProducer;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.MappingAdmin;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.MappingEditor;\nimport org.pentaho.big.data.plugins.common.ui.NamedClusterWidgetImpl;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseConnection;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.Props;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDialogInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.trans.step.BaseStepDialog;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Set;\n\n/**\n * Dialog class for HBaseOutput\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n */\n@PluginDialog( id = \"HBaseOutput\", image = \"HBO.svg\", pluginType = PluginDialog.PluginType.JOBENTRY,\n  documentationUrl = \"Products/HBase_Output\" )\npublic class HBaseOutputDialog extends BaseStepDialog implements StepDialogInterface, ConfigurationProducer,\n  FieldProducer {\n\n  private final HBaseOutputMeta m_currentMeta;\n  private final HBaseOutputMeta m_originalMeta;\n  private final HBaseOutputMeta 
m_configurationMeta;\n\n  /**\n   * various UI bits and pieces for the dialog\n   */\n  private Label m_stepnameLabel;\n  private Text m_stepnameText;\n\n  // The tabs of the dialog\n  private CTabFolder m_wTabFolder;\n  private CTabItem m_wConfigTab;\n\n  private CTabItem m_editorTab;\n\n  NamedClusterWidgetImpl namedClusterWidget;\n\n  // Core config line\n  private Button m_coreConfigBut;\n  private TextVar m_coreConfigText;\n\n  // Default config line\n  private Button m_defaultConfigBut;\n  private TextVar m_defaultConfigText;\n\n  // Table name line\n  private Button m_mappedTableNamesBut;\n  private CCombo m_mappedTableNamesCombo;\n\n  // Mapping name line\n  private Button m_mappingNamesBut;\n  private CCombo m_mappingNamesCombo;\n\n  //Delete row key line\n  private Button m_deleteRowKeyBut;\n\n  /** Store the mapping information in the step's meta data */\n  private Button m_storeMappingInStepMetaData;\n\n  // Disable write to WAL check box\n  private Button m_disableWriteToWALBut;\n\n  // Write buffer size line\n  private TextVar m_writeBufferSizeText;\n\n  // mapping editor composite\n  private MappingEditor m_mappingEditor;\n  private NamedClusterService namedClusterService;\n  private RuntimeTestActionService runtimeTestActionService;\n  private RuntimeTester runtimeTester;\n  private NamedClusterServiceLocator namedClusterServiceLocator;\n\n  public HBaseOutputDialog( Shell parent, Object in, TransMeta tr, String name ) {\n\n    super( parent, (BaseStepMeta) in, tr, name );\n\n    m_currentMeta = (HBaseOutputMeta) in;\n    m_originalMeta = (HBaseOutputMeta) m_currentMeta.clone();\n    m_configurationMeta = (HBaseOutputMeta) m_currentMeta.clone();\n    namedClusterService = m_currentMeta.getNamedClusterService();\n    runtimeTestActionService = m_currentMeta.getRuntimeTestActionService();\n    runtimeTester = m_currentMeta.getRuntimeTester();\n    namedClusterServiceLocator = m_currentMeta.getNamedClusterServiceLocator();\n  }\n\n  public String 
open() {\n\n    Shell parent = getParent();\n    Display display = parent.getDisplay();\n\n    shell = new Shell( parent, SWT.DIALOG_TRIM | SWT.RESIZE | SWT.MIN | SWT.MAX );\n\n    props.setLook( shell );\n    setShellImage( shell, m_currentMeta );\n\n    // used to listen to a text field (m_wStepname)\n    ModifyListener lsMod = new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_currentMeta.setChanged();\n      }\n    };\n\n    changed = m_currentMeta.hasChanged();\n\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = Const.FORM_MARGIN;\n    formLayout.marginHeight = Const.FORM_MARGIN;\n\n    shell.setLayout( formLayout );\n    shell.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.Shell.Title\" ) );\n\n    int middle = props.getMiddlePct();\n    int margin = Const.MARGIN;\n\n    // Stepname line\n    m_stepnameLabel = new Label( shell, SWT.RIGHT );\n    m_stepnameLabel.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.StepName.Label\" ) );\n    props.setLook( m_stepnameLabel );\n\n    FormData fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( middle, -margin );\n    fd.top = new FormAttachment( 0, margin );\n    m_stepnameLabel.setLayoutData( fd );\n    m_stepnameText = new Text( shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    m_stepnameText.setText( stepname );\n    props.setLook( m_stepnameText );\n    m_stepnameText.addModifyListener( lsMod );\n\n    // format the text field\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( 0, margin );\n    fd.right = new FormAttachment( 100, 0 );\n    m_stepnameText.setLayoutData( fd );\n\n    m_wTabFolder = new CTabFolder( shell, SWT.BORDER );\n    props.setLook( m_wTabFolder, Props.WIDGET_STYLE_TAB );\n    m_wTabFolder.setSimple( false );\n\n    // Start of the config tab\n    m_wConfigTab = new CTabItem( 
m_wTabFolder, SWT.NONE );\n    m_wConfigTab.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.ConfigTab.TabTitle\" ) );\n\n    Composite wConfigComp = new Composite( m_wTabFolder, SWT.NONE );\n    props.setLook( wConfigComp );\n\n    FormLayout configLayout = new FormLayout();\n    configLayout.marginWidth = 3;\n    configLayout.marginHeight = 3;\n    wConfigComp.setLayout( configLayout );\n\n    Label namedClusterLab = new Label( wConfigComp, SWT.RIGHT );\n    namedClusterLab.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.NamedCluster.Label\" ) );\n    namedClusterLab.setToolTipText(\n      BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.NamedCluster.TipText\" ) );\n    props.setLook( namedClusterLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, 10 );\n    fd.right = new FormAttachment( middle, -margin );\n    namedClusterLab.setLayoutData( fd );\n\n    namedClusterWidget =\n      new NamedClusterWidgetImpl( wConfigComp, false, namedClusterService, runtimeTestActionService, runtimeTester, false );\n    namedClusterWidget.initiate();\n    props.setLook( namedClusterWidget );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    namedClusterWidget.setLayoutData( fd );\n\n    // core config line\n    Label coreConfigLab = new Label( wConfigComp, SWT.RIGHT );\n    coreConfigLab.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.CoreConfig.Label\" ) );\n    coreConfigLab\n      .setToolTipText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.CoreConfig.TipText\" ) );\n    props.setLook( coreConfigLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( namedClusterWidget, margin );\n    fd.right = new FormAttachment( middle, -margin );\n  
  coreConfigLab.setLayoutData( fd );\n\n    m_coreConfigBut = new Button( wConfigComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( m_coreConfigBut );\n    m_coreConfigBut.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"System.Button.Browse\" ) );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( namedClusterWidget, 0 );\n    m_coreConfigBut.setLayoutData( fd );\n\n    m_coreConfigBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        FileDialog dialog = new FileDialog( shell, SWT.OPEN );\n        String[] extensions = null;\n        String[] filterNames = null;\n\n        extensions = new String[ 2 ];\n        filterNames = new String[ 2 ];\n        extensions[ 0 ] = \"*.xml\";\n        filterNames[ 0 ] = BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.FileType.XML\" );\n        extensions[ 1 ] = \"*\";\n        filterNames[ 1 ] = BaseMessages.getString( HBaseOutputMeta.PKG, \"System.FileType.AllFiles\" );\n\n        dialog.setFilterExtensions( extensions );\n\n        if ( dialog.open() != null ) {\n          m_coreConfigText.setText( dialog.getFilterPath() + System.getProperty( \"file.separator\" )\n            + dialog.getFileName() );\n        }\n\n      }\n    } );\n\n    m_coreConfigText = new TextVar( transMeta, wConfigComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( m_coreConfigText );\n    m_coreConfigText.addModifyListener( lsMod );\n\n    // set the tool tip to the contents with any env variables expanded\n    m_coreConfigText.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_coreConfigText.setToolTipText( transMeta.environmentSubstitute( m_coreConfigText.getText() ) );\n      }\n    } );\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( namedClusterWidget, margin );\n    
fd.right = new FormAttachment( m_coreConfigBut, -margin );\n    m_coreConfigText.setLayoutData( fd );\n\n    // default config line\n    Label defaultConfigLab = new Label( wConfigComp, SWT.RIGHT );\n    defaultConfigLab.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.DefaultConfig.Label\" ) );\n    defaultConfigLab.setToolTipText( BaseMessages.getString( HBaseOutputMeta.PKG,\n      \"HBaseOutputDialog.DefaultConfig.TipText\" ) );\n    props.setLook( defaultConfigLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_coreConfigText, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    defaultConfigLab.setLayoutData( fd );\n\n    m_defaultConfigBut = new Button( wConfigComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( m_defaultConfigBut );\n    m_defaultConfigBut.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"System.Button.Browse\" ) );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( m_coreConfigText, 0 );\n    m_defaultConfigBut.setLayoutData( fd );\n\n    m_defaultConfigBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        FileDialog dialog = new FileDialog( shell, SWT.OPEN );\n        String[] extensions = null;\n        String[] filterNames = null;\n\n        extensions = new String[ 2 ];\n        filterNames = new String[ 2 ];\n        extensions[ 0 ] = \"*.xml\";\n        filterNames[ 0 ] = BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseInputDialog.FileType.XML\" );\n        extensions[ 1 ] = \"*\";\n        filterNames[ 1 ] = BaseMessages.getString( HBaseOutputMeta.PKG, \"System.FileType.AllFiles\" );\n\n        dialog.setFilterExtensions( extensions );\n\n        if ( dialog.open() != null ) {\n          m_defaultConfigText.setText( dialog.getFilterPath() + System.getProperty( \"file.separator\" )\n            
+ dialog.getFileName() );\n        }\n\n      }\n    } );\n\n    m_defaultConfigText = new TextVar( transMeta, wConfigComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( m_defaultConfigText );\n    m_defaultConfigText.addModifyListener( lsMod );\n\n    // set the tool tip to the contents with any env variables expanded\n    m_defaultConfigText.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_defaultConfigText.setToolTipText( transMeta.environmentSubstitute( m_defaultConfigText.getText() ) );\n      }\n    } );\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_coreConfigText, margin );\n    fd.right = new FormAttachment( m_defaultConfigBut, -margin );\n    m_defaultConfigText.setLayoutData( fd );\n\n    // table name\n    Label tableNameLab = new Label( wConfigComp, SWT.RIGHT );\n    tableNameLab.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.TableName.Label\" ) );\n    tableNameLab.setToolTipText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.TableName.TipText\" ) );\n    props.setLook( tableNameLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_defaultConfigText, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    tableNameLab.setLayoutData( fd );\n\n    m_mappedTableNamesBut = new Button( wConfigComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( m_mappedTableNamesBut );\n    m_mappedTableNamesBut.setText(\n      BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.TableName.Button\" ) );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( m_defaultConfigText, 0 );\n    m_mappedTableNamesBut.setLayoutData( fd );\n\n    m_mappedTableNamesCombo = new CCombo( wConfigComp, SWT.BORDER );\n    props.setLook( m_mappedTableNamesCombo );\n\n    
m_mappedTableNamesCombo.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_currentMeta.setChanged();\n        m_mappedTableNamesCombo.setToolTipText( transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() ) );\n      }\n    } );\n\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_defaultConfigText, margin );\n    fd.right = new FormAttachment( m_mappedTableNamesBut, -margin );\n    m_mappedTableNamesCombo.setLayoutData( fd );\n\n    m_mappedTableNamesBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        setupMappedTableNames();\n        if (  m_mappedTableNamesCombo.getItemCount() > 0 ) {\n          m_mappedTableNamesCombo.setListVisible( true );\n        }\n      }\n    } );\n\n    // mapping name\n    Label mappingNameLab = new Label( wConfigComp, SWT.RIGHT );\n    mappingNameLab.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.MappingName.Label\" ) );\n    mappingNameLab.setToolTipText( BaseMessages\n      .getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.MappingName.TipText\" ) );\n    props.setLook( mappingNameLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_mappedTableNamesCombo, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    mappingNameLab.setLayoutData( fd );\n\n    m_mappingNamesBut = new Button( wConfigComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( m_mappingNamesBut );\n    m_mappingNamesBut.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.MappingName.Button\" ) );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( m_mappedTableNamesCombo, 0 );\n    m_mappingNamesBut.setLayoutData( fd );\n\n    m_mappingNamesBut.addSelectionListener( new SelectionAdapter() {\n    
  @Override\n      public void widgetSelected( SelectionEvent e ) {\n        setupMappingNamesForTable( false );\n        if (  m_mappingNamesCombo.getItemCount() > 0 ) {\n          m_mappingNamesCombo.setListVisible( true );\n        }\n      }\n    } );\n\n    m_mappingNamesCombo = new CCombo( wConfigComp, SWT.BORDER );\n    props.setLook( m_mappingNamesCombo );\n\n    m_mappingNamesCombo.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_currentMeta.setChanged();\n\n        m_mappingNamesCombo.setToolTipText( transMeta.environmentSubstitute( m_mappingNamesCombo.getText() ) );\n        m_storeMappingInStepMetaData.setSelection( false );\n      }\n    } );\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_mappedTableNamesCombo, margin );\n    fd.right = new FormAttachment( m_mappingNamesBut, -margin );\n    m_mappingNamesCombo.setLayoutData( fd );\n\n\n    // store mapping in meta data\n    Label storeMapping = new Label( wConfigComp, SWT.RIGHT );\n    storeMapping.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.StoreMapping.Label\" ) );\n    storeMapping\n      .setToolTipText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.StoreMapping.TipText\" ) );\n    props.setLook( storeMapping );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_mappingNamesCombo, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    storeMapping.setLayoutData( fd );\n\n    m_storeMappingInStepMetaData = new Button( wConfigComp, SWT.CHECK );\n    props.setLook( m_storeMappingInStepMetaData );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_mappingNamesCombo, margin );\n    m_storeMappingInStepMetaData.setLayoutData( fd );\n\n\n    //delete rows by key option\n   
 Label deleteRows = new Label( wConfigComp, SWT.RIGHT );\n    deleteRows.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.DeleteRowKey.Label\" ) );\n    deleteRows.setToolTipText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.DeleteRowKey.TipText\" ) );\n    props.setLook( deleteRows );\n\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_storeMappingInStepMetaData, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    deleteRows.setLayoutData( fd );\n\n    m_deleteRowKeyBut = new Button( wConfigComp, SWT.CHECK );\n    props.setLook( m_deleteRowKeyBut );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_storeMappingInStepMetaData, margin );\n    m_deleteRowKeyBut.setLayoutData( fd );\n\n    m_deleteRowKeyBut.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent se ) {\n        walEnabled();\n      };\n    } );\n\n    // disable write to WAL\n    Label disableWALLab = new Label( wConfigComp, SWT.RIGHT );\n    disableWALLab.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.DisableWAL.Label\" ) );\n    disableWALLab\n      .setToolTipText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.DisableWAL.TipText\" ) );\n    props.setLook( disableWALLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_deleteRowKeyBut, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    disableWALLab.setLayoutData( fd );\n\n    m_disableWriteToWALBut = new Button( wConfigComp, SWT.CHECK | SWT.CENTER );\n    m_disableWriteToWALBut.setToolTipText( BaseMessages.getString( HBaseOutputMeta.PKG,\n      \"HBaseOutputDialog.DisableWAL.TipText\" ) );\n    props.setLook( m_disableWriteToWALBut );\n    fd = new FormData();\n    
fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_deleteRowKeyBut, margin );\n    // fd.right = new FormAttachment(middle, -margin);\n    m_disableWriteToWALBut.setLayoutData( fd );\n\n    // write buffer size line\n    Label writeBufferLab = new Label( wConfigComp, SWT.RIGHT );\n    writeBufferLab.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.WriteBufferSize.Label\" ) );\n    writeBufferLab.setToolTipText( BaseMessages.getString( HBaseOutputMeta.PKG,\n      \"HBaseOutputDialog.WriteBufferSize.TipText\" ) );\n    props.setLook( writeBufferLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_disableWriteToWALBut, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    writeBufferLab.setLayoutData( fd );\n\n    m_writeBufferSizeText = new TextVar( transMeta, wConfigComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( m_writeBufferSizeText );\n    m_writeBufferSizeText.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_writeBufferSizeText.setToolTipText( transMeta.environmentSubstitute( m_writeBufferSizeText.getText() ) );\n      }\n    } );\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_disableWriteToWALBut, margin );\n    fd.right = new FormAttachment( 100, 0 );\n    m_writeBufferSizeText.setLayoutData( fd );\n\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    fd.bottom = new FormAttachment( 100, 0 );\n    wConfigComp.setLayoutData( fd );\n\n    wConfigComp.layout();\n    m_wConfigTab.setControl( wConfigComp );\n\n    // mapping editor tab\n    m_editorTab = new CTabItem( m_wTabFolder, SWT.NONE );\n    m_editorTab.setText( BaseMessages.getString( HBaseOutputMeta.PKG, 
"HBaseOutputDialog.MappingEditorTab.TabTitle" ) );\n\n    m_mappingEditor =\n      new MappingEditor( shell, m_wTabFolder, this, this, SWT.FULL_SELECTION | SWT.MULTI, true, props, transMeta,\n        namedClusterService, runtimeTestActionService, runtimeTester, namedClusterServiceLocator );\n\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    fd.bottom = new FormAttachment( 100, -margin * 2 );\n    fd.right = new FormAttachment( 100, 0 );\n    m_mappingEditor.setLayoutData( fd );\n\n    m_mappingEditor.layout();\n    m_editorTab.setControl( m_mappingEditor );\n\n    // -----------------\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_stepnameText, margin );\n    fd.right = new FormAttachment( 100, 0 );\n    fd.bottom = new FormAttachment( 100, -50 );\n    m_wTabFolder.setLayoutData( fd );\n\n    // Buttons inherited from BaseStepDialog\n    wOK = new Button( shell, SWT.PUSH );\n    wOK.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"System.Button.OK\" ) );\n\n    wCancel = new Button( shell, SWT.PUSH );\n    wCancel.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"System.Button.Cancel\" ) );\n\n    setButtonPositions( new Button[] { wOK, wCancel }, margin, m_wTabFolder );\n\n    // Add listeners\n    lsCancel = new Listener() {\n      public void handleEvent( Event e ) {\n        cancel();\n      }\n    };\n\n    lsOK = new Listener() {\n      public void handleEvent( Event e ) {\n        ok();\n      }\n    };\n\n    wCancel.addListener( SWT.Selection, lsCancel );\n    wOK.addListener( SWT.Selection, lsOK );\n\n    lsDef = new SelectionAdapter() {\n      @Override\n      public void widgetDefaultSelected( SelectionEvent e ) {\n        ok();\n      }\n    };\n\n    
m_stepnameText.addSelectionListener( lsDef );\n\n    // Detect X or ALT-F4 or something that kills this window...\n    shell.addShellListener( new ShellAdapter() {\n      @Override\n      public void shellClosed( ShellEvent e ) {\n        cancel();\n      }\n    } );\n\n    m_wTabFolder.setSelection( 0 );\n    setSize();\n\n    getData();\n\n    ServiceStatus serviceStatus = m_currentMeta.getServiceStatus();\n    if ( !serviceStatus.isOk() ) {\n      new ErrorDialog( shell, Messages.getString( \"Dialog.Error\" ),\n        Messages.getString( \"HBaseOutput.Error.ServiceStatus\" ),\n        serviceStatus.getException() );\n    }\n\n    shell.open();\n    while ( !shell.isDisposed() ) {\n      if ( !display.readAndDispatch() ) {\n        display.sleep();\n      }\n    }\n\n    return stepname;\n  }\n\n  protected void cancel() {\n    stepname = null;\n    m_currentMeta.setChanged( changed );\n\n    dispose();\n  }\n\n  protected void ok() {\n    if ( Utils.isEmpty( m_stepnameText.getText() ) ) {\n      MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n      mb.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"System.StepJobEntryNameMissing.Title\" ) );\n      mb.setMessage( BaseMessages.getString( HBaseOutputMeta.PKG, \"System.JobEntryNameMissing.Msg\" ) );\n      mb.open();\n      return;\n    }\n    if ( namedClusterWidget.getSelectedNamedCluster() == null ) {\n      MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n      mb.setText( BaseMessages.getString( HBaseOutputMeta.PKG, \"Dialog.Error\" ) );\n      mb.setMessage( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.NamedClusterNotSelected.Msg\" ) );\n      mb.open();\n      return;\n    } else {\n      NamedCluster nc = namedClusterWidget.getSelectedNamedCluster();\n      if ( !nc.isUseGateway() && StringUtils.isEmpty( nc.getZooKeeperHost() ) ) {\n        MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n        mb.setText( 
BaseMessages.getString( HBaseOutputMeta.PKG, \"Dialog.Error\" ) );\n        mb.setMessage( BaseMessages.getString(\n          HBaseOutputMeta.PKG, \"HBaseOutputDialog.NamedClusterMissingValues.Msg\" ) );\n        mb.open();\n        return;\n      }\n    }\n\n    stepname = m_stepnameText.getText();\n\n    updateMetaConnectionDetails( m_currentMeta );\n\n    if ( m_storeMappingInStepMetaData.getSelection() ) {\n      if ( Utils.isEmpty( m_mappingNamesCombo.getText() ) ) {\n        List<String> problems = new ArrayList<String>();\n        Mapping toSet = m_mappingEditor.getMapping( false, problems, false );\n        if ( problems.size() > 0 ) {\n          StringBuffer p = new StringBuffer();\n          for ( String s : problems ) {\n            p.append( s ).append( \"\\n\" );\n          }\n          MessageDialog md =\n            new MessageDialog(\n              shell,\n              BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.Error.IssuesWithMapping.Title\" ),\n              null,\n              BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.Error.IssuesWithMapping\" ) + \":\\n\\n\"\n                + p.toString(),\n              MessageDialog.WARNING,\n              new String[] {\n                BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.Error.IssuesWithMapping.ButtonOK\" ),\n                BaseMessages.getString( HBaseOutputMeta.PKG,\n                  \"HBaseOutputDialog.Error.IssuesWithMapping.ButtonCancel\" ) }, 0 );\n          MessageDialog.setDefaultImage( GUIResource.getInstance().getImageSpoon() );\n          int idx = md.open() & 0xFF;\n          if ( idx == 1 || idx == 255 /* 255 = escape pressed */ ) {\n            return; // Cancel\n          }\n        }\n        m_currentMeta.setMapping( toSet );\n      } else {\n        HBaseConnection connection = null;\n        try {\n          connection = getHBaseConnection();\n          MappingAdmin admin = new MappingAdmin( connection );\n      
    Mapping current =\n            admin.getMapping( transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() ), transMeta\n              .environmentSubstitute( m_mappingNamesCombo.getText() ) );\n\n          m_currentMeta.setMapping( current );\n          m_currentMeta.setTargetMappingName( \"\" );\n        } catch ( Exception e ) {\n          logError( Messages.getString( \"HBaseOutputDialog.ErrorMessage.UnableToGetMapping\" )\n            + \" \\\"\"\n            + transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() ) + \",\"\n            + transMeta.environmentSubstitute( m_mappingNamesCombo.getText() ) + \"\\\"\", e );\n          new ErrorDialog( shell, Messages.getString( \"HBaseOutputDialog.ErrorMessage.UnableToGetMapping\" ), Messages\n            .getString( \"HBaseOutputDialog.ErrorMessage.UnableToGetMapping\" )\n            + \" \\\"\"\n            + transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() ) + \",\"\n            + transMeta.environmentSubstitute( m_mappingNamesCombo.getText() ) + \"\\\"\", e );\n        } finally {\n          try {\n            if ( connection != null ) {\n              connection.close();\n            }\n          } catch ( Exception e ) {\n            String msg = Messages.getString( \"HBaseInputDialog.ErrorMessage.FailedClosingHBaseConnection\" );\n            logError( msg, e );\n            new ErrorDialog( shell, msg, msg, e );\n          }\n        }\n      }\n    } else {\n      // we're going to use a mapping stored in HBase - null out any stored\n      // mapping\n      m_currentMeta.setMapping( null );\n    }\n\n    if ( !m_originalMeta.equals( m_currentMeta ) ) {\n      m_currentMeta.setChanged();\n      changed = m_currentMeta.hasChanged();\n    }\n\n    dispose();\n  }\n\n  protected void updateMetaConnectionDetails( HBaseOutputMeta meta ) {\n    if ( Utils.isEmpty( m_stepnameText.getText() ) ) {\n      return;\n    }\n\n    NamedCluster nc = 
namedClusterWidget.getSelectedNamedCluster();\n    if ( nc != null ) {\n      meta.setNamedCluster( nc );\n    }\n\n    meta.setCoreConfigURL( m_coreConfigText.getText() );\n    meta.setDefaulConfigURL( m_defaultConfigText.getText() );\n    meta.setTargetTableName( m_mappedTableNamesCombo.getText() );\n    meta.setTargetMappingName( m_mappingNamesCombo.getText() );\n\n    meta.setDeleteRowKey( m_deleteRowKeyBut.getSelection() );\n\n    meta.setDisableWriteToWAL( m_disableWriteToWALBut.getSelection() );\n    meta.setWriteBufferSize( m_writeBufferSizeText.getText() );\n\n  }\n\n  private void getData() {\n\n    namedClusterWidget.setSelectedNamedCluster( m_currentMeta.getNamedCluster().getName() );\n\n    if ( !Utils.isEmpty( m_currentMeta.getCoreConfigURL() ) ) {\n      m_coreConfigText.setText( m_currentMeta.getCoreConfigURL() );\n    }\n\n    if ( !Utils.isEmpty( m_currentMeta.getDefaultConfigURL() ) ) {\n      m_defaultConfigText.setText( m_currentMeta.getDefaultConfigURL() );\n    }\n\n    if ( !Utils.isEmpty( m_currentMeta.getTargetTableName() ) ) {\n      m_mappedTableNamesCombo.setText( m_currentMeta.getTargetTableName() );\n    }\n\n    if ( !Utils.isEmpty( m_currentMeta.getTargetMappingName() ) ) {\n      m_mappingNamesCombo.setText( m_currentMeta.getTargetMappingName() );\n    }\n\n    m_deleteRowKeyBut.setSelection( m_currentMeta.getDeleteRowKey() );\n\n    m_disableWriteToWALBut.setSelection( m_currentMeta.getDisableWriteToWAL() );\n\n    walEnabled();\n\n    if ( !Utils.isEmpty( m_currentMeta.getWriteBufferSize() ) ) {\n      m_writeBufferSizeText.setText( m_currentMeta.getWriteBufferSize() );\n    }\n\n    if ( Utils.isEmpty( m_currentMeta.getTargetMappingName() ) && m_currentMeta.getMapping() != null ) {\n      m_mappingEditor.setMapping( m_currentMeta.getMapping() );\n      m_storeMappingInStepMetaData.setSelection( true );\n    }\n\n\n  }\n\n  @Override public HBaseService getHBaseService() throws ClusterInitializationException {\n    NamedCluster 
nc = namedClusterWidget.getSelectedNamedCluster();\n    return namedClusterServiceLocator.getService( nc, HBaseService.class );\n  }\n\n  @Override public HBaseConnection getHBaseConnection() throws IOException, ClusterInitializationException {\n    /*\n     * URL coreConf = null; URL defaultConf = null;\n     */\n    String coreConf = \"\";\n    String defaultConf = \"\";\n    String zookeeperHosts = \"\";\n\n    if ( !Utils.isEmpty( m_coreConfigText.getText() ) ) {\n      coreConf = transMeta.environmentSubstitute( m_coreConfigText.getText() );\n    }\n\n    if ( !Utils.isEmpty( m_defaultConfigText.getText() ) ) {\n      defaultConf = transMeta.environmentSubstitute( m_defaultConfigText.getText() );\n    }\n\n    NamedCluster nc = namedClusterWidget.getSelectedNamedCluster();\n    if ( nc != null && !nc.isUseGateway() ) {\n      zookeeperHosts = transMeta.environmentSubstitute( nc.getZooKeeperHost() );\n    }\n\n    if ( Utils.isEmpty( zookeeperHosts ) && Utils.isEmpty( coreConf ) && Utils.isEmpty( defaultConf ) && ( nc != null\n      && !nc.isUseGateway() ) ) {\n      throw new IOException( BaseMessages.getString( HBaseOutputMeta.PKG,\n        \"MappingDialog.Error.Message.CantConnectNoConnectionDetailsProvided\" ) );\n    }\n\n    return getHBaseService().getHBaseConnection( transMeta, coreConf, defaultConf, null );\n  }\n\n  private void setupMappedTableNames() {\n    HBaseConnection connection = null;\n    Cursor busy = new Cursor( shell.getDisplay(), SWT.CURSOR_WAIT );\n    try {\n      shell.setCursor( busy );\n      connection = getHBaseConnection();\n      MappingAdmin admin = new MappingAdmin( connection );\n      Set<String> tableNames = admin.getMappedTables( parseNamespaceFromTableName( null ) );\n\n      m_mappedTableNamesCombo.removeAll();\n      for ( String s : tableNames ) {\n        m_mappedTableNamesCombo.add( s );\n      }\n\n    } catch ( Exception ex ) {\n      logError( BaseMessages.getString( HBaseOutputMeta.PKG, 
\"HBaseOutputDialog.ErrorMessage.UnableToConnect\" ), ex );\n      new ErrorDialog( shell, BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutputDialog.ErrorMessage.\"\n        + \"UnableToConnect\" ), BaseMessages.getString( HBaseOutputMeta.PKG,\n        \"HBaseOutputDialog.ErrorMessage.UnableToConnect\" ), ex );\n    } finally {\n      shell.setCursor( null );\n      busy.dispose();\n      try {\n        if ( connection != null ) {\n          connection.close();\n        }\n      } catch ( Exception e ) {\n        String msg = BaseMessages.getString(\n          HBaseOutputMeta.PKG, \"HBaseInputDialog.ErrorMessage.FailedClosingHBaseConnection\" );\n        logError( msg, e );\n        new ErrorDialog( shell, msg, msg, e );\n      }\n    }\n  }\n\n  private void setupMappingNamesForTable( boolean quiet ) {\n    m_mappingNamesCombo.removeAll();\n\n    if ( !Utils.isEmpty( m_mappedTableNamesCombo.getText() ) ) {\n      HBaseConnection connection = null;\n      try {\n        connection = getHBaseConnection();\n        MappingAdmin admin = new MappingAdmin( connection );\n\n        String mappedTableName =\n          MappingAdmin.getTableNameFromVariable( m_currentMeta, m_mappedTableNamesCombo.getText().trim() );\n\n        List<String> mappingNames = admin.getMappingNames( mappedTableName );\n\n        for ( String n : mappingNames ) {\n          m_mappingNamesCombo.add( n );\n        }\n      } catch ( Exception ex ) {\n        if ( !quiet ) {\n          logError(\n            BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseInputDialog.ErrorMessage.UnableToConnect\" ), ex );\n          new ErrorDialog( shell, BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseInputDialog.ErrorMessage.\"\n            + \"UnableToConnect\" ), BaseMessages.getString( HBaseOutputMeta.PKG,\n            \"HBaseInputDialog.ErrorMessage.UnableToConnect\" ), ex );\n        }\n      } finally {\n        try {\n          if ( connection != null ) {\n            connection.close();\n   
       }\n        } catch ( Exception e ) {\n          if ( !quiet ) {\n            String msg = BaseMessages.getString(\n              HBaseOutputMeta.PKG, \"HBaseInputDialog.ErrorMessage.FailedClosingHBaseConnection\" );\n            logError( msg, e );\n            new ErrorDialog( shell, msg, msg, e );\n          }\n        }\n      }\n    }\n  }\n\n  public RowMetaInterface getIncomingFields() {\n    StepMeta stepMeta = transMeta.findStep( stepname );\n    RowMetaInterface result = null;\n\n    try {\n      if ( stepMeta != null ) {\n        result = transMeta.getPrevStepFields( stepMeta );\n      }\n    } catch ( KettleException ex ) {\n      // quietly ignore\n    }\n\n    return result;\n  }\n\n  public String getCurrentConfiguration() {\n    updateMetaConnectionDetails( m_configurationMeta );\n    return m_configurationMeta.getXML();\n  }\n\n  public void walEnabled() {\n    m_disableWriteToWALBut.setEnabled( !m_deleteRowKeyBut.getSelection() );\n  }\n\n  private String parseNamespaceFromTableName( String defaultNamespaceIfNoneSpecified ) {\n    return HbaseUtil.parseNamespaceFromTableName( transMeta.environmentSubstitute( m_mappedTableNamesCombo.getText() ),\n      defaultNamespaceIfNoneSpecified );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/output/HBaseOutputMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.output;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.big.data.kettle.plugins.hbase.HbaseUtil;\nimport org.pentaho.big.data.kettle.plugins.hbase.MappingDefinition;\nimport org.pentaho.big.data.kettle.plugins.hbase.NamedClusterLoadSaveUtil;\nimport org.pentaho.big.data.kettle.plugins.hbase.ServiceStatus;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.MappingUtils;\nimport org.pentaho.big.data.kettle.plugins.hbase.meta.AELHBaseMappingImpl;\nimport org.pentaho.di.core.CheckResult;\nimport org.pentaho.di.core.CheckResultInterface;\nimport org.pentaho.di.core.annotations.Step;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.injection.InjectionDeep;\nimport org.pentaho.di.core.injection.InjectionSupported;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.metastore.MetaStoreConst;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport 
org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.runtime.test.action.impl.RuntimeTestActionServiceImpl;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\nimport org.w3c.dom.Node;\n\nimport java.util.Collection;\nimport java.util.List;\n\n/**\n * Class providing an output step for writing data to an HBase table according to meta data column/type mapping info\n * stored in a separate HBase table called \"pentaho_mappings\". 
See org.pentaho.hbase.mapping.Mapping for details on the\n * meta data format.\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n */\n@Step( id = \"HBaseOutput\", image = \"HBO.svg\", name = \"HBaseOutput.Name\", description = \"HBaseOutput.Description\",\n  categoryDescription = \"i18n:org.pentaho.di.trans.step:BaseStep.Category.BigData\",\n  documentationUrl = \"pdi-transformation-steps-reference-overview/hbase-output\",\n  i18nPackageName = \"org.pentaho.di.trans.steps.hbaseoutput\" )\n@InjectionSupported( localizationPrefix = \"HBaseOutput.Injection.\", groups = { \"MAPPING\" } )\npublic class HBaseOutputMeta extends BaseStepMeta implements StepMetaInterface {\n\n  protected static Class<?> PKG = HBaseOutputMeta.class;\n\n  /**\n   * path/url to hbase-site.xml\n   */\n  @Injection( name = \"HBASE_SITE_XML_URL\" )\n  protected String m_coreConfigURL;\n\n  /**\n   * path/url to hbase-default.xml\n   */\n  @Injection( name = \"HBASE_DEFAULT_XML_URL\" )\n  protected String m_defaultConfigURL;\n\n  /**\n   * the name of the HBase table to write to\n   */\n  @Injection( name = \"TARGET_TABLE_NAME\" )\n  protected String m_targetTableName;\n\n  /**\n   * the name of the mapping for columns/types for the target table\n   */\n  @Injection( name = \"TARGET_MAPPING_NAME\" )\n  protected String m_targetMappingName;\n\n  /**\n   * if true then the incoming column with row key from the mapping will be deleted\n   */\n  @Injection( name = \"DELETE_ROW_KEY\" )\n  protected boolean m_deleteRowKey;\n\n  /**\n   * if true then the WAL will not be written to\n   */\n  @Injection( name = \"DISABLE_WRITE_TO_WAL\" )\n  protected boolean m_disableWriteToWAL;\n\n  /**\n   * The size of the write buffer in bytes (empty - default from hbase-default.xml is used)\n   */\n  @Injection( name = \"WRITE_BUFFER_SIZE\" )\n  protected String m_writeBufferSize;\n\n  /**\n   * The mapping to use if we are not loading one dynamically at runtime from HBase itself\n   */\n  protected Mapping 
m_mapping;\n\n  @InjectionDeep\n  protected MappingDefinition mappingDefinition;\n\n  private NamedCluster namedCluster;\n\n  private final NamedClusterLoadSaveUtil namedClusterLoadSaveUtil;\n  private final NamedClusterService namedClusterService;\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n  private MetastoreLocator metaStoreService;\n  private ServiceStatus serviceStatus = ServiceStatus.OK;\n\n  public NamedClusterService getNamedClusterService() {\n    return namedClusterService;\n  }\n\n  public NamedClusterServiceLocator getNamedClusterServiceLocator() {\n    return namedClusterServiceLocator;\n  }\n\n  public RuntimeTestActionService getRuntimeTestActionService() {\n    return runtimeTestActionService;\n  }\n\n  public RuntimeTester getRuntimeTester() {\n    return runtimeTester;\n  }\n\n  public HBaseOutputMeta() {\n    this( NamedClusterManager.getInstance(), BigDataServicesHelper.getNamedClusterServiceLocator(),\n      RuntimeTestActionServiceImpl.getInstance(), RuntimeTesterImpl.getInstance(), new NamedClusterLoadSaveUtil(), null );\n  }\n\n  public HBaseOutputMeta( NamedClusterService namedClusterService,\n                          NamedClusterServiceLocator namedClusterServiceLocator,\n                          RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester ) {\n    this( namedClusterService, namedClusterServiceLocator,\n      runtimeTestActionService, runtimeTester, new NamedClusterLoadSaveUtil(), null );\n  }\n\n  protected synchronized MetastoreLocator getMetastoreService() {\n    if ( this.metaStoreService == null ) {\n      try {\n        Collection<MetastoreLocator> metastoreLocators = PluginServiceLoader.loadServices( MetastoreLocator.class );\n        this.metaStoreService = metastoreLocators.stream().findFirst().get();\n      } catch ( Exception e ) {\n        
getLog().logError( \"Error getting MetastoreLocator\", e );\n      }\n    }\n    return this.metaStoreService;\n  }\n\n  @VisibleForTesting\n  HBaseOutputMeta( NamedClusterService namedClusterService,\n                             NamedClusterServiceLocator namedClusterServiceLocator,\n                             RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester,\n                             NamedClusterLoadSaveUtil namedClusterLoadSaveUtil, MetastoreLocator metaStore ) {\n    this.namedClusterService = namedClusterService;\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n    this.runtimeTestActionService = runtimeTestActionService;\n\n    this.runtimeTester = runtimeTester;\n    this.namedClusterLoadSaveUtil = namedClusterLoadSaveUtil;\n    this.metaStoreService = metaStore;\n  }\n\n  /**\n   * Set the mapping to use for decoding the row\n   *\n   * @param m the mapping to use\n   */\n  public void setMapping( Mapping m ) {\n    m_mapping = m;\n  }\n\n  /**\n   * Get the mapping to use for decoding the row\n   *\n   * @return the mapping to use\n   */\n  public Mapping getMapping() {\n    return m_mapping;\n  }\n\n  public void setCoreConfigURL( String coreConfig ) {\n    m_coreConfigURL = coreConfig;\n  }\n\n  public String getCoreConfigURL() {\n    return m_coreConfigURL;\n  }\n\n  public void setDefaulConfigURL( String defaultConfig ) {\n    m_defaultConfigURL = defaultConfig;\n  }\n\n  public String getDefaultConfigURL() {\n    return m_defaultConfigURL;\n  }\n\n  public void setTargetTableName( String targetTable ) {\n    m_targetTableName = targetTable;\n  }\n\n  public String getTargetTableName() {\n    return m_targetTableName;\n  }\n\n  public void setTargetMappingName( String targetMapping ) {\n    m_targetMappingName = targetMapping;\n  }\n\n  public String getTargetMappingName() {\n    return m_targetMappingName;\n  }\n\n  public boolean getDeleteRowKey() {\n    return m_deleteRowKey;\n  }\n\n  public 
void setDeleteRowKey( boolean m_deleteRowKey ) {\n    this.m_deleteRowKey = m_deleteRowKey;\n  }\n\n  public void setDisableWriteToWAL( boolean d ) {\n    m_disableWriteToWAL = d;\n  }\n\n  public boolean getDisableWriteToWAL() {\n    return m_disableWriteToWAL;\n  }\n\n  public void setWriteBufferSize( String size ) {\n    m_writeBufferSize = size;\n  }\n\n  public String getWriteBufferSize() {\n    return m_writeBufferSize;\n  }\n\n  void applyInjection( VariableSpace space ) throws KettleException {\n    if ( namedCluster == null ) {\n      throw new KettleException( \"Named cluster was not initialized!\" );\n    }\n    if ( namedCluster.getShimIdentifier() == null && getParentStepMeta() != null\n      && getParentStepMeta().getParentTransMeta() != null ) {\n      // If here we have a template for the named cluster, not the real thing.  This is likely due to not having\n      // the namedCluster present in the local metastore.  Time to load it from the embedded Metastore which is only\n      // present at runtime\n      NamedCluster nc = namedClusterService.getNamedClusterByName( namedCluster.getName(),\n        getMetastoreService().getExplicitMetastore( getParentStepMeta().getParentTransMeta().getEmbeddedMetastoreProviderKey() ) );\n      if ( nc != null && nc.getShimIdentifier() != null ) {\n        namedCluster = nc; //Overwrite with the real one\n      }\n    }\n    try {\n      if ( mappingDefinition == null ) {\n        ServiceStatus serviceStatus = this.getServiceStatus();\n        if ( !serviceStatus.isOk() ) {\n          throw serviceStatus.getException();\n        }\n        return;\n      }\n      HBaseService hBaseService = getService();\n      Mapping tempMapping = null;\n      tempMapping = getMapping( mappingDefinition, hBaseService );\n      setMapping( tempMapping );\n    } catch ( Exception e ) {\n      throw new KettleException( e );\n    }\n  }\n\n  @VisibleForTesting\n  Mapping getMapping( MappingDefinition mappingDefinition, HBaseService 
hBaseService ) throws KettleException {\n    return MappingUtils.getMapping( mappingDefinition, hBaseService );\n  }\n\n  public void check( List<CheckResultInterface> remarks, TransMeta transMeta, StepMeta stepMeta, RowMetaInterface prev,\n                     String[] input, String[] output, RowMetaInterface info ) {\n\n    CheckResult cr;\n\n    if ( ( prev == null ) || ( prev.size() == 0 ) ) {\n      cr = new CheckResult(\n        CheckResult.TYPE_RESULT_WARNING, \"Not receiving any fields from previous steps!\", stepMeta );\n      remarks.add( cr );\n    } else {\n      cr =\n        new CheckResult( CheckResult.TYPE_RESULT_OK, \"Step is connected to previous one, receiving \" + prev.size()\n          + \" fields\", stepMeta );\n      remarks.add( cr );\n    }\n\n    // See if we have input streams leading to this step!\n    if ( input.length > 0 ) {\n      cr = new CheckResult( CheckResult.TYPE_RESULT_OK, \"Step is receiving info from other steps.\", stepMeta );\n      remarks.add( cr );\n    } else {\n      cr = new CheckResult( CheckResult.TYPE_RESULT_ERROR, \"No input received from other steps!\", stepMeta );\n      remarks.add( cr );\n    }\n  }\n\n  @Override\n  public String getXML() {\n    try {\n      applyInjection( new Variables() );\n    } catch ( KettleException e ) {\n      logError( \"Error occurred while injecting metadata. 
Transformation meta could be incorrect!\", e );\n    }\n    StringBuilder retval = new StringBuilder();\n    namedClusterLoadSaveUtil\n      .getXml( retval, namedClusterService, namedCluster, MetaStoreConst.getDefaultMetastore(),\n        getLog() );\n\n    if ( parentStepMeta != null && parentStepMeta.getParentTransMeta() != null ) {\n      parentStepMeta.getParentTransMeta().getNamedClusterEmbedManager().addClusterToMeta( namedCluster.getName() );\n    }\n\n    if ( !Utils.isEmpty( m_coreConfigURL ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"core_config_url\", m_coreConfigURL ) );\n    }\n    if ( !Utils.isEmpty( m_defaultConfigURL ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"default_config_url\", m_defaultConfigURL ) );\n    }\n    if ( !Utils.isEmpty( m_targetTableName ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"target_table_name\", m_targetTableName ) );\n    }\n    if ( !Utils.isEmpty( m_targetMappingName ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"target_mapping_name\", m_targetMappingName ) );\n    }\n\n    retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"delete_rows_by_key\", m_deleteRowKey ) );\n\n    if ( !Utils.isEmpty( m_writeBufferSize ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"write_buffer_size\", m_writeBufferSize ) );\n    }\n    retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"disable_wal\", m_disableWriteToWAL ) );\n\n\n    if ( m_mapping != null ) {\n      retval.append( m_mapping.getXML() );\n    }\n\n    return retval.toString();\n  }\n\n  public StepInterface getStep( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr,\n                                TransMeta transMeta, Trans trans ) {\n    return new HBaseOutput( stepMeta, stepDataInterface, copyNr, transMeta, trans, namedClusterServiceLocator );\n  }\n\n  public StepDataInterface 
getStepData() {\n    return new HBaseOutputData();\n  }\n\n  @Override public void loadXML( Node stepnode, List<DatabaseMeta> databases, IMetaStore metaStore )\n    throws KettleXMLException {\n\n    if ( metaStore == null ) {\n      metaStore = getMetastoreService().getMetastore();\n    }\n\n    this.namedCluster =\n      namedClusterLoadSaveUtil.loadClusterConfig( namedClusterService, null, null, metaStore, stepnode, getLog() );\n\n    m_coreConfigURL = XMLHandler.getTagValue( stepnode, \"core_config_url\" );\n    m_defaultConfigURL = XMLHandler.getTagValue( stepnode, \"default_config_url\" );\n    m_targetTableName =\n      HbaseUtil.expandLegacyTableNameOnLoad( XMLHandler.getTagValue( stepnode, \"target_table_name\" ) );\n    m_targetMappingName = XMLHandler.getTagValue( stepnode, \"target_mapping_name\" );\n    String deleteKeys = XMLHandler.getTagValue( stepnode, \"delete_rows_by_key\" );\n    if ( !Utils.isEmpty( deleteKeys ) ) {\n      m_deleteRowKey = deleteKeys.equalsIgnoreCase( \"Y\" );\n    }\n    m_writeBufferSize = XMLHandler.getTagValue( stepnode, \"write_buffer_size\" );\n    String disableWAL = XMLHandler.getTagValue( stepnode, \"disable_wal\" );\n    // Guard against a missing tag (older KTRs) to avoid a NullPointerException\n    if ( !Utils.isEmpty( disableWAL ) ) {\n      m_disableWriteToWAL = disableWAL.equalsIgnoreCase( \"Y\" );\n    }\n\n    Mapping tempMapping = null;\n    try {\n      tempMapping =\n        getService().getMappingFactory().createMapping();\n    } catch ( Exception e ) {\n      getLog().logError( e.getMessage() );\n    }\n\n    /*\n     * Assume that null mappings indicate\n     * a missing HBaseService.  
Try loading\n     * from KTR\n     */\n    if ( tempMapping == null ) {\n      tempMapping = new AELHBaseMappingImpl();\n    }\n\n    // tempMapping is guaranteed non-null here by the fallback above\n    if ( tempMapping.loadXML( stepnode ) ) {\n      m_mapping = tempMapping;\n    } else {\n      m_mapping = null;\n    }\n  }\n\n  @Override public void readRep( Repository rep, IMetaStore metaStore, ObjectId id_step, List<DatabaseMeta> databases )\n    throws KettleException {\n\n    if ( metaStore == null ) {\n      metaStore = getMetastoreService().getMetastore();\n    }\n\n    this.namedCluster =\n      namedClusterLoadSaveUtil.loadClusterConfig( namedClusterService, id_step, rep, metaStore, null, getLog() );\n    m_coreConfigURL = rep.getStepAttributeString( id_step, 0, \"core_config_url\" );\n    m_defaultConfigURL = rep.getStepAttributeString( id_step, 0, \"default_config_url\" );\n    m_targetTableName =\n      HbaseUtil.expandLegacyTableNameOnLoad( rep.getStepAttributeString( id_step, 0, \"target_table_name\" ) );\n    m_targetMappingName = rep.getStepAttributeString( id_step, 0, \"target_mapping_name\" );\n    m_deleteRowKey = rep.getStepAttributeBoolean( id_step, 0, \"delete_rows_by_key\" );\n    m_writeBufferSize = rep.getStepAttributeString( id_step, 0, \"write_buffer_size\" );\n    m_disableWriteToWAL = rep.getStepAttributeBoolean( id_step, 0, \"disable_wal\" );\n\n    Mapping tempMapping = null;\n    try {\n      tempMapping =\n        getService().getMappingFactory().createMapping();\n    } catch ( Exception e ) {\n      getLog().logError( e.getMessage() );\n    }\n    if ( tempMapping != null && tempMapping.readRep( rep, id_step ) ) {\n      m_mapping = tempMapping;\n    } else {\n      m_mapping = null;\n    }\n  }\n\n  @Override public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_transformation, ObjectId id_step )\n    throws KettleException {\n\n    if ( metaStore == null ) {\n      metaStore = getMetastoreService().getMetastore();\n    }\n\n    namedClusterLoadSaveUtil\n    
  .saveRep( rep, metaStore, id_transformation, id_step, namedClusterService, namedCluster, getLog() );\n\n    if ( !Utils.isEmpty( m_coreConfigURL ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"core_config_url\", m_coreConfigURL );\n    }\n    if ( !Utils.isEmpty( m_defaultConfigURL ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"default_config_url\", m_defaultConfigURL );\n    }\n    if ( !Utils.isEmpty( m_targetTableName ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"target_table_name\", m_targetTableName );\n    }\n    if ( !Utils.isEmpty( m_targetMappingName ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"target_mapping_name\", m_targetMappingName );\n    }\n\n    rep.saveStepAttribute( id_transformation, id_step, 0, \"delete_rows_by_key\", m_deleteRowKey );\n\n    if ( !Utils.isEmpty( m_writeBufferSize ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"write_buffer_size\", m_writeBufferSize );\n    }\n    rep.saveStepAttribute( id_transformation, id_step, 0, \"disable_wal\", m_disableWriteToWAL );\n\n    if ( m_mapping != null ) {\n      m_mapping.saveRep( rep, id_transformation, id_step );\n    }\n  }\n\n  public void setDefault() {\n    m_coreConfigURL = null;\n    m_defaultConfigURL = null;\n    m_targetTableName = null;\n    m_targetMappingName = null;\n    m_deleteRowKey = false;\n    m_disableWriteToWAL = false;\n    m_writeBufferSize = null;\n    namedCluster = namedClusterService.getClusterTemplate();\n  }\n\n  @Override\n  public boolean supportsErrorHandling() {\n    return true;\n  }\n\n  public NamedCluster getNamedCluster() {\n    return namedCluster;\n  }\n\n  public void setNamedCluster( NamedCluster namedCluster ) {\n    this.namedCluster = namedCluster;\n  }\n\n  public MappingDefinition getMappingDefinition() {\n    return mappingDefinition;\n  }\n\n  public void setMappingDefinition( MappingDefinition mappingDefinition ) {\n    
this.mappingDefinition = mappingDefinition;\n  }\n\n  protected HBaseService getService() throws ClusterInitializationException {\n    HBaseService service = null;\n    try {\n      String embeddedMetastoreProviderKey =\n        parentStepMeta == null || parentStepMeta.getParentTransMeta() == null ? null\n          : parentStepMeta.getParentTransMeta().getEmbeddedMetastoreProviderKey();\n      service = namedClusterServiceLocator.getService( this.namedCluster, HBaseService.class,\n        embeddedMetastoreProviderKey );\n      this.serviceStatus = ServiceStatus.OK;\n    } catch ( Exception e ) {\n      this.serviceStatus = ServiceStatus.notOk( e );\n      logError( Messages.getString( \"HBaseOutput.Error.ServiceStatus\" ) );\n      throw e;\n    }\n    return service;\n  }\n\n  public ServiceStatus getServiceStatus() {\n    if ( this.serviceStatus == null ) {\n      this.serviceStatus = ServiceStatus.OK;\n    }\n    return this.serviceStatus;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/output/KettleRowToHBaseTuple.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.output;\n\nimport java.util.Map;\n\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.MappingUtils;\nimport org.pentaho.hadoop.shim.api.hbase.ByteConversionUtil;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping.KeyType;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBasePut;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBaseTableWriteOperationManager;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.i18n.BaseMessages;\n\npublic class KettleRowToHBaseTuple {\n\n  private int keyIndex = -1;\n  private ValueMetaInterface keyInMeta;\n  private KeyType keyType;\n\n  private int familyIndex = -1;\n  private ValueMetaInterface familyInMeta;\n\n  private int columnIndex = -1;\n  private ValueMetaInterface columnInMeta;\n\n  private int valueIndex = -1;\n  private ValueMetaInterface valueInMeta;\n  private HBaseValueMetaInterface valueMeta;\n\n  private int visibilityIndex = -1;\n  private ValueMetaInterface visibilityInMeta;\n  private HBaseValueMetaInterface visibilityMeta;\n\n  /**\n   * Creates a conversion class that converts an incoming row object with values for the various Tuple fields <KEY,\n   * Family, Column, Value> into an HBasePut\n   *\n   * @param inputRowMeta\n   *          The row meta of the incoming row structure\n  
 * @param tupleMapping\n   *          The mapping in use for the step\n   * @param columnMapping\n   *          The non-KEY columns in the mapping mapped by column alias\n   * @throws KettleException\n   */\n  public KettleRowToHBaseTuple( RowMetaInterface inputRowMeta, Mapping tupleMapping,\n      Map<String, HBaseValueMetaInterface> columnMapping ) throws KettleException {\n\n    String keyName = tupleMapping.getKeyName();\n    keyIndex = inputRowMeta.indexOfValue( keyName );\n    if ( keyIndex < 0 ) {\n      // No Key Column\n      throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.Error.NoKeyColumn\" ) );\n    }\n    keyInMeta = inputRowMeta.getValueMeta( keyIndex );\n    keyType = tupleMapping.getKeyType();\n\n    familyIndex = inputRowMeta.indexOfValue( Mapping.TupleMapping.FAMILY.toString() );\n    if ( familyIndex < 0 ) {\n      throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.Error.NoFamilyColumn\" ) );\n    }\n    familyInMeta = inputRowMeta.getValueMeta( familyIndex );\n\n    columnIndex = inputRowMeta.indexOfValue( Mapping.TupleMapping.COLUMN.toString() );\n    if ( columnIndex < 0 ) {\n      throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.Error.NoColumnColumn\" ) );\n    }\n    columnInMeta = inputRowMeta.getValueMeta( columnIndex );\n\n    // NOTE: TIMESTAMPS cannot be written via HBase Put, so the column is useless for writing\n\n    valueIndex = inputRowMeta.indexOfValue( Mapping.TupleMapping.VALUE.toString() );\n    if ( valueIndex < 0 ) {\n      throw new KettleException( BaseMessages.getString( HBaseOutputMeta.PKG, \"HBaseOutput.Error.NoValueColumn\" ) );\n    }\n    valueInMeta = inputRowMeta.getValueMeta( valueIndex );\n    valueMeta = columnMapping.get( valueInMeta.getName() );\n\n    // NOTE: The Visibility Index is optional\n    visibilityIndex = inputRowMeta.indexOfValue( MappingUtils.TUPLE_MAPPING_VISIBILITY );\n    if ( 
visibilityIndex >= 0 ) {\n      visibilityInMeta = inputRowMeta.getValueMeta( visibilityIndex );\n      visibilityMeta = columnMapping.get( visibilityInMeta.getName() );\n      if ( visibilityMeta == null ) {\n        // There is no column mapping for Visibility, so disable it by removing the index in the RowMeta\n        visibilityInMeta = null;\n        visibilityIndex = -1;\n      }\n    }\n\n  }\n\n  /**\n   * Creates an HBasePut representing the tuple by extracting data from a row\n   *\n   * @param hBaseTableWriteOperationManager\n   *          HBase write manager\n   * @param bu\n   *          The Byte Conversion utility (Required for key conversion)\n   * @param row\n   *          Object containing row data\n   * @param writeToWAL\n   *          Should data be written to WAL?\n   * @return An HBase Put for the tuple\n   * @throws Exception\n   */\n  public HBasePut createTuplePut( HBaseTableWriteOperationManager hBaseTableWriteOperationManager,\n      ByteConversionUtil bu, Object[] row, boolean writeToWAL ) throws Exception {\n\n    if ( keyInMeta.isNull( row[keyIndex] ) ) {\n      throw new FieldException( Mapping.TupleMapping.KEY );\n    }\n    if ( familyInMeta.isNull( row[familyIndex] ) ) {\n      throw new FieldException( Mapping.TupleMapping.FAMILY );\n    }\n    if ( columnInMeta.isNull( row[columnIndex] ) ) {\n      throw new FieldException( Mapping.TupleMapping.COLUMN );\n    }\n    if ( valueInMeta.isNull( row[valueIndex] ) ) {\n      throw new FieldException( Mapping.TupleMapping.VALUE );\n    }\n\n    byte[] encodedKey = bu.encodeKeyValue( row[keyIndex], keyInMeta, keyType );\n\n    HBasePut put = hBaseTableWriteOperationManager.createPut( encodedKey );\n\n    // Note: Families must always be string with the implementation of HBasePut\n    String columnFamily = familyInMeta.getString( row[familyIndex] );\n\n    boolean binaryColName = false;\n    String columnName = columnInMeta.getString( row[columnIndex] );\n    if ( columnName.startsWith( 
\"@@@binary@@@\" ) ) {\n      // assume hex encoded column name\n      columnName = columnName.replace( \"@@@binary@@@\", \"\" );\n      binaryColName = true;\n    }\n\n    byte[] encodedValue = valueMeta.encodeColumnValue( row[valueIndex], valueInMeta );\n    put.addColumn( columnFamily, columnName, binaryColName, encodedValue );\n\n    if ( visibilityIndex >= 0 && !visibilityInMeta.isNull( row[visibilityIndex] ) ) {\n      byte[] encodedVisibility = visibilityMeta.encodeColumnValue( row[visibilityIndex], visibilityInMeta );\n      put.addColumn( columnFamily, MappingUtils.TUPLE_MAPPING_VISIBILITY, false, encodedVisibility );\n    }\n\n    put.setWriteToWAL( writeToWAL );\n    return put;\n  }\n\n  public static class FieldException extends Exception {\n\n    public Mapping.TupleMapping field;\n\n    public FieldException( Mapping.TupleMapping field ) {\n      super();\n      this.field = field;\n    }\n\n    public String getFieldString() {\n      return field.toString();\n    }\n\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/output/Messages.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.output;\n\nimport org.pentaho.di.i18n.BaseMessages;\n\npublic class Messages {\n  public static final Class<Messages> PKG = Messages.class;\n\n  public static String getString( String key ) {\n    return BaseMessages.getString( PKG, key );\n  }\n\n  public static String getString( String key, String param1 ) {\n    return BaseMessages.getString( PKG, key, param1 );\n  }\n\n  public static String getString( String key, String param1, String param2 ) {\n    return BaseMessages.getString( PKG, key, param1, param2 );\n  }\n\n  public static String getString( String key, String param1, String param2, String param3 ) {\n    return BaseMessages.getString( PKG, key, param1, param2, param3 );\n  }\n\n  public static String getString( String key, String param1, String param2, String param3, String param4 ) {\n    return BaseMessages.getString( PKG, key, param1, param2, param3, param4 );\n  }\n\n  public static String getString(\n      String key, String param1, String param2, String param3, String param4, String param5 ) {\n    return BaseMessages.getString( PKG, key, param1, param2, param3, param4, param5 );\n  }\n\n  public static String getString( String key, String param1, String param2, String param3, String param4,\n      String param5, String param6 ) {\n    return BaseMessages.getString( PKG, key, param1, param2, param3, param4, param5, param6 );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/rowdecoder/HBaseRowDecoder.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.rowdecoder;\n\nimport java.lang.reflect.InvocationTargetException;\nimport java.util.List;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.HBaseRowToKettleTuple;\nimport org.pentaho.hadoop.shim.api.hbase.ByteConversionUtil;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowDataUtil;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStep;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\n\n/**\n * Step for decoding incoming HBase row objects using a supplied mapping. 
Can be used in a Hadoop MR job for processing\n * tables split by org.pentaho.hbase.mapred.PentahoTableInputFormat (see the javadoc for this class for properties that\n * can be set in the job to control the query)\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n */\npublic class HBaseRowDecoder extends BaseStep implements StepInterface {\n  public static final String HBASE_ROW_DECODER_ERROR_NOT_RESULT = \"HBaseRowDecoder.Error.NotResult\";\n  public static final String HBASE_ROW_DECODER_ERROR_NOT_IMMUTABLE_BYTES_WRITABLE =\n    \"HBaseRowDecoder.Error.NotImmutableBytesWritable\";\n  private static Class<?> hBaseRowDecoderMetaClass = HBaseRowDecoderMeta.class;\n\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n\n  protected HBaseRowDecoderMeta hBaseRowDecoderMeta;\n  protected HBaseRowDecoderData hBaseRowDecoderData;\n  private HBaseService hBaseService;\n\n  public HBaseRowDecoder( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta,\n                          Trans trans, NamedClusterServiceLocator namedClusterServiceLocator ) {\n    super( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n  }\n\n  /**\n   * The mapping information to use in order to decode HBase column values\n   */\n  protected Mapping mTableMapping;\n\n  /**\n   * Information from the mapping\n   */\n  protected HBaseValueMetaInterface[] mOutputColumns;\n\n  /**\n   * Index of incoming key value\n   */\n  protected int mKeyInIndex = -1;\n\n  /**\n   * Index of incoming HBase row (Result object)\n   */\n  protected int mResultInIndex = -1;\n\n  /**\n   * Used when decoding columns to <key, family, column, value, time stamp> tuples\n   */\n  protected HBaseRowToKettleTuple mTupleHandler;\n\n  /**\n   * Bytes util\n   */\n  protected ByteConversionUtil mBytesUtil;\n\n  @Override\n  public boolean processRow( StepMetaInterface smi, StepDataInterface sdi ) 
throws KettleException {\n\n    Object[] inputRow = getRow();\n\n    if ( inputRow == null ) {\n      setOutputDone();\n      return false;\n    }\n\n    if ( first ) {\n      first = false;\n      hBaseRowDecoderMeta = (HBaseRowDecoderMeta) smi;\n      hBaseRowDecoderData = (HBaseRowDecoderData) sdi;\n\n      try {\n        hBaseService =\n          namedClusterServiceLocator.getService( hBaseRowDecoderMeta.getNamedCluster(), HBaseService.class );\n        mBytesUtil = hBaseService.getByteConversionUtil();\n\n        // no configuration needed here because we don't need access to the\n        // actual database, just a few utility routines from HBaseShim for\n        // decoding row objects handed to us by the table input format\n      } catch ( Exception ex ) {\n        throw new KettleException( ex.getMessage(), ex );\n      }\n\n      mTableMapping = hBaseRowDecoderMeta.getMapping();\n\n      if ( mTableMapping == null || StringUtils.isEmpty( mTableMapping.getKeyName() ) ) {\n        throw new KettleException(\n          BaseMessages.getString( hBaseRowDecoderMetaClass, \"HBaseRowDecoder.Error.NoMappingInfo\" ) );\n      }\n\n      if ( mTableMapping.isTupleMapping() ) {\n        mTupleHandler = new HBaseRowToKettleTuple( mBytesUtil );\n      }\n\n      mOutputColumns = new HBaseValueMetaInterface[ mTableMapping.getMappedColumns().keySet().size() ];\n      int k = 0;\n      for ( String alias : mTableMapping.getMappedColumns().keySet() ) {\n        mOutputColumns[ k++ ] = mTableMapping.getMappedColumns().get( alias );\n      }\n\n      hBaseRowDecoderData.setOutputRowMeta( getInputRowMeta().clone() );\n      hBaseRowDecoderMeta.getFields( getTransMeta().getBowl(), hBaseRowDecoderData.getOutputRowMeta(), getStepname(),\n        null, null, this );\n\n      // check types first\n      RowMetaInterface inputMeta = getInputRowMeta();\n      String inKey = environmentSubstitute( hBaseRowDecoderMeta.getIncomingKeyField() );\n\n      mKeyInIndex = 
inputMeta.indexOfValue( inKey );\n      if ( mKeyInIndex == -1 ) {\n        throw new KettleException(\n          BaseMessages.getString( hBaseRowDecoderMetaClass, \"HBaseRowDecoder.Error.UnableToFindHBaseKey\", inKey ) );\n      }\n\n      try {\n        inputRow[ mKeyInIndex ] = mBytesUtil.convertToImmutableBytesWritable( inputRow[ mKeyInIndex ] );\n      } catch ( InvocationTargetException | IllegalAccessException | NoSuchMethodException e ) {\n        throw new KettleException( BaseMessages.getString( hBaseRowDecoderMetaClass,\n          HBASE_ROW_DECODER_ERROR_NOT_IMMUTABLE_BYTES_WRITABLE,\n          hBaseRowDecoderMeta.getIncomingKeyField() ) );\n      }\n\n      if ( !mBytesUtil.isImmutableBytesWritable( inputRow[ mKeyInIndex ] ) ) {\n        throw new KettleException( BaseMessages.getString( hBaseRowDecoderMetaClass,\n          HBASE_ROW_DECODER_ERROR_NOT_IMMUTABLE_BYTES_WRITABLE,\n          hBaseRowDecoderMeta.getIncomingKeyField() ) );\n      }\n\n      String inResult = environmentSubstitute( hBaseRowDecoderMeta.getIncomingResultField() );\n      mResultInIndex = inputMeta.indexOfValue( inResult );\n      if ( mResultInIndex == -1 ) {\n        throw new KettleException(\n          BaseMessages.getString( hBaseRowDecoderMetaClass, \"HBaseRowDecoder.Error.UnableToFindHBaseRow\", inResult ) );\n      }\n    }\n\n    try {\n      inputRow[ mKeyInIndex ] = mBytesUtil.convertToImmutableBytesWritable( inputRow[ mKeyInIndex ] );\n    } catch ( InvocationTargetException | IllegalAccessException | NoSuchMethodException e ) {\n      throw new KettleException( BaseMessages.getString( hBaseRowDecoderMetaClass,\n        HBASE_ROW_DECODER_ERROR_NOT_IMMUTABLE_BYTES_WRITABLE,\n        hBaseRowDecoderMeta.getIncomingKeyField() ) );\n    }\n\n    Object hRow = inputRow[ mResultInIndex ];\n    if ( inputRow[ mKeyInIndex ] != null && hRow != null ) {\n      if ( mTableMapping.isTupleMapping() ) {\n        List<Object[]> hrowToKettleRow =\n          
mTupleHandler.hbaseRowToKettleTupleMode( hBaseService.getHBaseValueMetaInterfaceFactory(), hRow,\n            mTableMapping, mTableMapping\n              .getMappedColumns(), hBaseRowDecoderData.getOutputRowMeta() );\n\n        for ( Object[] tuple : hrowToKettleRow ) {\n          putRow( hBaseRowDecoderData.getOutputRowMeta(), tuple );\n        }\n      } else {\n        Object[] outputRowData = RowDataUtil.allocateRowData( mOutputColumns.length + 1 ); // + 1 for key\n\n        byte[] rowKey = null;\n        try {\n          rowKey = (byte[]) hRow.getClass().getMethod( \"getRow\" ).invoke( hRow );\n        } catch ( Exception ex ) {\n          throw new KettleException(\n            BaseMessages.getString( hBaseRowDecoderMetaClass, \"HBaseRowDecoder.Error.UnableToGetRowKey\" ), ex );\n        }\n        Object decodedKey = mTableMapping.decodeKeyValue( rowKey );\n        outputRowData[ 0 ] = decodedKey;\n\n        for ( int i = 0; i < mOutputColumns.length; i++ ) {\n          HBaseValueMetaInterface current = mOutputColumns[ i ];\n\n          byte[] colFamilyName = current.getColumnFamily().getBytes();\n          byte[] qualifier = current.getColumnName().getBytes();\n\n          byte[] kv = null;\n          try {\n            kv = (byte[]) hRow.getClass().getMethod( \"getValue\", byte[].class, byte[].class )\n              .invoke( hRow, colFamilyName, qualifier );\n          } catch ( Exception ex ) {\n            throw new KettleException(\n              BaseMessages.getString( hBaseRowDecoderMetaClass, \"HBaseRowDecoder.Error.UnableToGetColumnValue\" ),\n              ex );\n          }\n\n          Object decodedVal = current.decodeColumnValue( ( kv == null ) ? 
null : kv );\n          outputRowData[ i + 1 ] = decodedVal;\n        }\n\n        // output the row\n        putRow( hBaseRowDecoderData.getOutputRowMeta(), outputRowData );\n      }\n    }\n\n    return true;\n  }\n\n  @Override\n  public boolean init( StepMetaInterface smi, StepDataInterface sdi ) {\n    if ( super.init( smi, sdi ) ) {\n      HBaseRowDecoderMeta meta = (HBaseRowDecoderMeta) smi;\n      try {\n        meta.applyInjection();\n        return true;\n      } catch ( KettleException e ) {\n        logError( \"Error while injecting properties\", e );\n      }\n    }\n    return false;\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/rowdecoder/HBaseRowDecoderData.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.rowdecoder;\n\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.trans.step.BaseStepData;\nimport org.pentaho.di.trans.step.StepDataInterface;\n\n/**\n * Data class for the HBase row decoder step\n * \n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n */\npublic class HBaseRowDecoderData extends BaseStepData implements StepDataInterface {\n\n  /** The output data format */\n  protected RowMetaInterface m_outputRowMeta;\n\n  /**\n   * Get the output row format\n   * \n   * @return the output row format\n   */\n  public RowMetaInterface getOutputRowMeta() {\n    return m_outputRowMeta;\n  }\n\n  /**\n   * Set the output row format\n   * \n   * @param rmi\n   *          the output row format\n   */\n  public void setOutputRowMeta( RowMetaInterface rmi ) {\n    m_outputRowMeta = rmi;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/rowdecoder/HBaseRowDecoderDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.rowdecoder;\n\nimport org.eclipse.jface.dialogs.MessageDialog;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.CCombo;\nimport org.eclipse.swt.custom.CTabFolder;\nimport org.eclipse.swt.custom.CTabItem;\nimport org.eclipse.swt.events.ModifyEvent;\nimport org.eclipse.swt.events.ModifyListener;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.events.ShellAdapter;\nimport org.eclipse.swt.events.ShellEvent;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Event;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.MappingEditor;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.Props;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport 
org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDialogInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.trans.step.BaseStepDialog;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\n/**\n * UI dialog for the HBase row decoder step\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n */\n@PluginDialog( id = \"HBaseRowDecoder\", image = \"HBRD.svg\", pluginType = PluginDialog.PluginType.JOBENTRY,\n  documentationUrl = \"Products/HBase_Row_Decoder\" )\npublic class HBaseRowDecoderDialog extends BaseStepDialog implements StepDialogInterface {\n\n  private static final Class<?> PKG = HBaseRowDecoderMeta.class;\n\n  /** various UI bits and pieces for the dialog */\n  private Label m_stepnameLabel;\n  private Text m_stepnameText;\n\n  // The tabs of the dialog\n  private CTabFolder m_wTabFolder;\n  private CTabItem m_wConfigTab;\n  private CTabItem m_editorTab;\n\n  private CCombo m_incomingKeyCombo;\n  private CCombo m_incomingResultCombo;\n\n  // mapping editor composite\n  private MappingEditor m_mappingEditor;\n\n  private final HBaseRowDecoderMeta m_currentMeta;\n  private final HBaseRowDecoderMeta m_originalMeta;\n  private final NamedClusterService namedClusterService;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n\n  public HBaseRowDecoderDialog( Shell parent, Object in, TransMeta tr, String name,\n                                NamedClusterService namedClusterService,\n                                RuntimeTestActionService runtimeTestActionService, 
RuntimeTester runtimeTester,\n                                NamedClusterServiceLocator namedClusterServiceLocator ) {\n\n    super( parent, (BaseStepMeta) in, tr, name );\n    this.namedClusterService = namedClusterService;\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.runtimeTester = runtimeTester;\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n\n    m_currentMeta = (HBaseRowDecoderMeta) in;\n    m_originalMeta = (HBaseRowDecoderMeta) m_currentMeta.clone();\n\n  }\n\n  public String open() {\n\n    Shell parent = getParent();\n    Display display = parent.getDisplay();\n\n    shell = new Shell( parent, SWT.DIALOG_TRIM | SWT.RESIZE | SWT.MIN | SWT.MAX );\n\n    props.setLook( shell );\n    setShellImage( shell, m_currentMeta );\n\n    // used to listen to a text field (m_wStepname)\n    ModifyListener lsMod = new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_currentMeta.setChanged();\n      }\n    };\n\n    changed = m_currentMeta.hasChanged();\n\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = Const.FORM_MARGIN;\n    formLayout.marginHeight = Const.FORM_MARGIN;\n\n    shell.setLayout( formLayout );\n    shell.setText( BaseMessages.getString( PKG, \"HBaseRowDecoderDialog.Shell.Title\" ) );\n\n    int middle = props.getMiddlePct();\n    int margin = Const.MARGIN;\n\n    // Stepname line\n    m_stepnameLabel = new Label( shell, SWT.RIGHT );\n    m_stepnameLabel.setText( BaseMessages.getString( PKG, \"HBaseRowDecoderDialog.StepName.Label\" ) );\n    props.setLook( m_stepnameLabel );\n\n    FormData fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( middle, -margin );\n    fd.top = new FormAttachment( 0, margin );\n    m_stepnameLabel.setLayoutData( fd );\n    m_stepnameText = new Text( shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    m_stepnameText.setText( stepname );\n    props.setLook( m_stepnameText );\n   
 m_stepnameText.addModifyListener( lsMod );\n\n    // format the text field\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( 0, margin );\n    fd.right = new FormAttachment( 100, 0 );\n    m_stepnameText.setLayoutData( fd );\n\n    m_wTabFolder = new CTabFolder( shell, SWT.BORDER );\n    props.setLook( m_wTabFolder, Props.WIDGET_STYLE_TAB );\n    m_wTabFolder.setSimple( false );\n\n    // Start of the config tab\n    m_wConfigTab = new CTabItem( m_wTabFolder, SWT.NONE );\n    m_wConfigTab.setText( BaseMessages.getString( PKG, \"HBaseRowDecoderDialog.ConfigTab.TabTitle\" ) );\n\n    Composite wConfigComp = new Composite( m_wTabFolder, SWT.NONE );\n    props.setLook( wConfigComp );\n\n    FormLayout configLayout = new FormLayout();\n    configLayout.marginWidth = 3;\n    configLayout.marginHeight = 3;\n    wConfigComp.setLayout( configLayout );\n\n    // incoming key field line\n    Label inKeyLab = new Label( wConfigComp, SWT.RIGHT );\n    inKeyLab.setText( BaseMessages.getString( PKG, \"HBaseRowDecoderDialog.KeyField.Label\" ) );\n    props.setLook( inKeyLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    inKeyLab.setLayoutData( fd );\n\n    m_incomingKeyCombo = new CCombo( wConfigComp, SWT.BORDER );\n    props.setLook( m_incomingKeyCombo );\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( 0, margin );\n    fd.right = new FormAttachment( 100, 0 );\n    m_incomingKeyCombo.setLayoutData( fd );\n\n    m_incomingKeyCombo.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_currentMeta.setChanged();\n        m_incomingKeyCombo.setToolTipText( transMeta.environmentSubstitute( m_incomingKeyCombo.getText() ) );\n      }\n    } );\n\n    // incoming result line\n    Label 
inResultLab = new Label( wConfigComp, SWT.RIGHT );\n    inResultLab.setText( BaseMessages.getString( PKG, \"HBaseRowDecoderDialog.ResultField.Label\" ) );\n    props.setLook( inResultLab );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_incomingKeyCombo, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    inResultLab.setLayoutData( fd );\n\n    m_incomingResultCombo = new CCombo( wConfigComp, SWT.BORDER );\n    props.setLook( m_incomingResultCombo );\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_incomingKeyCombo, margin );\n    fd.right = new FormAttachment( 100, 0 );\n    m_incomingResultCombo.setLayoutData( fd );\n\n    m_incomingResultCombo.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_currentMeta.setChanged();\n        m_incomingResultCombo.setToolTipText( transMeta.environmentSubstitute( m_incomingResultCombo.getText() ) );\n      }\n    } );\n\n    populateFieldsCombo();\n\n    wConfigComp.layout();\n    m_wConfigTab.setControl( wConfigComp );\n\n    // --- mapping editor tab\n    m_editorTab = new CTabItem( m_wTabFolder, SWT.NONE );\n    m_editorTab.setText( BaseMessages.getString( PKG, \"HBaseRowDecoderDialog.MappingEditorTab.TabTitle\" ) );\n\n    m_mappingEditor =\n        new MappingEditor( shell, m_wTabFolder, null, null, SWT.FULL_SELECTION | SWT.MULTI, false, props, transMeta,\n          namedClusterService, runtimeTestActionService, runtimeTester, namedClusterServiceLocator );\n\n    fd = new FormData();\n    fd.top = new FormAttachment( 0, 0 );\n    fd.left = new FormAttachment( 0, 0 );\n    fd.bottom = new FormAttachment( 100, -margin * 2 );\n    fd.right = new FormAttachment( 100, 0 );\n    m_mappingEditor.setLayoutData( fd );\n\n    m_mappingEditor.layout();\n    m_editorTab.setControl( m_mappingEditor );\n\n    fd = new FormData();\n    fd.left = new 
FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_stepnameText, margin );\n    fd.right = new FormAttachment( 100, 0 );\n    fd.bottom = new FormAttachment( 100, -50 );\n    m_wTabFolder.setLayoutData( fd );\n\n    // Buttons inherited from BaseStepDialog\n    wOK = new Button( shell, SWT.PUSH );\n    wOK.setText( BaseMessages.getString( PKG, \"System.Button.OK\" ) );\n\n    wCancel = new Button( shell, SWT.PUSH );\n    wCancel.setText( BaseMessages.getString( PKG, \"System.Button.Cancel\" ) );\n\n    setButtonPositions( new Button[] { wOK, wCancel }, margin, m_wTabFolder );\n\n    // Add listeners\n    lsCancel = new Listener() {\n      public void handleEvent( Event e ) {\n        cancel();\n      }\n    };\n\n    lsOK = new Listener() {\n      public void handleEvent( Event e ) {\n        ok();\n      }\n    };\n\n    wCancel.addListener( SWT.Selection, lsCancel );\n    wOK.addListener( SWT.Selection, lsOK );\n\n    lsDef = new SelectionAdapter() {\n      @Override\n      public void widgetDefaultSelected( SelectionEvent e ) {\n        ok();\n      }\n    };\n\n    m_stepnameText.addSelectionListener( lsDef );\n\n    // Detect X or ALT-F4 or something that kills this window...\n    shell.addShellListener( new ShellAdapter() {\n      @Override\n      public void shellClosed( ShellEvent e ) {\n        cancel();\n      }\n    } );\n\n    m_wTabFolder.setSelection( 0 );\n    setSize();\n\n    getData();\n\n    shell.open();\n    while ( !shell.isDisposed() ) {\n      if ( !display.readAndDispatch() ) {\n        display.sleep();\n      }\n    }\n\n    return stepname;\n  }\n\n  protected void cancel() {\n    stepname = null;\n    m_currentMeta.setChanged( changed );\n\n    dispose();\n  }\n\n  protected void ok() {\n    if ( Const.isEmpty( m_stepnameText.getText() ) ) {\n      return;\n    }\n\n    stepname = m_stepnameText.getText();\n\n    m_currentMeta.setIncomingKeyField( m_incomingKeyCombo.getText() );\n    m_currentMeta.setIncomingResultField( 
m_incomingResultCombo.getText() );\n    List<String> problems = new ArrayList<String>();\n    Mapping mapping = m_mappingEditor.getMapping( false, problems, false );\n    if ( problems.size() > 0 ) {\n      StringBuffer p = new StringBuffer();\n      for ( String s : problems ) {\n        p.append( s ).append( \"\\n\" );\n      }\n      MessageDialog md =\n          new MessageDialog( shell,\n              BaseMessages.getString( PKG, \"HBaseRowDecoderDialog.Error.IssuesWithMapping.Title\" ), null, BaseMessages\n                  .getString( PKG, \"HBaseRowDecoderDialog.Error.IssuesWithMapping\" )\n                  + \":\\n\\n\" + p.toString(), MessageDialog.WARNING, new String[] {\n                      BaseMessages.getString( PKG, \"HBaseRowDecoderDialog.Error.IssuesWithMapping.ButtonOK\" ),\n                      BaseMessages.getString( PKG, \"HBaseRowDecoderDialog.Error.IssuesWithMapping.ButtonCancel\" ) }, 0 );\n      MessageDialog.setDefaultImage( GUIResource.getInstance().getImageSpoon() );\n      int idx = md.open() & 0xFF;\n      if ( idx == 1 || idx == 255 /* 255 = escape pressed */ ) {\n        return; // Cancel\n      }\n    }\n    if ( mapping != null ) {\n      m_currentMeta.setMapping( mapping );\n    }\n    NamedCluster selectedNamedCluster = m_mappingEditor.getSelectedNamedCluster();\n    if ( selectedNamedCluster != null ) {\n      m_currentMeta.setNamedCluster( selectedNamedCluster );\n    }\n\n    if ( !m_originalMeta.equals( m_currentMeta ) ) {\n      m_currentMeta.setChanged();\n      changed = m_currentMeta.hasChanged();\n    }\n\n    dispose();\n  }\n\n  protected void getData() {\n    if ( !Const.isEmpty( m_currentMeta.getIncomingKeyField() ) ) {\n      m_incomingKeyCombo.setText( m_currentMeta.getIncomingKeyField() );\n    }\n\n    if ( !Const.isEmpty( m_currentMeta.getIncomingResultField() ) ) {\n      m_incomingResultCombo.setText( m_currentMeta.getIncomingResultField() );\n    }\n\n    m_mappingEditor.setSelectedNamedCluster( 
m_currentMeta.getNamedCluster().getName() );\n    if ( m_currentMeta.getMapping() != null ) {\n      m_mappingEditor.setMapping( m_currentMeta.getMapping() );\n    }\n  }\n\n  private void populateFieldsCombo() {\n    StepMeta stepMeta = transMeta.findStep( stepname );\n    String currentKey = m_incomingKeyCombo.getText();\n    String currentResult = m_incomingResultCombo.getText();\n    int keyIndex = -1;\n    int valueIndex = -1;\n\n    if ( stepMeta != null ) {\n      try {\n        RowMetaInterface rowMeta = transMeta.getPrevStepFields( stepMeta );\n        if ( rowMeta != null && rowMeta.size() > 0 ) {\n          m_incomingKeyCombo.removeAll();\n          m_incomingResultCombo.removeAll();\n          for ( int i = 0; i < rowMeta.size(); i++ ) {\n            ValueMetaInterface vm = rowMeta.getValueMeta( i );\n            String fieldName = vm.getName();\n            if ( fieldName.equalsIgnoreCase( \"key\" ) ) {\n              keyIndex = i;\n            } else if ( fieldName.equalsIgnoreCase( \"value\" ) ) {\n              valueIndex = i;\n            }\n\n            m_incomingKeyCombo.add( fieldName );\n            m_incomingResultCombo.add( fieldName );\n          }\n\n          if ( !Const.isEmpty( currentKey ) ) {\n            m_incomingKeyCombo.setText( currentKey );\n          } else if ( keyIndex >= 0 ) {\n            // auto set key field\n            m_incomingKeyCombo.select( keyIndex );\n          }\n          if ( !Const.isEmpty( currentResult ) ) {\n            m_incomingResultCombo.setText( currentResult );\n          } else if ( valueIndex >= 0 ) {\n            // auto set value (Result) field\n            m_incomingResultCombo.select( valueIndex );\n          }\n        }\n      } catch ( KettleException ex ) {\n        if ( log.isError() ) {\n          log.logError( \"Error populating fields\", ex );\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/rowdecoder/HBaseRowDecoderMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.rowdecoder;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.apache.commons.lang.StringUtils;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.big.data.kettle.plugins.hbase.MappingDefinition;\nimport org.pentaho.big.data.kettle.plugins.hbase.NamedClusterLoadSaveUtil;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.MappingUtils;\nimport org.pentaho.di.core.CheckResult;\nimport org.pentaho.di.core.CheckResultInterface;\nimport org.pentaho.di.core.annotations.Step;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleStepException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.injection.InjectionDeep;\nimport org.pentaho.di.core.injection.InjectionSupported;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaBase;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.metastore.MetaStoreConst;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport 
org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepDialogInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.runtime.test.action.impl.RuntimeTestActionServiceImpl;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\nimport org.w3c.dom.Node;\n\nimport java.util.Collection;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Set;\n\nimport static org.pentaho.di.core.CheckResult.TYPE_RESULT_ERROR;\nimport static org.pentaho.di.core.CheckResult.TYPE_RESULT_OK;\nimport static org.pentaho.di.core.CheckResult.TYPE_RESULT_WARNING;\n\n/**\n * Meta class for the HBase row decoder.\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n *\n */\n@Step( id = \"HBaseRowDecoder\", image = \"HBRD.svg\", name = \"HBaseRowDecoder.Name\",\n    description = \"HBaseRowDecoder.Description\",\n    categoryDescription = \"i18n:org.pentaho.di.trans.step:BaseStep.Category.BigData\",\n    documentationUrl = \"pdi-transformation-steps-reference-overview/hbase-row-decoder-pdi\",\n    i18nPackageName = 
\"org.pentaho.di.trans.steps.hbaserowdecoder\" )\n@InjectionSupported( localizationPrefix = \"HBaseRowDecoder.Injection.\", groups = { \"MAPPING\" } )\npublic class HBaseRowDecoderMeta extends BaseStepMeta implements StepMetaInterface {\n\n  public static final String INCOMING_KEY_FIELD = \"incoming_key_field\";\n  public static final String INCOMING_RESULT_FIELD = \"incoming_result_field\";\n  protected NamedCluster namedCluster;\n\n  /** The incoming field that contains the HBase row key */\n  @Injection( name = \"KEY_FIELD\" )\n  protected String mIncomingKeyField = \"\";\n\n  /** The incoming field that contains the HBase row Result object */\n  @Injection( name = \"HBASE_RESULT_FIELD\" )\n  protected String mIncomingResultField = \"\";\n\n  /** The mapping to use */\n  protected Mapping mMapping;\n\n  @InjectionDeep\n  protected MappingDefinition mappingDefinition;\n\n  private MetastoreLocator metaStoreService;\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n  private final NamedClusterService namedClusterService;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n\n  private final NamedClusterLoadSaveUtil namedClusterLoadSaveUtil;\n\n  public HBaseRowDecoderMeta() {\n    this( BigDataServicesHelper.getNamedClusterServiceLocator(), NamedClusterManager.getInstance(),\n      RuntimeTestActionServiceImpl.getInstance(), RuntimeTesterImpl.getInstance() );\n  }\n\n  public HBaseRowDecoderMeta( NamedClusterServiceLocator namedClusterServiceLocator,\n                              NamedClusterService namedClusterService,\n                              RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester ) {\n    this( namedClusterServiceLocator, namedClusterService, runtimeTestActionService, runtimeTester, null );\n  }\n\n  public synchronized MetastoreLocator getMetastoreLocators() {\n    if ( this.metaStoreService == null ) {\n      try {\n        
Collection<MetastoreLocator> metastoreLocators = PluginServiceLoader.loadServices( MetastoreLocator.class );\n        this.metaStoreService = metastoreLocators.stream().findFirst().get();\n      } catch ( Exception e ) {\n        logError( \"Error getting MetastoreLocator\", e );\n      }\n    }\n    return this.metaStoreService;\n  }\n\n  @VisibleForTesting\n  HBaseRowDecoderMeta( NamedClusterServiceLocator namedClusterServiceLocator,\n                              NamedClusterService namedClusterService,\n                              RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester, MetastoreLocator metaStore ) {\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n    this.namedClusterService = namedClusterService;\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.runtimeTester = runtimeTester;\n    this.namedClusterLoadSaveUtil = new NamedClusterLoadSaveUtil();\n    this.metaStoreService = metaStore;\n  }\n\n\n\n  /**\n   * @param namedCluster the namedCluster to set\n   */\n  public void setNamedCluster( NamedCluster namedCluster ) {\n    this.namedCluster = namedCluster;\n  }\n\n  /**\n   * @return the namedCluster\n   */\n  public NamedCluster getNamedCluster() {\n    return namedCluster;\n  }\n\n\n  /**\n   * Set the incoming field that holds the HBase row key\n   *\n   * @param inKey\n   *          the name of the field that holds the key\n   */\n  public void setIncomingKeyField( String inKey ) {\n    mIncomingKeyField = inKey;\n  }\n\n  /**\n   * Get the incoming field that holds the HBase row key\n   *\n   * @return the name of the field that holds the key\n   */\n  public String getIncomingKeyField() {\n    return mIncomingKeyField;\n  }\n\n  /**\n   * Set the incoming field that holds the HBase row Result object\n   *\n   * @param inResult\n   *          the name of the field that holds the HBase row Result object\n   */\n  public void setIncomingResultField( String inResult ) {\n    
mIncomingResultField = inResult;\n  }\n\n  /**\n   * Get the incoming field that holds the HBase row Result object\n   *\n   * @return the name of the field that holds the HBase row Result object\n   */\n  public String getIncomingResultField() {\n    return mIncomingResultField;\n  }\n\n  /**\n   * Set the mapping to use for decoding the row\n   *\n   * @param m\n   *          the mapping to use\n   */\n  public void setMapping( Mapping m ) {\n    mMapping = m;\n  }\n\n  /**\n   * Get the mapping to use for decoding the row\n   *\n   * @return the mapping to use\n   */\n  public Mapping getMapping() {\n    return mMapping;\n  }\n\n  public MappingDefinition getMappingDefinition() {\n    return mappingDefinition;\n  }\n\n  public void setMappingDefinition( MappingDefinition mappingDefinition ) {\n    this.mappingDefinition = mappingDefinition;\n  }\n\n  public void setDefault() {\n    mIncomingKeyField = \"\";\n    mIncomingResultField = \"\";\n    namedCluster = namedClusterService.getClusterTemplate();\n  }\n\n  @Override\n  public void getFields( Bowl bowl, RowMetaInterface rowMeta, String origin, RowMetaInterface[] info, StepMeta nextStep,\n      VariableSpace space ) throws KettleStepException {\n\n    rowMeta.clear(); // start afresh - eats the input\n\n    if ( mMapping != null ) {\n      int kettleType;\n\n      if ( mMapping.getKeyType() == Mapping.KeyType.DATE\n        || mMapping.getKeyType() == Mapping.KeyType.UNSIGNED_DATE ) {\n        kettleType = ValueMetaInterface.TYPE_DATE;\n      } else if ( mMapping.getKeyType() == Mapping.KeyType.STRING ) {\n        kettleType = ValueMetaInterface.TYPE_STRING;\n      } else if ( mMapping.getKeyType() == Mapping.KeyType.BINARY ) {\n        kettleType = ValueMetaInterface.TYPE_BINARY;\n      } else {\n        kettleType = ValueMetaInterface.TYPE_INTEGER;\n      }\n\n      ValueMetaInterface keyMeta = new ValueMetaBase( mMapping.getKeyName(), kettleType );\n\n      keyMeta.setOrigin( origin );\n      
rowMeta.addValueMeta( keyMeta );\n\n      // Add the rest of the fields in the mapping\n      Map<String, HBaseValueMetaInterface> mappedColumnsByAlias = mMapping.getMappedColumns();\n      Set<String> aliasSet = mappedColumnsByAlias.keySet();\n      for ( String alias : aliasSet ) {\n        HBaseValueMetaInterface columnMeta = mappedColumnsByAlias.get( alias );\n        columnMeta.setOrigin( origin );\n        rowMeta.addValueMeta( columnMeta );\n      }\n    }\n  }\n\n  public void check( List<CheckResultInterface> remarks, TransMeta transMeta, StepMeta stepMeta, RowMetaInterface prev,\n      String[] input, String[] output, RowMetaInterface info ) {\n\n    CheckResult cr;\n\n    if ( ( prev == null ) || ( prev.size() == 0 ) ) {\n      cr = new CheckResult(\n          TYPE_RESULT_WARNING, \"Not receiving any fields from previous steps!\", stepMeta );\n      remarks.add( cr );\n    } else {\n      cr =\n          new CheckResult( TYPE_RESULT_OK, \"Step is connected to previous one, receiving \" + prev.size()\n              + \" fields\", stepMeta );\n      remarks.add( cr );\n    }\n\n    // See if we have input streams leading to this step!\n    if ( input.length > 0 ) {\n      cr = new CheckResult( TYPE_RESULT_OK, \"Step is receiving info from other steps.\", stepMeta );\n      remarks.add( cr );\n    } else {\n      cr = new CheckResult( TYPE_RESULT_ERROR, \"No input received from other steps!\", stepMeta );\n      remarks.add( cr );\n    }\n  }\n\n  void applyInjection() throws KettleException {\n    if ( namedCluster == null ) {\n      throw new KettleException( \"Named cluster was not initialized!\" );\n    }\n    try {\n      HBaseService hBaseService = namedClusterServiceLocator.getService( this.namedCluster, HBaseService.class );\n      Mapping tempMapping = null;\n      if ( mappingDefinition != null ) {\n        tempMapping = MappingUtils.getMapping( mappingDefinition, hBaseService );\n        mMapping = tempMapping;\n      }\n    } catch ( 
ClusterInitializationException e ) {\n      throw new KettleException( e );\n    }\n  }\n\n  public StepInterface getStep( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr,\n      TransMeta transMeta, Trans trans ) {\n\n    return new HBaseRowDecoder( stepMeta, stepDataInterface, copyNr, transMeta, trans, namedClusterServiceLocator );\n  }\n\n  public StepDataInterface getStepData() {\n    return new HBaseRowDecoderData();\n  }\n\n  @Override\n  public String getXML() {\n    try {\n      applyInjection();\n    } catch ( KettleException e ) {\n      log.logError( \"Error occurred while injecting metadata. Transformation meta could be incorrect!\", e );\n    }\n    StringBuilder retval = new StringBuilder();\n\n    if ( StringUtils.isNotEmpty( mIncomingKeyField ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( INCOMING_KEY_FIELD, mIncomingKeyField ) );\n    }\n    if ( StringUtils.isNotEmpty( mIncomingResultField ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( INCOMING_RESULT_FIELD, mIncomingResultField ) );\n    }\n\n    namedClusterLoadSaveUtil.getXml( retval, namedClusterService, namedCluster,\n      MetaStoreConst.getDefaultMetastore(), log );\n    if ( mMapping != null ) {\n      retval.append( mMapping.getXML() );\n    }\n\n    return retval.toString();\n  }\n\n  public void loadXML( Node stepnode, List<DatabaseMeta> databases, IMetaStore metaStore ) throws KettleXMLException {\n    if ( metaStore == null ) {\n      metaStore = getMetastoreLocators().getMetastore();\n    }\n\n    mIncomingKeyField = XMLHandler.getTagValue( stepnode, INCOMING_KEY_FIELD );\n    mIncomingResultField = XMLHandler.getTagValue( stepnode, INCOMING_RESULT_FIELD );\n    this.namedCluster =\n        namedClusterLoadSaveUtil.loadClusterConfig( namedClusterService, null, repository, metaStore, stepnode, log );\n    try {\n      HBaseService hbaseService = namedClusterServiceLocator.getService( this.namedCluster, 
HBaseService.class );\n      mMapping = ( hbaseService == null ? null : hbaseService.getMappingFactory().createMapping() );\n    } catch ( ClusterInitializationException e ) {\n      throw new KettleXMLException( e );\n    }\n    if ( mMapping != null ) {\n      mMapping.loadXML( stepnode );\n    }\n  }\n\n  public void readRep( Repository rep, IMetaStore metaStore, ObjectId idStep, List<DatabaseMeta> databases )\n    throws KettleException {\n\n    mIncomingKeyField = rep.getStepAttributeString( idStep, 0, INCOMING_KEY_FIELD );\n    mIncomingResultField = rep.getStepAttributeString( idStep, 0, INCOMING_RESULT_FIELD );\n    this.namedCluster =\n        namedClusterLoadSaveUtil.loadClusterConfig( namedClusterService, idStep, rep, metaStore, null, log );\n    try {\n      mMapping =\n          namedClusterServiceLocator.getService( this.namedCluster, HBaseService.class ).getMappingFactory()\n              .createMapping();\n    } catch ( ClusterInitializationException e ) {\n      throw new KettleXMLException( e );\n    }\n    mMapping.readRep( rep, idStep );\n  }\n\n  public void saveRep( Repository rep, IMetaStore metaStore, ObjectId idTransformation, ObjectId idStep ) throws KettleException {\n\n    if ( StringUtils.isNotEmpty( mIncomingKeyField ) ) {\n      rep.saveStepAttribute( idTransformation, idStep, 0, INCOMING_KEY_FIELD, mIncomingKeyField );\n    }\n    if ( StringUtils.isNotEmpty( mIncomingResultField ) ) {\n      rep.saveStepAttribute( idTransformation, idStep, 0, INCOMING_RESULT_FIELD, mIncomingResultField );\n    }\n\n    namedClusterLoadSaveUtil.saveRep( rep, metaStore, idTransformation, idStep, namedClusterService, namedCluster, log );\n\n    if ( mMapping != null ) {\n      mMapping.saveRep( rep, idTransformation, idStep );\n    }\n  }\n\n  /**\n   * Get the UI for this step.\n   *\n   * @param shell\n   *          a <code>Shell</code> value\n   * @param meta\n   *          a <code>StepMetaInterface</code> value\n   * @param transMeta\n   *          
a <code>TransMeta</code> value\n   * @param name\n   *          a <code>String</code> value\n   * @return a <code>StepDialogInterface</code> value\n   */\n  public StepDialogInterface getDialog( Shell shell, StepMetaInterface meta, TransMeta transMeta, String name ) {\n    return new HBaseRowDecoderDialog( shell, meta, transMeta, name, namedClusterService, runtimeTestActionService,\n      runtimeTester, namedClusterServiceLocator );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/resources/OSGI-INF/blueprint/blueprint.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\n           xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n           xmlns:pen=\"http://www.pentaho.com/xml/schemas/pentaho-blueprint\"\n           xsi:schemaLocation=\"\n            http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\n            http://www.pentaho.com/xml/schemas/pentaho-blueprint http://www.pentaho.com/xml/schemas/pentaho-blueprint.xsd\">\n  <bean id=\"hBaseInputMeta\" class=\"org.pentaho.big.data.kettle.plugins.hbase.input.HBaseInputMeta\" scope=\"prototype\">\n    <argument ref=\"namedClusterService\"/>\n    <argument ref=\"namedClusterServiceLocator\"/>\n    <argument ref=\"runtimeTestActionService\"/>\n    <argument ref=\"runtimeTester\"/>\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.StepPluginType\"/>\n  </bean>\n  <bean id=\"hBaseOutputMeta\" class=\"org.pentaho.big.data.kettle.plugins.hbase.output.HBaseOutputMeta\" scope=\"prototype\">\n    <argument ref=\"namedClusterService\"/>\n    <argument ref=\"namedClusterServiceLocator\"/>\n    <argument ref=\"runtimeTestActionService\"/>\n    <argument ref=\"runtimeTester\"/>\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.StepPluginType\"/>\n  </bean>\n  <bean id=\"hBaseRowDecoderMeta\" class=\"org.pentaho.big.data.kettle.plugins.hbase.rowdecoder.HBaseRowDecoderMeta\" scope=\"prototype\">\n    <argument ref=\"namedClusterServiceLocator\"/>\n    <argument ref=\"namedClusterService\"/>\n    <argument ref=\"runtimeTestActionService\"/>\n    <argument ref=\"runtimeTester\"/>\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.StepPluginType\"/>\n  </bean>\n\n  <reference id=\"namedClusterService\" interface=\"org.pentaho.hadoop.shim.api.cluster.NamedClusterService\"/>\n  <reference id=\"namedClusterServiceLocator\" interface=\"org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator\"/>\n  
<reference id=\"runtimeTester\" interface=\"org.pentaho.runtime.test.RuntimeTester\"/>\n  <reference id=\"runtimeTestActionService\" interface=\"org.pentaho.runtime.test.action.RuntimeTestActionService\"/>\n</blueprint>"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/resources/org/pentaho/big/data/kettle/plugins/hbase/input/messages/messages_en_US.properties",
    "content": "HBaseInput.Name=HBase input\nHBaseInput.Description=Reads data from a HBase table according to a mapping \n\nHBaseInputDialog.Shell.Title=HBase input\nHBaseInputDialog.StepName.Label=Step name\nHBaseInputDialog.ConfigTab.TabTitle=Configure query\nHBaseInputDialog.FilterTab.TabTitle=Filter result set\nHBaseInputDialog.MappingEditorTab.TabTitle=Create/Edit mappings\nHBaseInputDialog.Zookeeper.Label=Zookeeper host(s)\nHBaseInputDialog.ZookeeperPort.Label=Zookeeper port\nHBaseInputDialog.Zookeeper.TipText=Comma separated list of hosts in the zookeeper quorum\nHBaseInputDialog.CoreConfig.Label=URL to hbase-site.xml\nHBaseInputDialog.CoreConfig.TipText=URL to hbase-site.xml (leave blank if in classpath)\nHBaseInputDialog.DefaultConfig.Label=URL to hbase-default.xml\nHBaseInputDialog.DefaultConfig.TipText=URL to hbase-default.xml (leave blank if in classpath)\nHBaseInputDialog.TableName.Label=HBase table name\nHBaseInputDialog.TableName.TipText=The name of the HBase table to read from\nHBaseInputDialog.TableName.Button=Get mapped table names\nHBaseInputDialog.FileType.XML=XML config file\n\nHBaseInputDialog.MappingName.Label=Mapping name\nHBaseInputDialog.MappingName.TipText=Mapping to use for the above HBase table\nHBaseInputDialog.MappingName.Button=Get mappings\nHBaseInputDialog.MappingName.Button=Get mappings for the specified table\nHBaseInputDialog.StoreMapping.Label=Store mapping info in step meta data\nHBaseInputDialog.StoreMapping.TipText=Store the mapping in the step''s meta data, rather than load it from HBase at runtime\nHBaseInputDialog.NamedCluster.Label=Hadoop Cluster\nHBaseInputDialog.NamedCluster.TipText=Hadoop cluster to use for setting ZooKeeper host(s) and port\nHBaseInputDialog.NamedClusterMissingValues.Msg=The selected Hadoop cluster is missing required values.\nHBaseInputDialog.NamedClusterNotSelected.Msg=You must select a Hadoop cluster to continue.\n\nHBaseInputDialog.KeyStart.Label=Start key value (inclusive) for table 
scan\nHBaseInputDialog.KeyStart.TipText=Start key value (inclusive) for table scan. Leave this and stop key value blank for a full scan.\n\nHBaseInputDialog.KeyStop.Label=Stop key value (exclusive) for table scan\nHBaseInputDialog.KeyStop.TipText=Stop key value (exclusive) for table scan. Leave this and start key value blank for a full scan.\n\nHBaseInputDialog.ScannerCache.Label=Scanner row cache size\nHBaseInputDialog.ScannerCache.TipText=Number of rows for caching. More rows = faster scans, but higher memory consumption (leave empty for default).\n\nHBaseInputDialog.IncludeKey.Label=Include the key as a column\n\nHBaseInputDialog.ErrorMessage.UnableToConnect=Problem connecting to HBase\nHBaseInputDialog.ErrorMessage.UnableToGetMapping=Unable to retrieve mapping information\nHBaseInputDialog.ErrorMessage.FailedClosingHBaseConnection=Unable to close HBase connection\n\nHBaseInputDialog.Fields.FIELD_ALIAS=Alias\nHBaseInputDialog.Fields.FIELD_KEY=Key\nHBaseInputDialog.Fields.FIELD_FAMILY=Column family\nHBaseInputDialog.Fields.FIELD_NAME=Column name\nHBaseInputDialog.Fields.FIELD_TYPE=Type\nHBaseInputDialog.Fields.FIELD_FORMAT=Format\nHBaseInputDialog.Fields.FIELD_INDEXED=Indexed values\nHBaseInputDialog.Fields.FIELD_LENGTH=Length\nHBaseInputDialog.Fields.FIELD_PRECISION=Precision\nHBaseInputDialog.Fields.FIELD_CURRENCY=Currency\nHBaseInputDialog.Fields.FIELD_DECIMAL=Decimal\nHBaseInputDialog.Fields.FIELD_GROUP=Group\nHBaseInputDialog.Fields.FIELD_TRIM_TYPE=Trim type\n\nHBaseInputDialog.Filters.RADIO_ALL=Match all\nHBaseInputDialog.Filters.RADIO_ANY=Match any\nHBaseInputDialog.Filters.FIELD_ALIAS=Alias\nHBaseInputDialog.Filters.FIELD_FAMILY=Column family\nHBaseInputDialog.Filters.FIELD_NAME=Column name\nHBaseInputDialog.Filters.FIELD_TYPE=Type\nHBaseInputDialog.Filters.FIELD_OPERATOR=Operator\nHBaseInputDialog.Filters.FIELD_COMPARISON=Comparison value\nHBaseInputDialog.Filters.FIELD_FORMAT=Format\nHBaseInputDialog.Filters.FIELD_SIGNED=Signed 
comparison\n\nMappingDialog.TableName.Label=HBase table name\nMappingDialog.TableName.GetTableNames=Get table names\nMappingDialog.MappingName.Label=Mapping name\nMappingDialog.SaveMapping=Save mapping\nMappingDialog.SaveMapping.TipText=Persist the mapping in HBase\nMappingDialog.DeleteMapping=Delete mapping\nMappingDialog.GetIncomingFields=Get incoming fields\nMappingDialog.KeyValueTemplate=Create a tuple template\nMappingDialog.KeyValueTemplate.TipText=Creates a template mapping for outputting <key, family, column, value, time stamp> rows\nMappingDialog.NamedCluster.Label=Hadoop Cluster\n\nMappingDialog.Error.Title.MissingTableMappingName=Missing table/mapping name\nMappingDialog.Error.Message.MissingTableMappingName=You must specify a table and mapping name\nMappingDialog.Error.Title.NoFieldsDefined=No fields defined\nMappingDialog.Error.Message.NoFieldsDefined=No fields have been defined for this mapping\nMappingDialog.Error.Title.NoAliasForKey=Missing alias for key\nMappingDialog.Error.Message.NoAliasForKey=The key must have an alias defined\nMappingDialog.Error.Title.NoTypeForKey=Missing type for key\nMappingDialog.Error.Message.NoTypeForKey=Missing type for key\nMappingDialog.Error.Title.MoreThanOneKey=More than one key\nMappingDialog.Error.Message.MoreThanOneKey=More than one key is defined in the list of fields\nMappingDialog.Error.Title.DuplicateColumn=Duplicate column\nMappingDialog.Error.Message1.DuplicateColumn=Column \"\nMappingDialog.Error.Message2.DuplicateColumn=\" has already been defined for this mapping\nMappingDialog.Error.Title.NoKeyDefined=No key defined\nMappingDialog.Error.Message.NoKeyDefined=No key has been defined for this mapping\nMappingDialog.Error.Title.UnableToConnect=Unable to connect to HBase\nMappingDialog.Error.Message.UnableToConnect=Unable to connect to HBase\n\nMappingDialog.Error.Title.IssuesPreventingSaving=Error(s) in field definitions\nMappingDialog.Error.Message.IssuesPreventingSaving=The following issues prevent this 
mapping from being created\nMappingDialog.Error.Message.FamilyIssue=These fields do not have a column family defined\nMappingDialog.Error.Message.ColumnIssue=These fields do not have a column name defined\nMappingDialog.Error.Message.TypeIssue=These fields do not have type information specified\n\nMappingDialog.Error.Title.ErrorCreatingTable=Problem creating table\nMappingDialog.Error.Message.ErrorCreatingTable=A problem occurred while trying to create table\n\nHBaseInputDialog.Error.IssuesWithMapping.Title=Problems with mapping\nHBaseInputDialog.Error.IssuesWithMapping=There are some problems with the mapping that need rectification\nHBaseInputDialog.Error.IssuesWithMapping.ButtonOK=OK and close\nHBaseInputDialog.Error.IssuesWithMapping.ButtonCancel=Cancel and rectify\n\nMappingDialog.Info.Title.MappingExists=Mapping exists\nMappingDialog.Info.Message1.MappingExists=A mapping called \"\nMappingDialog.Info.Message2.MappingExists=\" already exists for table \"\nMappingDialog.Info.Message3.MappingExists=\". 
Overwrite?\nMappingDialog.Info.Title.MappingSaved=Mapping saved\nMappingDialog.Info.Message1.MappingSaved=Mapping \"\nMappingDialog.Info.Message2.MappingSaved=\" on table \"\nMappingDialog.Info.Message3.MappingSaved=\" saved successfully.\n\nMappingDialog.Info.Title.ConfirmDelete=OK to delete?\nMappingDialog.Info.Message.ConfirmDelete=Delete mapping \"{0}\" on table \"{1}\"?\n\nMappingDialog.Info.Title.MappingDeleted=Mapping deleted\nMappingDialog.Info.Message.MappingDeleted=Mapping \"{0}\" on table \"{1}\" deleted successfully.\n\nMappingDialog.Error.Title.ErrorSaving=Error during save\nMappingDialog.Error.Message.ErrorSaving=An error occurred while trying to save the mapping\nMappingDialog.Error.Title.ErrorLoadingMapping=Error during load\nMappingDialog.Error.Message.ErrorLoadingMapping=An error occurred while trying to load the mapping definition\nMappingDialog.Error.Message.CantConnectNoConnectionDetailsProvided=Can't connect to HBase as no connection details have been provided\n\nMappingDialog.GetFieldsChoice.Title=Question\nMappingDialog.GetFieldsChoice.Message=Data has already been entered - {0} fields were found.\\nHow do you want to add the {1} incoming fields?\n\nMappingDialog.Error.Title.DeleteMapping=An error occurred\nMappingDialog.Error.Message.DeleteMapping=Mapping \"{0}\" for table \"{1}\" does not seem to exist!\nMappingDialog.Error.Message.DeleteMappingIO=An IO error occurred while trying to delete\\nmapping \"{0}\" on table \"{1}\"\\n{2}\n\nMappingDialog.AddNew=Add new\nMappingOutputDialog.Add=Add all\nMappingOutputDialog.ClearAndAdd=Clear and add\nMappingOutputDialog.Cancel=Cancel\n\nHBaseInput.TableName.Missing=HBase table name is required.\nHBaseInput.ClosingConnection=Closing connection...\nHBaseInput.Message.SettingScannerCaching=Set scanner caching to {0} rows.\nHBaseInput.Error.NoMappingName=Reading mapping from HBase, but no mapping name has been supplied!\nHBaseInput.Error.UnableToObtainConnection=Unable to obtain a connection to 
HBase\nHBaseInput.Error.UnableToCreateAMappingAdminConnection=Unable to create a MappingAdmin connection\nHBaseInput.Error.SourceTableDoesNotExist=Source table \"{0}\" does not exist!\nHBaseInput.Error.SourceTableIsNotAvailable=Source table \"{0}\" is not available!\nHBaseInput.Error.AvailabilityReadinessProblem=A problem occurred when trying to check availability/readiness of source table \"{0}\"\nHBaseInput.Error.UnableToFindUserSelectedColumn=Unable to find user-selected column \"{0}\" in the mapping \"{1}\"\nHBaseInput.Error.UnableToParseLowerBoundKeyValue=Unable to parse lower bound key value \"{0}\"\nHBaseInput.Error.UnableToParseUpperBoundKeyValue=Unable to parse upper bound key value \"{0}\"\nHBaseInput.Error.ColumnFilterIsNotInTheMapping=Column filter \"{0}\" is not in the mapping!\nHBaseInput.Error.FieldTypeMismatch=Type ({0}) of column filter for \"{1}\" does not match type specified for this field in the mapping ({2})\nHBaseInput.Error.ProblemClosingConnection=Problem closing connection to HBase table \"{0}\"\nHBaseInput.Error.ProblemClosingConnection1=A problem occurred while closing connection to HBase: {0}\nHBaseInput.Error.UnableToLookupQualifier=Unable to look up qualifier/column \"{0}\"\nHBaseInput.Error.ColumnNotDefinedInOutput=HBase column \"{0}\" doesn't seem to be defined in the output\nHBaseInput.Error.UnableToParseZookeeperPort=Unable to parse zookeeper port - using default\nHBaseInput.Error.UnableToRetrieveMapping=Unable to retrieve mapping \"{0}\" on table \"{1}\"\nHBaseInput.Error.UnableToSetSourceTableForScan=Unable to set source table for scan\nHBaseInput.Error.UnableToConfigureSourceTableScan=Unable to configure a new source table scan\nHBaseInput.Error.UnableToAddColumnToScan=Unable to add a column definition to the current scan\nHBaseInput.Error.UnableToAddColumnFilterToScan=Unable to add column filter to the current scan\nHBaseInput.Error.UnableToExecuteSourceTableScan=Unable to execute source table 
scan\nHBaseInput.Error.FiltersNotApplicableWithTupleMapping=WARNING: server-side column value filtering is not applicable when using a tuple mapping - ignoring filters...\nHBaseInput.Error.ServiceStatus=Cannot communicate with HBaseService\\nSaving the transformation may lose data.\\nPlease correct the communication issue before working with this transformation\\n\n\nDialog.Error=Error\n\nHBaseInput.Injection.HBASE_SITE_XML_URL=The address of the hbase-site.xml file.\nHBaseInput.Injection.HBASE_DEFAULT_XML_URL=The address of the hbase-default.xml file.\nHBaseInput.Injection.SOURCE_TABLE_NAME=The name of the HBase table to read from.\nHBaseInput.Injection.SOURCE_MAPPING_NAME=The name of the HBase table map to use.\nHBaseInput.Injection.START_KEY_VALUE=The start key value for range scans.\nHBaseInput.Injection.STOP_KEY_VALUE=The stop key value for range scans.\nHBaseInput.Injection.SCANNER_ROW_CACHE_SIZE=The number of rows that are cached each time an HBase fetch request is made.\nHBaseInput.Injection.MATCH_ANY_FILTER=Set this flag to output rows if they match any filter or all filters.\n\nHBaseInput.Injection.OUTPUT_FIELDS=Fields\nHBaseInput.Injection.OUTPUT_FIELD_KEY=This option indicates if the column is the key for the table.\nHBaseInput.Injection.OUTPUT_FIELD_ALIAS=The name that the field will be given in the output stream.\nHBaseInput.Injection.OUTPUT_FIELD_COLUMN_NAME=The name of the column in the HBase table.\nHBaseInput.Injection.OUTPUT_FIELD_FAMILY=The family of the column in the HBase table.\nHBaseInput.Injection.OUTPUT_FIELD_TYPE=This option will let you specify the type of field (string, date, number).\nHBaseInput.Injection.OUTPUT_FIELD_FORMAT=The numeric and date format to apply to the output field.\n\nHBaseInput.Injection.MAPPING=Mappings\nHBaseInput.Injection.TABLE_NAME=The name of the HBase table.\nHBaseInput.Injection.MAPPING_NAME=The name of the map to use for the HBase table.\n\nHBaseInput.Injection.MAPPING_ALIAS=The name to assign to the HBase 
table key.\nHBaseInput.Injection.MAPPING_KEY=This option indicates if the column is the key for the table.\nHBaseInput.Injection.MAPPING_COLUMN_FAMILY=The family of the column in the HBase table.\nHBaseInput.Injection.MAPPING_COLUMN_NAME=The name of the column in the HBase table.\nHBaseInput.Injection.MAPPING_TYPE=The data type of the column.\nHBaseInput.Injection.MAPPING_INDEXED_VALUES=Optional comma-separated set of legal values if the column is a String type.\n\nHBaseInput.Injection.FILTER=Filters\nHBaseInput.Injection.ALIAS=The name of the field.\nHBaseInput.Injection.FIELD_TYPE=This option will let you specify the type of field (string, date, number).\nHBaseInput.Injection.COMPARISON_TYPE=The type of comparison to perform.\nHBaseInput.Injection.SIGNED_COMPARISON=This option controls if HBase''s native comparisons should be used.\nHBaseInput.Injection.COMPARISON_VALUE=The value used for filtering data.\nHBaseInput.Injection.FORMAT=The numeric and date format to apply to the field.\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/resources/org/pentaho/big/data/kettle/plugins/hbase/mapping/messages/messages_en_US.properties",
    "content": "MappingDialog.Error.Message.NamedClusterNotSelected.Msg=You must select a named cluster to continue\nMappingDialog.Error.Title.NamedClusterNotSelected=No named cluster selected\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/resources/org/pentaho/big/data/kettle/plugins/hbase/output/messages/messages_en_US.properties",
    "content": "HBaseOutput.Name=HBase output\nHBaseOutput.Description=Writes data to an HBase table according to a mapping\n\nHBaseOutputDialog.Shell.Title=HBase output\nHBaseOutputDialog.StepName.Label=Step name\nHBaseOutputDialog.ConfigTab.TabTitle=Configure connection\nHBaseOutputDialog.MappingEditorTab.TabTitle=Create/Edit mappings\nHBaseOutputDialog.Zookeeper.Label=Zookeeper host(s)\nHBaseOutputDialog.ZookeeperPort.Label=Zookeeper port\nHBaseOutputDialog.Zookeeper.TipText=Comma separated list of hosts in the zookeeper quorum\nHBaseOutputDialog.CoreConfig.Label=URL to hbase-site.xml\nHBaseOutputDialog.CoreConfig.TipText=URL to hbase-site.xml (leave blank if in classpath)\nHBaseOutputDialog.DefaultConfig.Label=URL to hbase-default.xml\nHBaseOutputDialog.DefaultConfig.TipText=URL to hbase-default.xml (leave blank if in classpath)\nHBaseOutputDialog.TableName.Label=HBase table name\nHBaseOutputDialog.TableName.TipText=The name of the HBase table to write to\nHBaseOutputDialog.TableName.Button=Get table names\nHBaseOutputDialog.FileType.XML=XML config file\nHBaseOutputDialog.NamedCluster.Label=Hadoop cluster\nHBaseOutputDialog.NamedCluster.TipText=Hadoop cluster to use for setting ZooKeeper host(s) and port\nHBaseOutputDialog.NamedClusterMissingValues.Msg=The selected Hadoop cluster is missing required values.\nHBaseOutputDialog.NamedClusterNotSelected.Msg=You must select a Hadoop cluster to continue.\n\nHBaseOutputDialog.MappingName.Label=Mapping name\nHBaseOutputDialog.MappingName.TipText=Mapping to use for the above HBase table\nHBaseOutputDialog.MappingName.Button=Get mappings\n\nHBaseOutputDialog.DeleteRowKey.Label=Delete rows by mapping key\nHBaseOutputDialog.DeleteRowKey.TipText=Deletes data from the HBase table based on the row key defined in the mapping\n\nHBaseOutputDialog.StoreMapping.Label=Store mapping info in step meta data\nHBaseOutputDialog.StoreMapping.TipText=Store the mapping in the step''s meta data, rather than load it from HBase at 
runtime\n\nHBaseOutputDialog.DisableWAL.Label=Disable write to WAL\nHBaseOutputDialog.DisableWAL.TipText=Speeds up loading at the expense of error-recovery\n\nHBaseOutputDialog.WriteBufferSize.Label=Size of write buffer (bytes)\nHBaseOutputDialog.WriteBufferSize.TipText=Larger buffer = faster/greater memory consumption. Leave blank for no buffering.\n\n\nHBaseOutputDialog.ErrorMessage.UnableToConnect=Problem connecting to HBase\nHBaseOutputDialog.ErrorMessage.UnableToGetMapping=Unable to retrieve mapping information\nHBaseOutputDialog.ErrorMessage.FailedClosingHBaseConnection=Unable to close HBase connection\n\nHBaseOutputDialog.Error.IssuesWithMapping.Title=Problems with mapping\nHBaseOutputDialog.Error.IssuesWithMapping=There are some problems with the mapping that need rectification\nHBaseOutputDialog.Error.IssuesWithMapping.ButtonOK=OK and close\nHBaseOutputDialog.Error.IssuesWithMapping.ButtonCancel=Cancel and rectify\n\n\nMappingDialog.TableName.Label=HBase table name\nMappingDialog.TableName.GetTableNames=Get table names\nMappingDialog.MappingName.Label=Mapping name\nMappingDialog.SaveMapping=Save mapping\nMappingDialog.SaveMapping.TipText=Persist the mapping in HBase\nMappingDialog.DeleteMapping=Delete mapping\nMappingDialog.GetIncomingFields=Get incoming fields\nMappingDialog.KeyValueTemplate=Create a tuple template\nMappingDialog.KeyValueTemplate.TipText=Creates a template mapping for outputting <key, family, column, value, time stamp> rows\n\nMappingDialog.Error.Title.MissingTableMappingName=Missing table/mapping name\nMappingDialog.Error.Message.MissingTableMappingName=You must specify a table and mapping name\nMappingDialog.Error.Title.NoFieldsDefined=No fields defined\nMappingDialog.Error.Message.NoFieldsDefined=No fields have been defined for this mapping\nMappingDialog.Error.Title.MoreThanOneKey=More than one key\nMappingDialog.Error.Message.MoreThanOneKey=More than one key is defined in the list of 
fields\nMappingDialog.Error.Title.DuplicateColumn=Duplicate column\nMappingDialog.Error.Message1.DuplicateColumn=Column \"\nMappingDialog.Error.Message2.DuplicateColumn=\" has already been defined for this mapping\nMappingDialog.Error.Title.NoKeyDefined=No key defined\nMappingDialog.Error.Message.NoKeyDefined=No key has been defined for this mapping\nMappingDialog.Error.Message.CantConnectNoConnectionDetailsProvided=Can't connect to HBase as no connection details have been provided\n\nMappingDialog.Error.Title.IssuesPreventingSaving=Error(s) in field definitions\nMappingDialog.Error.Message.IssuesPreventingSaving=The following issues prevent this mapping from being created\nMappingDialog.Error.Message.FamilyIssue=These fields do not have a column family defined\nMappingDialog.Error.Message.ColumnIssue=These fields do not have a column name defined\nMappingDialog.Error.Message.TypeIssue=These fields do not have type information specified\nMappingDialog.Error.Title.UnableToConnect=Unable to connect to HBase\nMappingDialog.Error.Message.UnableToConnect=Unable to connect to HBase\n\nMappingDialog.Info.Title.MappingExists=Mapping exists\nMappingDialog.Info.Message1.MappingExists=A mapping called \"\nMappingDialog.Info.Message2.MappingExists=\" already exists for table \"\nMappingDialog.Info.Message3.MappingExists=\". 
Overwrite?\nMappingDialog.Info.Title.MappingSaved=Mapping saved\nMappingDialog.Info.Message1.MappingSaved=Mapping \"\nMappingDialog.Info.Message2.MappingSaved=\" on table \"\nMappingDialog.Info.Message3.MappingSaved=\" saved successfully.\n\nMappingDialog.Info.Title.ConfirmDelete=OK to delete?\nMappingDialog.Info.Message.ConfirmDelete=Delete mapping \"{0}\" on table \"{1}\"?\n\nMappingDialog.Info.Title.MappingDeleted=Mapping deleted\nMappingDialog.Info.Message.MappingDeleted=Mapping \"{0}\" on table \"{1}\" deleted successfully.\n\nMappingDialog.Error.Title.ErrorSaving=Error during save\nMappingDialog.Error.Message.ErrorSaving=An error occurred while trying to save the mapping\nMappingDialog.Error.Title.ErrorLoadingMapping=Error during load\nMappingDialog.Error.Message.ErrorLoadingMapping=An error occurred while trying to load the mapping definition\n\nMappingDialog.GetFieldsChoice.Title=Question\nMappingDialog.GetFieldsChoice.Message=Data has already been entered - {0} fields were found.\\nHow do you want to add the {1} incoming fields?\n\nMappingDialog.Error.Title.DeleteMapping=An error occurred\nMappingDialog.Error.Message.DeleteMapping=Mapping \"{0}\" for table \"{1}\" does not seem to exist!\nMappingDialog.Error.Message.DeleteMappingIO=An IO error occurred while trying to delete\\nmapping \"{0}\" on table \"{1}\"\\n{2}\n\nMappingDialog.AddNew=Add new\nMappingOutputDialog.Add=Add all\nMappingOutputDialog.ClearAndAdd=Clear and add\nMappingOutputDialog.Cancel=Cancel\n\nHBaseOutput.ConnectingToHBase=Connecting to HBase...\nHBaseOutput.ConnectingToTargetTable=Connecting to target table...\nHBaseOutput.FlushingWriteBuffer=Flushing write buffer...\nHBaseOutput.ClosingConnectionToTable=Closing connection to target table\nHBaseOutput.RetrievingMappingDetails=Retrieving mapping details for target table\nHBaseOutput.SettingWriteBuffer=Setting the write buffer to {0} bytes\nHBaseOutput.DisablingWriteToWAL=Disabling write to 
WAL\nHBaseOutput.ClosingConnectionToTargetTable=Closing connection to target table\n\nHBaseOutput.Error.ProblemFlushingBufferedData=A problem occurred while flushing buffered data: {0}\nHBaseOutput.Error.ProblemWhenClosingConnection=A problem occurred when closing the connection to the target table: {0}\nHBaseOutput.Error.UnableToObtainConnection=Unable to obtain a connection to HBase: {0}\nHBaseOutput.Error.NoTargetTableSpecified=No target table specified!\nHBaseOutput.Error.TargetTableDoesNotExist=Target table \"{0}\" does not exist!\nHBaseOutput.Error.TargetTableIsNotAvailable=Target table \"{0}\" is not available!\nHBaseOutput.Error.ProblemWhenCheckingAvailReadiness=A problem occurred when trying to check availability/readiness of target table \"{0}\": {1}\nHBaseOutput.Error.ProblemGettingMappingInfo=Problem getting mapping information: {0}\nHBaseOutput.Error.CantFindIncomingField=Can't find incoming field \"{0}\" defined in the mapping \"{1}\"\nHBaseOutput.Error.TableKeyNotPresentInIncomingFields=The table key \"{0}\" defined in mapping \"{1}\" does not seem to be present in the incoming fields\nHBaseOutput.Error.ProblemConnectingToTargetTable=Problem connecting to target table: {0}\nHBaseOutput.Error.IncomingRowHasNullKeyValue=Incoming row has null key value!\nHBaseOutput.Error.ProblemInsertingRowIntoHBase=Problem inserting row into HBase: {0}\nHBaseOutput.Error.UnableToParseZookeeperPort=Unable to parse zookeeper port - using default\nHBaseOutput.Error.UnableToSetTargetTable=Unable to set a new target table to write to\nHBaseOutput.Error.UnableToAddColumnToTargetTablePut=Unable to add a column to the current target table put operation\nHBaseOutput.Error.ServiceStatus=Cannot communicate with HBaseService\\nSaving the transformation may lose data.\\nPlease correct the communication issue before working with this transformation\\n\n\nHBaseOutput.Error.ErrorCreatingDelete=Error creating the HBase delete!\nHBaseOutput.Error.MissingFieldData=The incoming row tuple 
has a null value in the \"{0}\" field!\nHBaseOutput.Error.ErrorCreatingPut=Error creating the HBase put for tuple row <KEY, Family, Column, Value>\nHBaseOutput.Error.NoKeyColumn=No key field was found in the incoming stream\nHBaseOutput.Error.NoFamilyColumn=No family field was found in the incoming stream\nHBaseOutput.Error.NoColumnColumn=No column name field was found in the incoming stream\nHBaseOutput.Error.NoValueColumn=No value field was found in the incoming stream\n\n\nDialog.Error=Error\n\nHBaseOutput.Injection.HBASE_SITE_XML_URL=The address of the hbase-site.xml file.\nHBaseOutput.Injection.HBASE_DEFAULT_XML_URL=The address of the hbase-default.xml file.\nHBaseOutput.Injection.TARGET_TABLE_NAME=The name of the HBase table to write to.\nHBaseOutput.Injection.TARGET_MAPPING_NAME=The name of the HBase table map to use.\nHBaseOutput.Injection.DISABLE_WRITE_TO_WAL=This option will disable writing to the Write Ahead Log (WAL).\nHBaseOutput.Injection.WRITE_BUFFER_SIZE=Specify the size of the write buffer used to transfer data to HBase.\n\n\nHBaseOutput.Injection.MAPPING=Mappings\nHBaseOutput.Injection.TABLE_NAME=The name of the HBase table.\nHBaseOutput.Injection.MAPPING_NAME=The name of the map to use for the HBase table.\nHBaseOutput.Injection.DELETE_ROW_KEY=Delete the row key (specified in the mapping) from the HBase table.\n\nHBaseOutput.Injection.MAPPING_ALIAS=The name to assign to the HBase table key.\nHBaseOutput.Injection.MAPPING_KEY=This option indicates if the column is the key for the table.\nHBaseOutput.Injection.MAPPING_COLUMN_FAMILY=The family of the column in the HBase table.\nHBaseOutput.Injection.MAPPING_COLUMN_NAME=The name of the column in the HBase table.\nHBaseOutput.Injection.MAPPING_TYPE=The data type of the column.\nHBaseOutput.Injection.MAPPING_INDEXED_VALUES=Optional comma-separated set of legal values if the column is a String type.\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/main/resources/org/pentaho/big/data/kettle/plugins/hbase/rowdecoder/messages/messages_en_US.properties",
    "content": "HBaseRowDecoder.Name=HBase row decoder\nHBaseRowDecoder.Description=Decodes an incoming key and HBase result object according to a mapping \n\nHBaseRowDecoderDialog.Shell.Title=HBase row decoder\nHBaseRowDecoderDialog.StepName.Label=Step name\nHBaseRowDecoderDialog.ConfigTab.TabTitle=Configure fields\nHBaseRowDecoderDialog.MappingEditorTab.TabTitle=Create/Edit mappings\n\nHBaseRowDecoderDialog.KeyField.Label=Key field\nHBaseRowDecoderDialog.ResultField.Label=HBase result field\n\nHBaseRowDecoderDialog.Error.IssuesWithMapping.Title=Problems with mapping\nHBaseRowDecoderDialog.Error.IssuesWithMapping=There are some problems with the mapping that need rectification\nHBaseRowDecoderDialog.Error.IssuesWithMapping.ButtonOK=OK and close\nHBaseRowDecoderDialog.Error.IssuesWithMapping.ButtonCancel=Cancel and rectify\n\nHBaseRowDecoder.Error.NoMappingInfo=No mapping information defined!\nHBaseRowDecoder.Error.UnableToFindHBaseKey=Unable to find HBase key field {0} in the incoming stream!\nHBaseRowDecoder.Error.NotImmutableBytesWritable=HBase key {0} is not ImmutableBytesWritable\nHBaseRowDecoder.Error.UnableToFindHBaseRow=Unable to find HBase result/row field {0} in the incoming stream!\nHBaseRowDecoder.Error.NotResult=HBase row {0} is not a Result object!\nHBaseRowDecoder.Error.UnableToGetRowKey=Unable to get row key from row object\nHBaseRowDecoder.Error.UnableToGetColumnValue=Unable to get current column value from row object\n\nHBaseRowDecoder.Injection.KEY_FIELD=The name of the input key field.\nHBaseRowDecoder.Injection.HBASE_RESULT_FIELD=The name of the HBase result field.\n\nHBaseRowDecoder.Injection.MAPPING=Mappings\nHBaseRowDecoder.Injection.TABLE_NAME=The name of the HBase table.\nHBaseRowDecoder.Injection.MAPPING_NAME=The name of the map to use for the HBase table.\n\nHBaseRowDecoder.Injection.MAPPING_ALIAS=The name to assign to the HBase table key.\nHBaseRowDecoder.Injection.MAPPING_KEY=This option indicates if the column is the key for the 
table.\nHBaseRowDecoder.Injection.MAPPING_COLUMN_FAMILY=The family of the column in the HBase table.\nHBaseRowDecoder.Injection.MAPPING_COLUMN_NAME=The name of the column in the HBase table.\nHBaseRowDecoder.Injection.MAPPING_TYPE=The data type of the column.\nHBaseRowDecoder.Injection.MAPPING_INDEXED_VALUES=Optional comma-separated set of legal values if the column is a String type.\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/HbaseUtilTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hbase;\n\nimport org.junit.Test;\n\nimport static org.junit.Assert.*;\n\npublic class HbaseUtilTest {\n\n  @Test\n  public void testParseNamespaceFromTableName() {\n    assertEquals( \"namespace\", HbaseUtil.parseNamespaceFromTableName( \"namespace:qualifier\" ) );\n    assertEquals( \"namespace\", HbaseUtil.parseNamespaceFromTableName( \"namespace:qualifier\", \"other\" ) );\n    assertEquals( \"other\", HbaseUtil.parseNamespaceFromTableName( \"qualifier\", \"other\" ) );\n    assertNull( HbaseUtil.parseNamespaceFromTableName( \"qualifier\", null ) );\n  }\n\n  @Test\n  public void testParseQualifierFromTableName() {\n    assertEquals( \"qualifier\", HbaseUtil.parseQualifierFromTableName( \"namespace:qualifier\" ) );\n    assertEquals( \"qualifier\", HbaseUtil.parseQualifierFromTableName( \":qualifier\" ) );\n    assertEquals( \"qualifier\", HbaseUtil.parseQualifierFromTableName( \"qualifier\" ) );\n    assertEquals( \"\", HbaseUtil.parseQualifierFromTableName( \"namespace:\" ) );\n  }\n\n  @Test\n  public void testExpandTableName() {\n    assertEquals( \"default:\", HbaseUtil.expandTableName( null ) );\n    assertEquals( \"default:qualifier\", HbaseUtil.expandTableName( \"qualifier\" ) );\n    assertEquals( \"default:qualifier\", HbaseUtil.expandTableName( \":qualifier\" ) );\n    assertEquals( 
\"namespace:qualifier\", HbaseUtil.expandTableName( \"namespace\",\"qualifier\" ) );\n    assertEquals( \"namespace:qualifier\", HbaseUtil.expandTableName( \"namespace\",\"other:qualifier\" ) );\n  }\n\n  @Test(expected = IllegalArgumentException.class)\n  public void testIllegalArgsInExpandTableName() {\n      HbaseUtil.expandTableName( \"\",\"\" );\n  }\n\n  @Test\n  public void expandLegacyTableNameOnLoad() {\n    assertEquals(\"default:\", HbaseUtil.expandLegacyTableNameOnLoad( null ) );\n    assertEquals( \"default:weblogs\", HbaseUtil.expandLegacyTableNameOnLoad( \"weblogs\" ) );\n    assertEquals( \"ns:weblogs\", HbaseUtil.expandLegacyTableNameOnLoad( \"ns:weblogs\" ) );\n    assertEquals( \"ns:${two}\", HbaseUtil.expandLegacyTableNameOnLoad( \"ns:${two}\" ) );\n    assertEquals( \"default:${two}\", HbaseUtil.expandLegacyTableNameOnLoad( \":${two}\" ) );\n    assertEquals( \"${one}\", HbaseUtil.expandLegacyTableNameOnLoad( \"${one}\" ) );\n    assertEquals( \"%%one%%\", HbaseUtil.expandLegacyTableNameOnLoad( \"%%one%%\" ) );\n    assertEquals( \"${one}:${two}\", HbaseUtil.expandLegacyTableNameOnLoad( \"${one}:${two}\" ) );\n    assertEquals( \"default:\", HbaseUtil.expandLegacyTableNameOnLoad( \"\" ) );\n    assertEquals( \"${one}:two\", HbaseUtil.expandLegacyTableNameOnLoad( \"${one}:two\" ) );\n  }\n}"
  },
  {
    "path": "kettle-plugins/hbase/core/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/LogInjector.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hbase;\n\nimport org.mockito.Mockito;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.LoggingBuffer;\n\nimport java.lang.reflect.Field;\n\nimport static org.mockito.Mockito.mock;\n\npublic class LogInjector {\n\n  public static LoggingBuffer setMockForLoggingBuffer() throws NoSuchFieldException, IllegalAccessException {\n    Field storeReflectionField = KettleLogStore.class.getDeclaredField( \"store\" );\n    storeReflectionField.setAccessible( true );\n    KettleLogStore kettleLogStoreMock = mock( KettleLogStore.class );\n    storeReflectionField.set( null, kettleLogStoreMock );\n    Field appenderReflectionField = KettleLogStore.class.getDeclaredField( \"appender\" );\n    appenderReflectionField.setAccessible( true );\n    LoggingBuffer loggingBuffer = Mockito.spy( new LoggingBuffer( 3 ) );\n    appenderReflectionField.set( kettleLogStoreMock, loggingBuffer );\n    return loggingBuffer;\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/NamedClusterLoadSaveUtilTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hbase;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.metastore.api.IMetaStore;\n\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.assertFalse;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.doReturn;\nimport static org.mockito.Mockito.when;\nimport static org.mockito.Mockito.eq;\nimport static org.mockito.Mockito.anyInt;\nimport static org.mockito.Mockito.never;\n\nimport javax.xml.parsers.DocumentBuilder;\nimport javax.xml.parsers.DocumentBuilderFactory;\n\n/**\n * User: Dzmitry Stsiapanau Date: 02/12/2016 Time: 14:10\n */\n\npublic class NamedClusterLoadSaveUtilTest {\n  public static final String ZOOKEPER_HOST = \"someHost\";\n  public static final String ZOOKEEPER_PORT = \"2181\";\n  public static final String ZOOKEEPER_HOSTS_KEY = \"zookeeper_hosts\";\n  public static final String ZOOKEEPER_PORT_KEY = \"zookeeper_port\";\n  private static String xml1 =\n      \"<step><\" + ZOOKEEPER_HOSTS_KEY + \">\" + ZOOKEPER_HOST + \"</\" + ZOOKEEPER_HOSTS_KEY + \"><\" + ZOOKEEPER_PORT_KEY\n          + \">\" + ZOOKEEPER_PORT + \"</\" + ZOOKEEPER_PORT_KEY + \"></step>\";\n\n  public static 
final String SOME_CLUSTER_NAME = \"someClusterName\";\n  public static final String CLUSTER_NAME_KEY = \"cluster_name\";\n  private static String xml2 =\n      \"<step><\" + CLUSTER_NAME_KEY + \">\" + SOME_CLUSTER_NAME + \"</\" + CLUSTER_NAME_KEY + \"><\" + ZOOKEEPER_HOSTS_KEY\n          + \">\" + ZOOKEPER_HOST + \"</\" + ZOOKEEPER_HOSTS_KEY + \"><\" + ZOOKEEPER_PORT_KEY + \">\" + ZOOKEEPER_PORT + \"</\"\n          + ZOOKEEPER_PORT_KEY + \"></step>\";\n\n  // mocks\n  private LogChannelInterface log;\n  private NamedClusterService ncs;\n  private IMetaStore metaStore;\n  private Repository repository;\n  private ObjectId jobId;\n  private ObjectId stepId;\n  private ObjectId transId;\n  private NamedCluster namedCluster;\n\n  private NamedClusterLoadSaveUtil util;\n  private DocumentBuilder dBuilder;\n\n  @Before\n  public void setUp() throws Exception {\n    util = new NamedClusterLoadSaveUtil();\n    dBuilder = DocumentBuilderFactory.newInstance().newDocumentBuilder();\n\n    // mocks\n    log = mock( LogChannelInterface.class );\n    namedCluster = mock( NamedCluster.class );\n    metaStore = mock( IMetaStore.class );\n    jobId = mock( ObjectId.class );\n    stepId = mock( ObjectId.class );\n    transId = mock( ObjectId.class );\n    ncs = mock( NamedClusterService.class );\n    doReturn( true ).when( ncs ).contains( SOME_CLUSTER_NAME, metaStore );\n    when( ncs.getClusterTemplate() ).thenReturn( namedCluster );\n\n    repository = mock( Repository.class );\n    doReturn( ZOOKEPER_HOST ).when( repository ).getJobEntryAttributeString( jobId, ZOOKEEPER_HOSTS_KEY );\n    doReturn( ZOOKEEPER_PORT ).when( repository ).getJobEntryAttributeString( jobId, ZOOKEEPER_PORT_KEY );\n  }\n\n  @Test\n  public void testLoadClusterConfigXML_WithoutClusterName() throws Exception {\n    util.loadClusterConfig( ncs, jobId, repository, metaStore, XMLHandler.loadXMLString( dBuilder, xml1 ).getDocumentElement(), log );\n    verify( ncs ).getClusterTemplate();\n    verify( 
namedCluster ).setZooKeeperHost( ZOOKEPER_HOST );\n    verify( namedCluster ).setZooKeeperPort( ZOOKEEPER_PORT );\n  }\n\n  @Test\n  public void testLoadClusterConfigXML_WithClusterName() throws Exception {\n    util.loadClusterConfig( ncs, jobId, repository, metaStore, XMLHandler.loadXMLString( dBuilder, xml2 ).getDocumentElement(), log );\n    verify( ncs ).getNamedClusterByName( SOME_CLUSTER_NAME, metaStore );\n    verify( namedCluster ).setZooKeeperHost( ZOOKEPER_HOST );\n    verify( namedCluster ).setZooKeeperPort( ZOOKEEPER_PORT );\n  }\n\n  @Test\n  public void testLoadClusterConfigRepo_WithoutClusterName() throws Exception {\n    doReturn( null ).when( repository ).getJobEntryAttributeString( jobId, CLUSTER_NAME_KEY );\n\n    util.loadClusterConfig( ncs, jobId, repository, metaStore, null, mock( LogChannelInterface.class ) );\n    verify( ncs ).getClusterTemplate();\n    verify( namedCluster ).setZooKeeperHost( ZOOKEPER_HOST );\n    verify( namedCluster ).setZooKeeperPort( ZOOKEEPER_PORT );\n  }\n\n  @Test\n  public void testLoadClusterConfigRepo_WithClusterName() throws Exception {\n    doReturn( SOME_CLUSTER_NAME ).when( repository ).getJobEntryAttributeString( jobId, CLUSTER_NAME_KEY );\n\n    util.loadClusterConfig( ncs, jobId, repository, metaStore, null, mock( LogChannelInterface.class ) );\n    verify( ncs ).getNamedClusterByName( SOME_CLUSTER_NAME, metaStore );\n    verify( namedCluster ).setZooKeeperHost( ZOOKEPER_HOST );\n    verify( namedCluster ).setZooKeeperPort( ZOOKEEPER_PORT );\n  }\n\n  @Test\n  public void testGetXml_WithoutClusterName() throws Exception {\n    when( namedCluster.getZooKeeperHost() ).thenReturn( ZOOKEPER_HOST );\n    when( namedCluster.getZooKeeperPort() ).thenReturn( ZOOKEEPER_PORT );\n\n    StringBuilder retval = new StringBuilder();\n    util.getXml( retval, ncs, namedCluster, metaStore, log );\n    assertTrue( retval.toString().contains( ZOOKEEPER_PORT ) );\n    assertTrue( retval.toString().contains( ZOOKEPER_HOST ) 
);\n  }\n\n  @Test\n  public void testGetXml_WithClusterName() throws Exception {\n    when( namedCluster.getName() ).thenReturn( SOME_CLUSTER_NAME );\n    when( namedCluster.getZooKeeperHost() ).thenReturn( ZOOKEPER_HOST );\n    when( namedCluster.getZooKeeperPort() ).thenReturn( ZOOKEEPER_PORT );\n\n    StringBuilder retval = new StringBuilder();\n    util.getXml( retval, ncs, namedCluster, metaStore, log );\n    assertTrue( retval.toString().contains( ZOOKEEPER_PORT ) );\n    assertTrue( retval.toString().contains( ZOOKEPER_HOST ) );\n    assertTrue( retval.toString().contains( SOME_CLUSTER_NAME ) );\n  }\n\n  @Test\n  public void testGetXml_WithoutZooKeeper() throws Exception {\n    when( namedCluster.getName() ).thenReturn( SOME_CLUSTER_NAME );\n\n    StringBuilder retval = new StringBuilder();\n    util.getXml( retval, ncs, namedCluster, metaStore, log );\n    assertFalse( retval.toString().contains( ZOOKEEPER_PORT ) );\n    assertFalse( retval.toString().contains( ZOOKEPER_HOST ) );\n    assertTrue( retval.toString().contains( SOME_CLUSTER_NAME ) );\n  }\n\n  @Test\n  public void testGetXml_readFromMetastore() throws Exception {\n    when( namedCluster.getName() ).thenReturn( SOME_CLUSTER_NAME );\n    when( namedCluster.getZooKeeperHost() ).thenReturn( ZOOKEPER_HOST );\n    when( namedCluster.getZooKeeperPort() ).thenReturn( ZOOKEEPER_PORT );\n    when( ncs.read( SOME_CLUSTER_NAME, metaStore ) ).thenReturn( namedCluster );\n\n    StringBuilder retval = new StringBuilder();\n    util.getXml( retval, ncs, namedCluster, metaStore, log );\n\n    verify( ncs ).read( SOME_CLUSTER_NAME, metaStore );\n    assertTrue( retval.toString().contains( ZOOKEEPER_PORT ) );\n    assertTrue( retval.toString().contains( ZOOKEPER_HOST ) );\n    assertTrue( retval.toString().contains( SOME_CLUSTER_NAME ) );\n  }\n\n  @Test\n  public void testSaveRep_WithoutClusterName() throws Exception {\n    when( namedCluster.getZooKeeperHost() ).thenReturn( ZOOKEPER_HOST );\n    when( 
namedCluster.getZooKeeperPort() ).thenReturn( ZOOKEEPER_PORT );\n\n    util.saveRep( repository, metaStore, transId, stepId, ncs, namedCluster, log );\n    verify( repository ).saveStepAttribute( eq( transId ), eq( stepId ), anyInt(), eq( ZOOKEEPER_HOSTS_KEY ), eq( ZOOKEPER_HOST ) );\n    verify( repository ).saveStepAttribute( eq( transId ), eq( stepId ), anyInt(), eq( ZOOKEEPER_PORT_KEY ), eq( ZOOKEEPER_PORT ) );\n  }\n\n  @Test\n  public void testSaveRep_WithClusterName() throws Exception {\n    when( namedCluster.getName() ).thenReturn( SOME_CLUSTER_NAME );\n    when( namedCluster.getZooKeeperHost() ).thenReturn( ZOOKEPER_HOST );\n    when( namedCluster.getZooKeeperPort() ).thenReturn( ZOOKEEPER_PORT );\n    when( ncs.read( SOME_CLUSTER_NAME, metaStore ) ).thenReturn( namedCluster );\n\n    util.saveRep( repository, metaStore, transId, stepId, ncs, namedCluster, log );\n    verify( repository ).saveStepAttribute( eq( transId ), eq( stepId ), anyInt(), eq( ZOOKEEPER_HOSTS_KEY ), eq( ZOOKEPER_HOST ) );\n    verify( repository ).saveStepAttribute( eq( transId ), eq( stepId ), anyInt(), eq( ZOOKEEPER_PORT_KEY ), eq( ZOOKEEPER_PORT ) );\n    verify( repository ).saveStepAttribute( eq( transId ), eq( stepId ), eq( CLUSTER_NAME_KEY ), eq( SOME_CLUSTER_NAME ) );\n  }\n\n  @Test\n  public void testSaveRep_WithoutZooKeeper() throws Exception {\n    when( namedCluster.getName() ).thenReturn( SOME_CLUSTER_NAME );\n    when( ncs.read( SOME_CLUSTER_NAME, metaStore ) ).thenReturn( namedCluster );\n\n    util.saveRep( repository, metaStore, transId, stepId, ncs, namedCluster, log );\n    verify( repository ).saveStepAttribute( eq( transId ), eq( stepId ), eq( CLUSTER_NAME_KEY ), eq( SOME_CLUSTER_NAME ) );\n    verify( repository, never() ).saveStepAttribute( eq( transId ), eq( stepId ), anyInt(), eq( ZOOKEEPER_HOSTS_KEY ), eq( ZOOKEPER_HOST ) );\n    verify( repository, never() ).saveStepAttribute( eq( transId ), eq( stepId ), anyInt(), eq( ZOOKEEPER_PORT_KEY ), eq( 
ZOOKEEPER_PORT ) );\n  }\n\n  @Test\n  public void testSaveRep_readFromMetastore() throws Exception {\n    when( namedCluster.getName() ).thenReturn( SOME_CLUSTER_NAME );\n    when( namedCluster.getZooKeeperHost() ).thenReturn( ZOOKEPER_HOST );\n    when( namedCluster.getZooKeeperPort() ).thenReturn( ZOOKEEPER_PORT );\n    when( ncs.read( SOME_CLUSTER_NAME, metaStore ) ).thenReturn( namedCluster );\n\n    util.saveRep( repository, metaStore, transId, stepId, ncs, namedCluster, log );\n    verify( repository ).saveStepAttribute( eq( transId ), eq( stepId ), anyInt(), eq( ZOOKEEPER_HOSTS_KEY ), eq( ZOOKEPER_HOST ) );\n    verify( repository ).saveStepAttribute( eq( transId ), eq( stepId ), anyInt(), eq( ZOOKEEPER_PORT_KEY ), eq( ZOOKEEPER_PORT ) );\n    verify( repository ).saveStepAttribute( eq( transId ), eq( stepId ), eq( CLUSTER_NAME_KEY ), eq( SOME_CLUSTER_NAME ) );\n  }\n}\n\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/input/HBaseInputMetaInjectionTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.input;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.Mockito;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.di.core.injection.BaseMetadataInjectionTest;\nimport org.pentaho.di.core.osgi.api.MetastoreLocatorOsgi;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\npublic class HBaseInputMetaInjectionTest extends BaseMetadataInjectionTest<HBaseInputMeta> {\n\n  @Before\n  public void setup() {\n    NamedClusterService namedClusterService = Mockito.mock( NamedClusterService.class );\n    NamedClusterServiceLocator namedClusterServiceLocator = Mockito.mock( NamedClusterServiceLocator.class );\n    RuntimeTestActionService runtimeTestActionService = Mockito.mock( RuntimeTestActionService.class );\n    RuntimeTester runtimeTester = Mockito.mock( RuntimeTester.class );\n    MetastoreLocator metaStore = Mockito.mock( MetastoreLocator.class );\n\n    setup( new HBaseInputMeta( namedClusterService, namedClusterServiceLocator, runtimeTestActionService, runtimeTester, metaStore ) );\n  }\n\n  @Test\n  public void test() throws Exception {\n    check( \"HBASE_SITE_XML_URL\", new StringGetter() {\n      public String get() {\n        return meta.getCoreConfigURL();\n      }\n    } );\n    check( \"HBASE_DEFAULT_XML_URL\", new StringGetter() {\n      public 
String get() {\n        return meta.getDefaultConfigURL();\n      }\n    } );\n    check( \"SOURCE_TABLE_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getSourceTableName();\n      }\n    } );\n    check( \"SOURCE_MAPPING_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getSourceMappingName();\n      }\n    } );\n    check( \"START_KEY_VALUE\", new StringGetter() {\n      public String get() {\n        return meta.getKeyStartValue();\n      }\n    } );\n    check( \"STOP_KEY_VALUE\", new StringGetter() {\n      public String get() {\n        return meta.getKeyStopValue();\n      }\n    } );\n    check( \"SCANNER_ROW_CACHE_SIZE\", new StringGetter() {\n      public String get() {\n        return meta.getScannerCacheSize();\n      }\n    } );\n    check( \"MATCH_ANY_FILTER\", new BooleanGetter() {\n      public boolean get() {\n        return meta.getMatchAnyFilter();\n      }\n    } );\n\n    check( \"OUTPUT_FIELD_KEY\", new BooleanGetter() {\n      public boolean get() {\n        return meta.getOutputFieldsDefinition().get( 0 ).isKey();\n      }\n    } );\n    check( \"OUTPUT_FIELD_ALIAS\", new StringGetter() {\n      public String get() {\n        return meta.getOutputFieldsDefinition().get( 0 ).getAlias();\n      }\n    } );\n    check( \"OUTPUT_FIELD_COLUMN_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getOutputFieldsDefinition().get( 0 ).getColumnName();\n      }\n    } );\n    check( \"OUTPUT_FIELD_FAMILY\", new StringGetter() {\n      public String get() {\n        return meta.getOutputFieldsDefinition().get( 0 ).getFamily();\n      }\n    } );\n    check( \"OUTPUT_FIELD_TYPE\", new StringGetter() {\n      public String get() {\n        return meta.getOutputFieldsDefinition().get( 0 ).getHbaseType();\n      }\n    } );\n    check( \"OUTPUT_FIELD_FORMAT\", new StringGetter() {\n      public String get() {\n        return meta.getOutputFieldsDefinition().get( 0 
).getFormat();\n      }\n    } );\n\n    check( \"ALIAS\", new StringGetter() {\n      public String get() {\n        return meta.getFiltersDefinition().get( 0 ).getAlias();\n      }\n    } );\n    check( \"FIELD_TYPE\", new StringGetter() {\n      public String get() {\n        return meta.getFiltersDefinition().get( 0 ).getFieldType();\n      }\n    } );\n    check( \"SIGNED_COMPARISON\", new BooleanGetter() {\n      public boolean get() {\n        return meta.getFiltersDefinition().get( 0 ).isSignedComparison();\n      }\n    } );\n    check( \"COMPARISON_VALUE\", new StringGetter() {\n      public String get() {\n        return meta.getFiltersDefinition().get( 0 ).getConstant();\n      }\n    } );\n    check( \"FORMAT\", new StringGetter() {\n      public String get() {\n        return meta.getFiltersDefinition().get( 0 ).getFormat();\n      }\n    } );\n\n    check( \"TABLE_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getTableName();\n      }\n    } );\n    check( \"MAPPING_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingName();\n      }\n    } );\n\n    check( \"MAPPING_ALIAS\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getAlias();\n      }\n    } );\n    check( \"MAPPING_KEY\", new BooleanGetter() {\n      public boolean get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).isKey();\n      }\n    } );\n    check( \"MAPPING_COLUMN_FAMILY\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getColumnFamily();\n      }\n    } );\n    check( \"MAPPING_COLUMN_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getColumnName();\n      }\n    } );\n    check( \"MAPPING_TYPE\", new 
StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getType();\n      }\n    } );\n    check( \"MAPPING_INDEXED_VALUES\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getIndexedValues();\n      }\n    } );\n    skipPropertyTest( \"COMPARISON_TYPE\" );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/input/HBaseInputMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hbase.input;\n\nimport org.apache.commons.io.IOUtils;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.InjectMocks;\nimport org.mockito.Mock;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.big.data.kettle.plugins.hbase.LogInjector;\nimport org.pentaho.big.data.kettle.plugins.hbase.MappingDefinition;\nimport org.pentaho.big.data.kettle.plugins.hbase.NamedClusterLoadSaveUtil;\nimport org.pentaho.big.data.kettle.plugins.hbase.ServiceStatus;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.LoggingBuffer;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.trans.steps.loadsave.MemoryRepository;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.MappingFactory;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterfaceFactory;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.w3c.dom.Document;\nimport org.w3c.dom.Node;\nimport org.xml.sax.InputSource;\nimport org.xml.sax.SAXException;\n\nimport 
javax.imageio.metadata.IIOMetadataNode;\nimport javax.xml.parsers.DocumentBuilder;\nimport javax.xml.parsers.DocumentBuilderFactory;\nimport javax.xml.parsers.ParserConfigurationException;\nimport java.io.IOException;\nimport java.io.StringReader;\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.Mockito.atLeast;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n@RunWith( MockitoJUnitRunner.class )\npublic class HBaseInputMetaTest {\n\n  @InjectMocks HBaseInputMeta hBaseInputMeta;\n  @Mock NamedCluster namedCluster;\n  @Mock NamedClusterServiceLocator namedClusterServiceLocator;\n  @Mock HBaseService hBaseService;\n  @Mock MappingDefinition mappingDefinition;\n  @Mock NamedClusterLoadSaveUtil namedClusterLoadSaveUtil;\n  @Mock IMetaStore metaStore;\n  @Mock NamedClusterService namedClusterService;\n\n  /**\n   * Covers bug BACKLOG-9529.\n   */\n  @Test\n  public void testLogSuccessfulForGetXml() throws Exception {\n    HBaseInputMeta spy = Mockito.spy( hBaseInputMeta );\n    spy.setNamedCluster( namedCluster );\n\n    LoggingBuffer loggingBuffer = LogInjector.setMockForLoggingBuffer();\n\n    Mockito.doThrow( new KettleException( \"Unexpected error occurred\" ) ).when( spy ).applyInjection( any() );\n    spy.getXML();\n    verify( loggingBuffer, atLeast( 1 ) ).addLogggingEvent( any() );\n  }\n\n  /**\n   * Covers bug BACKLOG-9629.\n   */\n  @SuppressWarnings( \"unchecked\" )\n  @Test\n  public void testApplyInjectionDefinitionsExists() throws Exception {\n    HBaseInputMeta hBaseInputMetaSpy = Mockito.spy( hBaseInputMeta );\n    hBaseInputMetaSpy.setNamedCluster( namedCluster );\n    
when( namedClusterServiceLocator.getService( namedCluster, HBaseService.class, null ) ).thenReturn( hBaseService );\n    hBaseInputMetaSpy.setMappingDefinition( mappingDefinition );\n    List list = mock( List.class );\n    hBaseInputMetaSpy.setOutputFieldsDefinition( list );\n    hBaseInputMetaSpy.setFiltersDefinition( list );\n    Mockito.doReturn( list ).when( hBaseInputMetaSpy ).createOutputFieldsDefinition( any(), any() );\n    Mockito.doReturn( list ).when( hBaseInputMetaSpy ).createColumnFiltersFromDefinition( any() );\n    Mockito.doReturn( null ).when( hBaseInputMetaSpy ).getMapping( any(), any() );\n\n    hBaseInputMetaSpy.getXML();\n    verify( hBaseInputMetaSpy, times( 1 ) ).setMapping( any() );\n    verify( hBaseInputMetaSpy, times( 1 ) ).setOutputFields( any() );\n    verify( hBaseInputMetaSpy, times( 1 ) ).setColumnFilters( any() );\n  }\n\n  /**\n   * Covers bug BACKLOG-9629.\n   */\n  @Test\n  public void testApplyInjectionDefinitionsNull() throws Exception {\n    HBaseInputMeta hBaseInputMetaSpy = Mockito.spy( hBaseInputMeta );\n    hBaseInputMetaSpy.setNamedCluster( namedCluster );\n    hBaseInputMetaSpy.setMappingDefinition( null );\n    hBaseInputMetaSpy.setOutputFieldsDefinition( null );\n    hBaseInputMetaSpy.setFiltersDefinition( null );\n\n    hBaseInputMetaSpy.getXML();\n    verify( hBaseInputMetaSpy, times( 0 ) ).setMapping( any() );\n    verify( hBaseInputMetaSpy, times( 0 ) ).getMapping();\n    verify( hBaseInputMetaSpy, times( 0 ) ).setOutputFields( any() );\n    verify( hBaseInputMetaSpy, times( 0 ) ).setColumnFilters( any() );\n  }\n\n  @Test\n  public void testLoadXmlDoesntBubbleUpException() throws Exception {\n    KettleLogStore.init();\n    ClusterInitializationException exception = new ClusterInitializationException( new Exception() );\n    hBaseInputMeta.setNamedCluster( namedCluster );\n    when( namedClusterServiceLocator.getService( namedCluster, HBaseService.class, null ) ).thenThrow( exception );\n    when( 
namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n\n    IIOMetadataNode node = new IIOMetadataNode();\n    IIOMetadataNode child = new IIOMetadataNode( \"disable_wal\" );\n    IIOMetadataNode grandChild = new IIOMetadataNode();\n    grandChild.setNodeValue( \"N\" );\n    child.appendChild( grandChild );\n    node.appendChild( child );\n\n    hBaseInputMeta.loadXML( node, new ArrayList<>(), metaStore );\n\n    ServiceStatus serviceStatus = hBaseInputMeta.getServiceStatus();\n    assertNotNull( serviceStatus );\n    assertFalse( serviceStatus.isOk() );\n    assertEquals( exception, serviceStatus.getException() );\n  }\n\n  @Test\n  public void testLoadXmlServiceStatusOk() throws Exception {\n    KettleLogStore.init();\n    hBaseInputMeta.setNamedCluster( namedCluster );\n    when( namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n\n    IIOMetadataNode node = new IIOMetadataNode();\n    IIOMetadataNode child = new IIOMetadataNode( \"disable_wal\" );\n    IIOMetadataNode grandChild = new IIOMetadataNode();\n    grandChild.setNodeValue( \"N\" );\n    child.appendChild( grandChild );\n    node.appendChild( child );\n\n    hBaseInputMeta.loadXML( node, new ArrayList<>(), metaStore );\n\n    ServiceStatus serviceStatus = hBaseInputMeta.getServiceStatus();\n    assertNotNull( serviceStatus );\n    assertTrue( serviceStatus.isOk() );\n  }\n\n  @Test\n  public void testReadRepDoesntBubbleUpException() throws Exception {\n    KettleLogStore.init();\n    ClusterInitializationException exception = new ClusterInitializationException( new Exception() );\n    hBaseInputMeta.setNamedCluster( namedCluster );\n    when( namedClusterServiceLocator.getService( namedCluster, HBaseService.class, null ) ).thenThrow( exception );\n    when( namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n\n    hBaseInputMeta.readRep( new MemoryRepository(), metaStore, mock( ObjectId.class ), new ArrayList<>() );\n\n    ServiceStatus 
serviceStatus = hBaseInputMeta.getServiceStatus();\n    assertNotNull( serviceStatus );\n    assertFalse( serviceStatus.isOk() );\n    assertEquals( exception, serviceStatus.getException() );\n  }\n\n  @Test\n  public void testReadRepServiceStatusOk() throws Exception {\n    KettleLogStore.init();\n    hBaseInputMeta.setNamedCluster( namedCluster );\n    when( namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n\n    hBaseInputMeta.readRep( new MemoryRepository(), metaStore, mock( ObjectId.class ), new ArrayList<>() );\n\n    ServiceStatus serviceStatus = hBaseInputMeta.getServiceStatus();\n    assertNotNull( serviceStatus );\n    assertTrue( serviceStatus.isOk() );\n  }\n\n  @Test\n  public void testLoadingAELMappingFromStepNode() throws Exception {\n    KettleLogStore.init();\n    hBaseInputMeta.setMapping( null );\n    hBaseInputMeta.setNamedCluster( namedCluster );\n    when( namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n\n    hBaseInputMeta.loadXML( getMappingNode(), new ArrayList<>(), metaStore );\n\n    assertNotNull( hBaseInputMeta.m_mapping );\n  }\n\n  private Node getMappingNode() throws IOException, ParserConfigurationException, SAXException {\n    String xml = IOUtils.toString( getClass().getClassLoader().getResourceAsStream( \"StubMapping.xml\" ) );\n\n    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();\n    DocumentBuilder builder = factory.newDocumentBuilder();\n\n    Document doc = builder.parse( new InputSource( new StringReader( xml ) ) );\n\n    return doc.getDocumentElement();\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/mapping/MappingAdminTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.mapping;\n\nimport com.pentaho.big.data.bundles.impl.shim.hbase.ByteConversionUtilImpl;\nimport com.pentaho.big.data.bundles.impl.shim.hbase.mapping.MappingFactoryImpl;\nimport com.pentaho.big.data.bundles.impl.shim.hbase.meta.HBaseValueMetaInterfaceFactoryImpl;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mock;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.di.core.KettleEnvironment;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.hadoop.shim.api.hbase.ByteConversionUtil;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseConnection;\nimport org.pentaho.hadoop.shim.api.hbase.Result;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.MappingFactory;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBaseDelete;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBasePut;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBaseTable;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBaseTableWriteOperationManager;\nimport org.pentaho.hadoop.shim.api.hbase.table.ResultScanner;\nimport org.pentaho.hadoop.shim.api.hbase.table.ResultScannerBuilder;\nimport org.pentaho.hadoop.shim.api.internal.hbase.HBaseBytesUtilShim;\n\nimport 
java.io.IOException;\nimport java.util.Arrays;\nimport java.util.Comparator;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.NavigableMap;\nimport java.util.Set;\nimport java.util.TreeMap;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNotEquals;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/14/2018.\n */\n@RunWith( MockitoJUnitRunner.class )\npublic class MappingAdminTest {\n\n  private TransMeta transMeta;\n  private BaseStepMeta stepMeta;\n  private StepMeta parentStepMeta;\n  private MappingAdmin mappingAdmin;\n  @Mock\n  private HBaseConnection mockHbaseConnection;\n  @Mock\n  private HBaseTable mockPopulatedMappingTable;\n  @Mock\n  HBaseDelete mockHBaseDelete;\n  @Mock\n  HBasePut mockHBasePut;\n\n  private HBaseBytesUtilShim hBaseBytesUtilShim = new MockHBaseByteConverterUsingJavaByteBuffer();\n  private ByteConversionUtil mockByteConversionUtil = new ByteConversionUtilImpl( hBaseBytesUtilShim );\n\n  private final static String MAPPING_TABLE_NAME = \"pentaho_mappings\";\n\n  @Before\n  public void setUp() throws KettleException {\n    KettleEnvironment.init();\n    transMeta = Mockito.spy( new TransMeta() );\n    stepMeta = Mockito.spy( new BaseStepMeta() );\n    parentStepMeta = Mockito.spy( new StepMeta() );\n    parentStepMeta.setParentTransMeta( transMeta );\n    stepMeta.setParentStepMeta( parentStepMeta );\n\n    when( mockHbaseConnection.getByteConversionUtil() ).thenReturn( mockByteConversionUtil );\n    mappingAdmin = new MappingAdmin( mockHbaseConnection );\n  }\n\n  @Test\n  public void testGetTableNameFromVariable_whenVariableValueExists() {\n\n   
 String expectedTableName = \"hbweblogs\";\n\n    transMeta.setVariable( \"hb_weblogs\", \"hbweblogs\" );\n\n    String tableName = MappingAdmin.getTableNameFromVariable( stepMeta, \"${hb_weblogs}\" );\n\n    assertEquals( expectedTableName, tableName );\n  }\n\n  @Test\n  public void testGetTableNameFromVariable_whenNoVariable() {\n\n    String expectedTableName = \"hbweblogs\";\n    String expectedResult = \"${hb_weblogs}\";\n\n    String tableName = MappingAdmin.getTableNameFromVariable( stepMeta, \"${hb_weblogs}\" );\n\n    assertNotEquals( expectedTableName, tableName );\n    assertEquals( expectedResult, tableName );\n  }\n\n  @Test\n  public void setAndGetMappingTableName() {\n    mappingAdmin.setMappingTableName( \"mappingtbl\" );\n    assertEquals( \"mappingtbl\", mappingAdmin.getMappingTableName() );\n  }\n\n  @Test\n  public void createMappingTable() throws Exception {\n    HBaseTable mockHbaseMappingTable = mock( HBaseTable.class );\n    when( mockHbaseConnection.getTable( \"ns:\" + MAPPING_TABLE_NAME ) ).thenReturn( mockHbaseMappingTable );\n    when( mockHbaseMappingTable.exists() ).thenReturn( false );\n\n    mappingAdmin.createMappingTable( \"ns:tablename\" );\n    verify( mockHbaseMappingTable, times( 1 ) ).create( any(), any() );\n  }\n\n  @Test( expected = IOException.class )\n  public void createMappingTableWhenExists() throws Exception {\n    HBaseTable mockHbaseMappingTable = mock( HBaseTable.class );\n    when( mockHbaseConnection.getTable( \"ns:\" + MAPPING_TABLE_NAME ) ).thenReturn( mockHbaseMappingTable );\n    when( mockHbaseMappingTable.exists() ).thenReturn( true );\n\n    mappingAdmin.createMappingTable( \"ns:tablename\" );\n  }\n\n  @Test\n  public void mappingExists() throws Exception {\n    setupMappingStructure();\n    assertTrue( mappingAdmin.mappingExists( \"populated:table1\", \"map1\" ) );\n  }\n\n  @Test\n  public void testMappingExistsNegative() throws Exception {\n    setupMappingStructure();\n    assertFalse( 
mappingAdmin.mappingExists( \"populated:table1\", \"mapx\" ) );\n  }\n\n  @Test\n  public void getMappedTables() throws Exception {\n    setupMappingStructure();\n\n    Set<String> mappedTables = mappingAdmin.getMappedTables( null );\n    assertEquals( 2, mappedTables.size() );\n    assertTrue( mappedTables.contains( \"populated:table1\" ) );\n    assertTrue( mappedTables.contains( \"populated:table2\" ) );\n  }\n\n  private void setupMappingStructure() throws Exception {\n    when( mockHbaseConnection.listNamespaces() ).thenReturn( Arrays.asList( \"populated\", \"unpopulated\" ) );\n    when( mockHbaseConnection.getTable( \"populated:\" + MAPPING_TABLE_NAME ) ).thenReturn( mockPopulatedMappingTable );\n    when( mockPopulatedMappingTable.exists() ).thenReturn( true );\n    when( mockPopulatedMappingTable.keyExists( \"table1,map1\".getBytes() ) ).thenReturn( true );\n    ResultScannerBuilder mockResultScannerBuilder = mock( ResultScannerBuilder.class );\n    when( mockPopulatedMappingTable.createScannerBuilder( any(), any() ) ).thenReturn( mockResultScannerBuilder );\n    ResultScanner mockResultScanner = mock( ResultScanner.class );\n    when( mockResultScannerBuilder.build() ).thenReturn( mockResultScanner );\n    Result result1 = mock( Result.class );\n    when( result1.getRow() ).thenReturn( \"table1,map1\".getBytes() );\n    Result result2 = mock( Result.class );\n    when( result2.getRow() ).thenReturn( \"table1,map2\".getBytes() );\n    Result result3 = mock( Result.class );\n    when( result3.getRow() ).thenReturn( \"table2,map1\".getBytes() );\n    when( mockResultScanner.next() ).thenReturn( result1, result2, result3, null );\n\n    HBaseTable mockTwoMappingTable = mock( HBaseTable.class );\n    when( mockHbaseConnection.getTable( \"unpopulated:\" + MAPPING_TABLE_NAME ) ).thenReturn( mockTwoMappingTable );\n    when( mockTwoMappingTable.exists() ).thenReturn( false );\n\n    // From here down added for getMapping test\n    NavigableMap<byte[], byte[]> 
keyFamilyMap = new TreeMap<>( new ByteArrayComparator() );\n    keyFamilyMap.put( \"key\".getBytes(), \"String\".getBytes() );\n    when( result1.getFamilyMap( \"key\" ) ).thenReturn( keyFamilyMap );\n\n    NavigableMap<byte[], byte[]> columnFamilyMap = new TreeMap<>( new ByteArrayComparator() );\n    columnFamilyMap.put( \"colFamily,colName1,aliascol1\".getBytes(), \"String\".getBytes() );\n    columnFamilyMap.put( \"colFamily,colName2,aliascol2\".getBytes(), \"Integer\".getBytes() );\n    when( result1.getFamilyMap( \"columns\" ) ).thenReturn( columnFamilyMap );\n\n    HBaseValueMetaInterfaceFactoryImpl hBaseValueMetaInterfaceFactory =\n      new HBaseValueMetaInterfaceFactoryImpl( hBaseBytesUtilShim );\n    when( mockHbaseConnection.getHBaseValueMetaInterfaceFactory() ).thenReturn( hBaseValueMetaInterfaceFactory );\n    MappingFactory mappingFactory = new MappingFactoryImpl( hBaseBytesUtilShim, hBaseValueMetaInterfaceFactory );\n    when( mockHbaseConnection.getMappingFactory() ).thenReturn( mappingFactory );\n\n    // From here down added for deleteMapping test\n    HBaseTableWriteOperationManager mockHBaseTableWriteOperationManager = mock( HBaseTableWriteOperationManager.class );\n    when( mockPopulatedMappingTable.createWriteOperationManager( null ) )\n      .thenReturn( mockHBaseTableWriteOperationManager );\n    when( mockHBaseTableWriteOperationManager.createDelete( \"table1,map1\".getBytes() ) ).thenReturn( mockHBaseDelete );\n\n    // From here down added for putMapping test\n    when( mockHBaseTableWriteOperationManager.createPut( \"table1,map1\".getBytes() ) ).thenReturn( mockHBasePut );\n  }\n\n  @Test\n  public void getMappingNames() throws Exception {\n    setupMappingStructure();\n    List<String> mappingNames = mappingAdmin.getMappingNames( \"populated:table1\" );\n    assertEquals( 2, mappingNames.size() );\n    assertTrue( mappingNames.contains( \"map1\" ) );\n    assertTrue( mappingNames.contains( \"map2\" ) );\n  }\n\n  @Test\n  public void 
getMapping() throws Exception {\n    setupMappingStructure();\n    Mapping mapping = mappingAdmin.getMapping( \"populated:table1\", \"map1\" );\n    assertEquals( \"map1\", mapping.getMappingName() );\n    assertEquals( \"populated:table1\", mapping.getTableName() );\n    assertEquals( \"key\", mapping.getKeyName() );\n    assertEquals( Mapping.KeyType.STRING, mapping.getKeyType() );\n    Map<String, HBaseValueMetaInterface> mappedColumns = mapping.getMappedColumns();\n    assertTrue( mappedColumns.containsKey( \"aliascol1\" ) );\n    assertTrue( mappedColumns.containsKey( \"aliascol2\" ) );\n    assertEquals( \"map1\", mappedColumns.get( \"aliascol1\" ).getMappingName() );\n    assertEquals( \"colFamily\", mappedColumns.get( \"aliascol1\" ).getColumnFamily() );\n    assertEquals( \"colName1\", mappedColumns.get( \"aliascol1\" ).getColumnName() );\n  }\n\n  @Test\n  public void deleteMapping() throws Exception {\n    setupMappingStructure();\n\n\n    Mapping mapping = mappingAdmin.getMapping( \"populated:table1\", \"map1\" );\n    assertNotNull( mapping );\n    mappingAdmin.deleteMapping( mapping );\n    verify( mockHBaseDelete ).execute();\n  }\n\n  @Test\n  public void putMapping() throws Exception {\n    setupMappingStructure();\n    Mapping mapping = mappingAdmin.getMapping( \"populated:table1\", \"map1\" );\n    assertNotNull( mapping );\n\n    mappingAdmin.putMapping( mapping, true );\n    verify( mockHBasePut, times( 1 ) ).createColumnName( \"colFamily\", \"colName1\", \"aliascol1\" );\n    verify( mockHBasePut, times( 1 ) ).createColumnName( \"colFamily\", \"colName2\", \"aliascol2\" );\n    verify( mockHBasePut, times( 1 ) ).createColumnName( \"key\" );\n    verify( mockHBasePut, times( 1 ) ).execute();\n  }\n\n  @Test\n  public void describeMapping() throws Exception {\n    setupMappingStructure();\n    Mapping mapping = mappingAdmin.getMapping( \"populated:table1\", \"map1\" );\n    assertNotNull( mapping );\n\n    String desc = 
mappingAdmin.describeMapping( mapping );\n    assertNotNull( desc );\n    assertTrue( !desc.isEmpty() );\n  }\n\n  @Test\n  public void close() throws Exception {\n    mappingAdmin.close();\n    verify( mockHbaseConnection ).close();\n  }\n\n  @Test\n  public void getConnection() {\n    assertEquals( mockHbaseConnection, mappingAdmin.getConnection() );\n  }\n\n  static class ByteArrayComparator implements Comparator<byte[]> {\n    @Override\n    public int compare( byte[] a, byte[] b ) {\n      if ( a == b ) {\n        return 0;\n      }\n      if ( a == null || b == null ) {\n        throw new NullPointerException();\n      }\n\n      int length = a.length;\n      int cmp;\n      if ( ( cmp = Integer.compare( length, b.length ) ) != 0 ) {\n        return cmp;\n      }\n\n      for ( int i = 0; i < length; i++ ) {\n        if ( ( cmp = Byte.compare( a[ i ], b[ i ] ) ) != 0 ) {\n          return cmp;\n        }\n      }\n\n      return 0;\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/mapping/MappingUtilsTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hbase.mapping;\n\nimport org.junit.Test;\nimport org.mockito.invocation.InvocationOnMock;\nimport org.mockito.stubbing.Answer;\nimport org.pentaho.big.data.kettle.plugins.hbase.HBaseConnectionException;\nimport org.pentaho.big.data.kettle.plugins.hbase.MappingDefinition;\nimport org.pentaho.big.data.kettle.plugins.hbase.MappingDefinition.MappingColumn;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.hbase.ByteConversionUtil;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseConnection;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.MappingFactory;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterfaceFactory;\n\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertSame;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.fail;\nimport static 
org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.anyInt;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.ArgumentMatchers.nullable;\nimport static org.mockito.ArgumentMatchers.same;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * @author Tatsiana_Kasiankova\n */\npublic class MappingUtilsTest {\n\n  private static final String STRING_TYPE = \"String\";\n\n  private static final String TEST_TABLE_NAME = \"TEST_TABLE_NAME\";\n\n  private static final String TEST_MAPPING_NAME = \"TEST_MAPPING_NAME\";\n\n  private static final String ALIAS_STRING = \"alias\";\n\n  private static final String VALUE_STRING = \"value\";\n\n  private static final String KEY_STRING = \"key\";\n\n  private static final int FAMILIY_ARG_INDEX = 0;\n\n  private static final int NAME_ARG_INDEX = 1;\n\n  private static final int ALIAS_ARG_INDEX = 2;\n\n  /**\n   *\n   */\n  private static final String UNABLE_TO_CONNECT_TO_H_BASE = \"Unable to connect to HBase\";\n  private ConfigurationProducer cProducerMock = mock( ConfigurationProducer.class );\n  private HBaseConnection hbConnectionMock = mock( HBaseConnection.class );\n\n  @Test\n  public void testGetMappingAdmin_NoException() {\n    try {\n      when( cProducerMock.getHBaseConnection() ).thenReturn( hbConnectionMock );\n      MappingAdmin mappingAdmin = MappingUtils.getMappingAdmin( cProducerMock );\n      assertNotNull( mappingAdmin );\n      assertSame( hbConnectionMock, mappingAdmin.getConnection() );\n      verify( hbConnectionMock ).checkHBaseAvailable();\n    } catch ( Exception e ) {\n      fail( \"No exception expected but it occurs!\" );\n    }\n  }\n\n  @Test\n  public void testGetMappingAdmin_ClusterInitializationExceptionToHBaseConnectionException() throws Exception {\n    ClusterInitializationException clusterInitializationException =\n      new ClusterInitializationException( new 
Exception( \"ClusterInitializationException\" ) );\n    try {\n      when( cProducerMock.getHBaseConnection() ).thenThrow( clusterInitializationException );\n      MappingUtils.getMappingAdmin( cProducerMock );\n      fail( \"Expected HBaseConnectionException but it did not occur!\" );\n    } catch ( HBaseConnectionException e ) {\n      assertEquals( UNABLE_TO_CONNECT_TO_H_BASE, e.getMessage() );\n      assertSame( clusterInitializationException, e.getCause() );\n    }\n  }\n\n  @Test\n  public void testGetMappingAdmin_IOExceptionToHBaseConnectionException() throws Exception {\n    IOException ioException = new IOException( \"IOException\" );\n    try {\n      when( cProducerMock.getHBaseConnection() ).thenThrow( ioException );\n      MappingUtils.getMappingAdmin( cProducerMock );\n      fail( \"Expected HBaseConnectionException but it did not occur!\" );\n    } catch ( HBaseConnectionException e ) {\n      assertEquals( UNABLE_TO_CONNECT_TO_H_BASE, e.getMessage() );\n      assertSame( ioException, e.getCause() );\n    }\n  }\n\n  @Test\n  public void testIsTupleMappingColumn() {\n    for ( Mapping.TupleMapping tupleColumn : Mapping.TupleMapping.values() ) {\n      boolean result = MappingUtils.isTupleMappingColumn( tupleColumn.toString() );\n      assertTrue( result );\n    }\n  }\n\n  @Test\n  public void testIsTupleMappingColumn_NotTupleColumn() {\n    boolean result = MappingUtils.isTupleMappingColumn( \"NOT_A_TUPLE_COLUMN\" );\n    assertFalse( result );\n  }\n\n  @Test\n  public void testIsTupleMapping() {\n    MappingDefinition tupleMappingDefinition = new MappingDefinition();\n    tupleMappingDefinition.setMappingColumns( buildTupleMapping() );\n\n    boolean result = MappingUtils.isTupleMapping( tupleMappingDefinition );\n    assertTrue( result );\n  }\n\n  @Test\n  public void testIsTupleMapping_NoTupleMapping() {\n    MappingDefinition tupleMappingDefinition = new MappingDefinition();\n    tupleMappingDefinition.setMappingColumns( 
buildNoTupleMapping() );\n\n    boolean result = MappingUtils.isTupleMapping( tupleMappingDefinition );\n    assertFalse( result );\n  }\n\n  @Test\n  public void testGetMappingAdmin() throws IOException {\n    HBaseService hBaseService = mock( HBaseService.class );\n    HBaseConnection hBaseConnection = mock( HBaseConnection.class );\n    when(\n      hBaseService.getHBaseConnection( any( VariableSpace.class ), anyString(), anyString(),\n        any( LogChannelInterface.class ) ) ).thenReturn( hBaseConnection );\n    VariableSpace variableSpace = mock( VariableSpace.class );\n\n    MappingUtils.getMappingAdmin( hBaseService, variableSpace, \"SITE_CONFIG\", \"DEFAULT_CONFIG\" );\n  }\n\n  @Test\n  public void testBuildNonKeyValueMeta() throws KettleException {\n    HBaseService hBaseService = mock( HBaseService.class );\n    ByteConversionUtil byteConversionUtil = mock( ByteConversionUtil.class );\n    when( hBaseService.getByteConversionUtil() ).thenReturn( byteConversionUtil );\n    HBaseValueMetaInterfaceFactory valueMetaInterfaceFactory = mock( HBaseValueMetaInterfaceFactory.class );\n    when( hBaseService.getHBaseValueMetaInterfaceFactory() ).thenReturn( valueMetaInterfaceFactory );\n    HBaseValueMetaInterface valueMeta = mock( HBaseValueMetaInterface.class );\n    when( valueMeta.isString() ).thenReturn( true );\n    when(\n      valueMetaInterfaceFactory.createHBaseValueMetaInterface( same( \"FAMILY\" ), same( \"COLUMN_NAME\" ),\n        same( \"ALIAS\" ), anyInt(), anyInt(), anyInt() ) ).thenReturn( valueMeta );\n\n    HBaseValueMetaInterface column =\n      MappingUtils.buildNonKeyValueMeta( \"ALIAS\", \"FAMILY\", \"COLUMN_NAME\", STRING_TYPE, \"INDEXED_VALS\", hBaseService );\n\n    assertNotNull( column );\n    verify( valueMeta ).setHBaseTypeFromString( STRING_TYPE );\n    verify( valueMeta ).setStorageType( ValueMetaInterface.STORAGE_TYPE_INDEXED );\n  }\n\n  @Test( expected = KettleException.class )\n  public void 
testGetMapping_UndefinedMappingName() throws Exception {\n    HBaseService hBaseService = mock( HBaseService.class );\n    MappingDefinition mappingDefinition = buildMappingDefinitionForGetMapping();\n    mappingDefinition.setMappingName( \"\" );\n    MappingUtils.getMapping( mappingDefinition, hBaseService );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testGetMapping_UndefinedColumns() throws Exception {\n    HBaseService hBaseService = mock( HBaseService.class );\n    MappingDefinition mappingDefinition = buildMappingDefinitionForGetMapping();\n    mappingDefinition.setMappingColumns( Collections.<MappingColumn>emptyList() );\n    MappingUtils.getMapping( mappingDefinition, hBaseService );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testGetMapping_NoKeyDefined() throws Exception {\n    HBaseService hBaseService = mockHBaseService();\n    MappingUtils.getMapping( buildMappingDefinitionWithoutKey(), hBaseService );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testGetMapping_TwoKeysDefined() throws Exception {\n    HBaseService hBaseService = mockHBaseService();\n    MappingUtils.getMapping( buildMappingDefinitionWithTwoKeys(), hBaseService );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testGetMapping_keyColumnWithoutAlias() throws Exception {\n    HBaseService hBaseService = mockHBaseService();\n    MappingDefinition mappingDefinition = createMappingDefinition();\n    MappingColumn keyColumn = buildKeyColumn( null, STRING_TYPE );\n    mappingDefinition.setMappingColumns( Collections.singletonList( keyColumn ) );\n\n    MappingUtils.getMapping( mappingDefinition, hBaseService );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testGetMapping_keyColumnWithoutType() throws Exception {\n    HBaseService hBaseService = mockHBaseService();\n    MappingDefinition mappingDefinition = createMappingDefinition();\n    MappingColumn keyColumn = buildKeyColumn( 
KEY_STRING, null );\n    mappingDefinition.setMappingColumns( Collections.singletonList( keyColumn ) );\n\n    MappingUtils.getMapping( mappingDefinition, hBaseService );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testGetMapping_columnWithoutFamilyName() throws Exception {\n    HBaseService hBaseService = mockHBaseService();\n    MappingDefinition mappingDefinition = createMappingDefinition();\n    List<MappingColumn> columns = new ArrayList<MappingColumn>();\n    MappingColumn keyColumn = buildKeyColumn( KEY_STRING, STRING_TYPE );\n    columns.add( keyColumn );\n    MappingColumn otherColumn = buildNoKeyColumn( ALIAS_STRING, null, \"columnName\", STRING_TYPE );\n    columns.add( otherColumn );\n    mappingDefinition.setMappingColumns( columns );\n\n    MappingUtils.getMapping( mappingDefinition, hBaseService );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testGetMapping_columnWithoutColumnName() throws Exception {\n    HBaseService hBaseService = mockHBaseService();\n    MappingDefinition mappingDefinition = createMappingDefinition();\n    List<MappingColumn> columns = new ArrayList<MappingColumn>();\n    MappingColumn keyColumn = buildKeyColumn( KEY_STRING, STRING_TYPE );\n    columns.add( keyColumn );\n    MappingColumn otherColumn = buildNoKeyColumn( ALIAS_STRING, \"family\", null, STRING_TYPE );\n    columns.add( otherColumn );\n    mappingDefinition.setMappingColumns( columns );\n\n    MappingUtils.getMapping( mappingDefinition, hBaseService );\n  }\n\n  @Test\n  public void testGetMapping() throws Exception {\n    HBaseService hBaseService = mock( HBaseService.class );\n    HBaseValueMetaInterfaceFactory valueMetaInterfaceFactory = mock( HBaseValueMetaInterfaceFactory.class );\n    when( hBaseService.getHBaseValueMetaInterfaceFactory() ).thenReturn( valueMetaInterfaceFactory );\n    HBaseValueMetaInterface keyValueMeta = mock( HBaseValueMetaInterface.class );\n    when( keyValueMeta.isString() ).thenReturn( true 
);\n    when(\n      valueMetaInterfaceFactory.createHBaseValueMetaInterface( nullable( String.class ), nullable( String.class ), same( KEY_STRING ),\n        anyInt(), anyInt(), anyInt() ) ).thenReturn( keyValueMeta );\n\n    HBaseValueMetaInterface valueValueMeta = mock( HBaseValueMetaInterface.class );\n    when( valueValueMeta.isString() ).thenReturn( true );\n    when(\n      valueMetaInterfaceFactory.createHBaseValueMetaInterface( nullable( String.class ), nullable( String.class ), same( VALUE_STRING ),\n        anyInt(), anyInt(), anyInt() ) ).thenReturn( valueValueMeta );\n\n    MappingFactory mappingFactory = mock( MappingFactory.class );\n    when( hBaseService.getMappingFactory() ).thenReturn( mappingFactory );\n    Mapping mapping = mock( Mapping.class );\n    when( mappingFactory.createMapping( TEST_TABLE_NAME, TEST_MAPPING_NAME ) ).thenReturn( mapping );\n\n    Mapping result = MappingUtils.getMapping( buildMappingDefinitionForGetMapping(), hBaseService );\n    assertNotNull( result );\n\n    verify( mapping ).setKeyName( KEY_STRING );\n    verify( mapping ).addMappedColumn( valueValueMeta, false );\n  }\n\n  private static HBaseService mockHBaseService() {\n    HBaseService hBaseService = mock( HBaseService.class );\n    HBaseValueMetaInterfaceFactory valueMetaInterfaceFactory = mock( HBaseValueMetaInterfaceFactory.class );\n    when( hBaseService.getHBaseValueMetaInterfaceFactory() ).thenReturn( valueMetaInterfaceFactory );\n    when(\n      valueMetaInterfaceFactory.createHBaseValueMetaInterface( nullable( String.class ), nullable( String.class ), anyString(), anyInt(),\n        anyInt(), anyInt() ) ).thenAnswer( new Answer<HBaseValueMetaInterface>() {\n\n          @Override\n          public HBaseValueMetaInterface answer( InvocationOnMock invocation ) throws Throwable {\n            Object[] args = invocation.getArguments();\n            String columnFamily = (String) args[ FAMILIY_ARG_INDEX ];\n            String columnName = (String) args[ 
NAME_ARG_INDEX ];\n            String alias = (String) args[ ALIAS_ARG_INDEX ];\n            HBaseValueMetaInterface valueMeta = mock( HBaseValueMetaInterface.class );\n            when( valueMeta.getAlias() ).thenReturn( alias );\n            when( valueMeta.getColumnFamily() ).thenReturn( columnFamily );\n            when( valueMeta.getColumnName() ).thenReturn( columnName );\n            return valueMeta;\n          }\n        } );\n\n    MappingFactory mappingFactory = mock( MappingFactory.class );\n    when( hBaseService.getMappingFactory() ).thenReturn( mappingFactory );\n    Mapping mapping = mock( Mapping.class );\n    when( mappingFactory.createMapping( TEST_TABLE_NAME, TEST_MAPPING_NAME ) ).thenReturn( mapping );\n    return hBaseService;\n  }\n\n  private static MappingDefinition buildMappingDefinitionWithoutKey() {\n    MappingDefinition mappingDefinition = createMappingDefinition();\n    MappingColumn valueColumn = new MappingColumn();\n    valueColumn.setAlias( VALUE_STRING );\n    valueColumn.setType( STRING_TYPE );\n    valueColumn.setColumnFamily( \"family\" );\n    valueColumn.setColumnName( \"name\" );\n    mappingDefinition.setMappingColumns( Collections.singletonList( valueColumn ) );\n    return mappingDefinition;\n  }\n\n  private static MappingDefinition buildMappingDefinitionWithTwoKeys() {\n    MappingDefinition mappingDefinition = createMappingDefinition();\n    List<MappingColumn> mappingColumns = new ArrayList<MappingColumn>();\n    MappingColumn keyColumn = new MappingColumn();\n    keyColumn.setAlias( KEY_STRING );\n    keyColumn.setKey( true );\n    keyColumn.setType( STRING_TYPE );\n    mappingColumns.add( keyColumn );\n\n    MappingColumn keyColumn2 = new MappingColumn();\n    keyColumn2.setAlias( \"key2\" );\n    keyColumn2.setKey( true );\n    keyColumn2.setType( STRING_TYPE );\n    mappingColumns.add( keyColumn2 );\n\n    mappingDefinition.setMappingColumns( mappingColumns );\n    return mappingDefinition;\n  }\n\n  private 
static MappingDefinition buildMappingDefinitionForGetMapping() {\n    MappingDefinition mappingDefinition = createMappingDefinition();\n    List<MappingColumn> mappingColumns = new ArrayList<MappingColumn>();\n    MappingColumn keyColumn = buildKeyColumn( KEY_STRING, STRING_TYPE );\n    mappingColumns.add( keyColumn );\n\n    MappingColumn valueColumn = buildNoKeyColumn( VALUE_STRING, \"family\", \"name\", STRING_TYPE );\n    mappingColumns.add( valueColumn );\n    mappingDefinition.setMappingColumns( mappingColumns );\n    return mappingDefinition;\n  }\n\n  private static MappingColumn buildKeyColumn( String alias, String type ) {\n    MappingColumn keyColumn = new MappingColumn();\n    keyColumn.setAlias( alias );\n    keyColumn.setKey( true );\n    keyColumn.setType( type );\n    return keyColumn;\n  }\n\n  public static MappingColumn buildNoKeyColumn( String alias, String family, String name, String type ) {\n    MappingColumn valueColumn = new MappingColumn();\n    valueColumn.setAlias( alias );\n    valueColumn.setType( type );\n    valueColumn.setColumnFamily( family );\n    valueColumn.setColumnName( name );\n    return valueColumn;\n  }\n\n  private static MappingDefinition createMappingDefinition() {\n    MappingDefinition mappingDefinition = new MappingDefinition();\n    mappingDefinition.setTableName( TEST_TABLE_NAME );\n    mappingDefinition.setMappingName( TEST_MAPPING_NAME );\n    return mappingDefinition;\n  }\n\n  private static List<MappingColumn> buildTupleMapping() {\n    List<MappingColumn> mappingColumns = new ArrayList<MappingColumn>();\n    MappingColumn keyColumn = new MappingColumn();\n    keyColumn.setAlias( \"KEY\" );\n    mappingColumns.add( keyColumn );\n    MappingColumn familyColumn = new MappingColumn();\n    familyColumn.setAlias( \"Family\" );\n    mappingColumns.add( familyColumn );\n    MappingColumn columnColumn = new MappingColumn();\n    columnColumn.setAlias( \"Column\" );\n    mappingColumns.add( columnColumn );\n   
 MappingColumn valueColumn = new MappingColumn();\n    valueColumn.setAlias( \"Value\" );\n    mappingColumns.add( valueColumn );\n    MappingColumn timestampColumn = new MappingColumn();\n    timestampColumn.setAlias( \"Timestamp\" );\n    mappingColumns.add( timestampColumn );\n    return mappingColumns;\n  }\n\n  private static List<MappingColumn> buildNoTupleMapping() {\n    List<MappingColumn> mappingColumns = new ArrayList<MappingColumn>();\n    MappingColumn keyColumn = new MappingColumn();\n    keyColumn.setAlias( KEY_STRING );\n    mappingColumns.add( keyColumn );\n    MappingColumn valueColumn = new MappingColumn();\n    valueColumn.setAlias( VALUE_STRING );\n    mappingColumns.add( valueColumn );\n    return mappingColumns;\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/mapping/MockHBaseByteConverterUsingJavaByteBuffer.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.mapping;\n\nimport org.pentaho.hadoop.shim.api.internal.hbase.HBaseBytesUtilShim;\n\nimport java.nio.ByteBuffer;\n\n/**\n * @author Vasilina Terehova\n */\n\npublic class MockHBaseByteConverterUsingJavaByteBuffer implements HBaseBytesUtilShim {\n\n  @Override public int getSizeOfFloat() {\n    return Float.SIZE / Byte.SIZE;\n  }\n\n  @Override public int getSizeOfDouble() {\n    return Double.SIZE / Byte.SIZE;\n  }\n\n  @Override public int getSizeOfInt() {\n    return Integer.SIZE / Byte.SIZE;\n  }\n\n  @Override public int getSizeOfLong() {\n    return Long.SIZE / Byte.SIZE;\n  }\n\n  @Override public int getSizeOfShort() {\n    return Short.SIZE / Byte.SIZE;\n  }\n\n  @Override public int getSizeOfByte() {\n    return 1;\n  }\n\n  @Override public byte[] toBytes( String aString ) {\n    return aString.getBytes();\n  }\n\n  @Override public byte[] toBytes( int anInt ) {\n    return ByteBuffer.allocate( getSizeOfInt() ).putInt( anInt ).array();\n  }\n\n  @Override public byte[] toBytes( long aLong ) {\n    return ByteBuffer.allocate( getSizeOfLong() ).putLong( aLong ).array();\n  }\n\n  @Override public byte[] toBytes( float aFloat ) {\n    return ByteBuffer.allocate( getSizeOfFloat() ).putFloat( aFloat ).array();\n  }\n\n  @Override public byte[] toBytes( double aDouble ) {\n    return ByteBuffer.allocate( getSizeOfDouble() ).putDouble( aDouble ).array();\n  }\n\n  @Override public byte[] toBytesBinary( String value ) {\n    return value.getBytes();\n  }\n\n  @Override public String toString( byte[] 
value ) {\n    return new String( value );\n  }\n\n  @Override public long toLong( byte[] value ) {\n    return ByteBuffer.wrap( value ).getLong();\n  }\n\n  @Override public int toInt( byte[] value ) {\n    return ByteBuffer.wrap( value ).getInt();\n  }\n\n  @Override public float toFloat( byte[] value ) {\n    return ByteBuffer.wrap( value ).getFloat();\n  }\n\n  @Override public double toDouble( byte[] value ) {\n    return ByteBuffer.wrap( value ).getDouble();\n  }\n\n  @Override public short toShort( byte[] value ) {\n    return ByteBuffer.wrap( value ).getShort();\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/output/HBaseOutputMetaInjectionTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.output;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.Mockito;\nimport org.pentaho.big.data.kettle.plugins.hbase.NamedClusterLoadSaveUtil;\nimport org.pentaho.di.core.injection.BaseMetadataInjectionTest;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\npublic class HBaseOutputMetaInjectionTest extends BaseMetadataInjectionTest<HBaseOutputMeta> {\n\n  @Before\n  public void setup() {\n    NamedClusterService namedClusterService = Mockito.mock( NamedClusterService.class );\n    NamedClusterServiceLocator namedClusterServiceLocator = Mockito.mock( NamedClusterServiceLocator.class );\n    RuntimeTestActionService runtimeTestActionService = Mockito.mock( RuntimeTestActionService.class );\n    RuntimeTester runtimeTester = Mockito.mock( RuntimeTester.class );\n    MetastoreLocator metaStore = Mockito.mock( MetastoreLocator.class );\n\n    setup( new HBaseOutputMeta( namedClusterService, namedClusterServiceLocator, runtimeTestActionService, runtimeTester, new NamedClusterLoadSaveUtil(),\n      metaStore ) );\n  }\n\n  @Test\n  public void test() throws Exception {\n    check( \"HBASE_SITE_XML_URL\", new StringGetter() {\n      public String get() {\n        return meta.getCoreConfigURL();\n      }\n    } );\n    check( 
\"HBASE_DEFAULT_XML_URL\", new StringGetter() {\n      public String get() {\n        return meta.getDefaultConfigURL();\n      }\n    } );\n    check( \"TARGET_TABLE_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getTargetTableName();\n      }\n    } );\n    check( \"TARGET_MAPPING_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getTargetMappingName();\n      }\n    } );\n    check( \"DELETE_ROW_KEY\", new BooleanGetter()  {\n      @Override\n      public boolean get() {\n        return meta.getDeleteRowKey();\n      }\n    } );\n    check( \"DISABLE_WRITE_TO_WAL\", new BooleanGetter() {\n      public boolean get() {\n        return meta.getDisableWriteToWAL();\n      }\n    } );\n    check( \"WRITE_BUFFER_SIZE\", new StringGetter() {\n      public String get() {\n        return meta.getWriteBufferSize();\n      }\n    } );\n\n    check( \"TABLE_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getTableName();\n      }\n    } );\n    check( \"MAPPING_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingName();\n      }\n    } );\n\n    check( \"MAPPING_ALIAS\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getAlias();\n      }\n    } );\n    check( \"MAPPING_KEY\", new BooleanGetter() {\n      public boolean get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).isKey();\n      }\n    } );\n    check( \"MAPPING_COLUMN_FAMILY\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getColumnFamily();\n      }\n    } );\n    check( \"MAPPING_COLUMN_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getColumnName();\n      }\n    } );\n    check( 
\"MAPPING_TYPE\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getType();\n      }\n    } );\n    check( \"MAPPING_INDEXED_VALUES\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getIndexedValues();\n      }\n    } );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/output/HBaseOutputMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.output;\n\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.InjectMocks;\nimport org.mockito.Mock;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.big.data.kettle.plugins.hbase.LogInjector;\nimport org.pentaho.big.data.kettle.plugins.hbase.MappingDefinition;\nimport org.pentaho.big.data.kettle.plugins.hbase.NamedClusterLoadSaveUtil;\nimport org.pentaho.big.data.kettle.plugins.hbase.ServiceStatus;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.logging.LoggingBuffer;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.steps.loadsave.MemoryRepository;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.hbase.HBaseService;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.MappingFactory;\nimport org.pentaho.metastore.api.IMetaStore;\nimport 
org.pentaho.metastore.locator.api.MetastoreLocator;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\nimport javax.imageio.metadata.IIOMetadataNode;\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static org.hamcrest.core.Is.is;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertThat;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.Mockito.atLeast;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n@RunWith( MockitoJUnitRunner.class )\npublic class HBaseOutputMetaTest {\n\n  @Mock NamedClusterService namedClusterService;\n  @Mock NamedClusterServiceLocator namedClusterServiceLocator;\n  @Mock RuntimeTestActionService runtimeTestActionService;\n  @Mock RuntimeTester runtimeTester;\n  @Mock NamedClusterLoadSaveUtil namedClusterLoadSaveUtil;\n  @Mock NamedCluster namedCluster;\n  @Mock MetastoreLocator metastoreLocatorOsgi;\n\n  @Mock Repository rep;\n  @Mock IMetaStore metaStore;\n  @Mock ObjectId id_step;\n  @Mock HBaseService hBaseService;\n  @Mock MappingDefinition mappingDefinition;\n\n  List<DatabaseMeta> databases = new ArrayList<>();\n\n  @InjectMocks HBaseOutputMeta hBaseOutputMeta;\n\n  @Test\n  public void testReadRepSetsNamedCluster() throws Exception {\n    when( namedClusterLoadSaveUtil.loadClusterConfig( any(), any(), any(), any(), any(), any() ) )\n      .thenReturn( namedCluster );\n    when( namedClusterServiceLocator.getService( namedCluster, HBaseService.class, null ) ).thenReturn( hBaseService );\n    when( hBaseService.getMappingFactory() )\n      .thenReturn( mock( MappingFactory.class ) );\n    Mapping mapping = mock( Mapping.class );\n    when( 
mapping.readRep( rep, id_step ) ).thenReturn( true );\n    when( hBaseService.getMappingFactory().createMapping() ).thenReturn( mapping );\n\n    hBaseOutputMeta.readRep( rep, metaStore, id_step, databases );\n    assertThat( hBaseOutputMeta.getNamedCluster(), is( namedCluster ) );\n    assertThat( hBaseOutputMeta.getMapping(), is( mapping ) );\n  }\n\n  /**\n   * actual for bug BACKLOG-9529\n   */\n  @Test\n  public void testLogSuccessfulForGetXml() throws Exception {\n    HBaseOutputMeta hBaseOutputMetaSpy = Mockito.spy( this.hBaseOutputMeta );\n    Mockito.doThrow( new KettleException( \"Unexpected error occured\" ) ).when( hBaseOutputMetaSpy )\n      .applyInjection( any() );\n\n    LoggingBuffer loggingBuffer = LogInjector.setMockForLoggingBuffer();\n    hBaseOutputMetaSpy.getXML();\n    verify( loggingBuffer, atLeast( 1 ) ).addLogggingEvent( any() );\n  }\n\n  /**\n   * actual for bug BACKLOG-9629\n   */\n  @Test\n  public void testApplyInjectionDefinitionExists() throws Exception {\n    HBaseOutputMeta hBaseOutputMetaSpy = Mockito.spy( this.hBaseOutputMeta );\n    when( namedClusterServiceLocator.getService( namedCluster, HBaseService.class, null ) ).thenReturn( hBaseService );\n    hBaseOutputMetaSpy.setMappingDefinition( mappingDefinition );\n    hBaseOutputMetaSpy.setNamedCluster( namedCluster );\n    Mockito.doReturn( null ).when( hBaseOutputMetaSpy ).getMapping( any(), any() );\n\n    hBaseOutputMetaSpy.getXML();\n    verify( hBaseOutputMetaSpy, times( 1 ) ).setMapping( any() );\n  }\n\n  /**\n   * actual for bug BACKLOG-9629\n   */\n  @Test\n  public void testApplyInjectionDefinitionNull() throws Exception {\n    HBaseOutputMeta hBaseOutputMetaSpy = Mockito.spy( this.hBaseOutputMeta );\n    hBaseOutputMetaSpy.setMappingDefinition( null );\n    hBaseOutputMetaSpy.setNamedCluster( namedCluster );\n\n    hBaseOutputMetaSpy.getXML();\n    verify( hBaseOutputMetaSpy, times( 0 ) ).getMapping( any(), any() );\n    verify( hBaseOutputMetaSpy, times( 0 ) 
).setMapping( any() );\n  }\n\n  @Test\n  public void testLoadXmlDoesntBubleUpException() throws Exception {\n    KettleLogStore.init();\n    ClusterInitializationException exception = new ClusterInitializationException( new Exception() );\n    hBaseOutputMeta.setNamedCluster( namedCluster );\n    when( namedClusterServiceLocator.getService( namedCluster, HBaseService.class, null ) ).thenThrow( exception );\n    when( namedClusterLoadSaveUtil.loadClusterConfig( any(), any(), any(), any(), any(), any() ) )\n      .thenReturn( namedCluster );\n\n    IIOMetadataNode node = new IIOMetadataNode();\n    IIOMetadataNode child = new IIOMetadataNode( \"disable_wal\" );\n    IIOMetadataNode grandChild = new IIOMetadataNode();\n    grandChild.setNodeValue( \"N\" );\n    child.appendChild( grandChild );\n    node.appendChild( child );\n\n    hBaseOutputMeta.loadXML( node, new ArrayList<>(), metaStore );\n\n    ServiceStatus serviceStatus = hBaseOutputMeta.getServiceStatus();\n    assertNotNull( serviceStatus );\n    assertFalse( serviceStatus.isOk() );\n    assertEquals( exception, serviceStatus.getException() );\n  }\n\n  @Test\n  public void testLoadXmlServiceStatusOk() throws Exception {\n    KettleLogStore.init();\n    hBaseOutputMeta.setNamedCluster( namedCluster );\n    when( namedClusterServiceLocator.getService( namedCluster, HBaseService.class, null ) ).thenReturn( hBaseService );\n    when( namedClusterLoadSaveUtil.loadClusterConfig( any(), any(), any(), any(), any(), any() ) )\n      .thenReturn( namedCluster );\n\n    IIOMetadataNode node = new IIOMetadataNode();\n    IIOMetadataNode child = new IIOMetadataNode( \"disable_wal\" );\n    IIOMetadataNode grandChild = new IIOMetadataNode();\n    grandChild.setNodeValue( \"N\" );\n    child.appendChild( grandChild );\n    node.appendChild( child );\n\n    hBaseOutputMeta.loadXML( node, new ArrayList<>(), metaStore );\n\n    ServiceStatus serviceStatus = hBaseOutputMeta.getServiceStatus();\n    assertNotNull( 
serviceStatus );\n    assertTrue( serviceStatus.isOk() );\n  }\n\n  @Test\n  public void testReadRepDoesntBubleUpException() throws Exception {\n    KettleLogStore.init();\n    ClusterInitializationException exception = new ClusterInitializationException( new Exception() );\n    hBaseOutputMeta.setNamedCluster( namedCluster );\n    when( namedClusterServiceLocator.getService( namedCluster, HBaseService.class, null ) ).thenThrow( exception );\n    when( namedClusterLoadSaveUtil.loadClusterConfig( any(), any(), any(), any(), any(), any() ) )\n      .thenReturn( namedCluster );\n\n    hBaseOutputMeta.readRep( new MemoryRepository(), metaStore, mock( ObjectId.class ), new ArrayList<>() );\n\n    ServiceStatus serviceStatus = hBaseOutputMeta.getServiceStatus();\n    assertNotNull( serviceStatus );\n    assertFalse( serviceStatus.isOk() );\n    assertEquals( exception, serviceStatus.getException() );\n  }\n\n  @Test\n  public void testReadRepServiceStatusOk() throws Exception {\n    KettleLogStore.init();\n    hBaseOutputMeta.setNamedCluster( namedCluster );\n    when( namedClusterServiceLocator.getService( namedCluster, HBaseService.class, null ) ).thenReturn( hBaseService );\n    when( namedClusterLoadSaveUtil.loadClusterConfig( any(), any(), any(), any(), any(), any() ) )\n      .thenReturn( namedCluster );\n\n    hBaseOutputMeta.readRep( new MemoryRepository(), metaStore, mock( ObjectId.class ), new ArrayList<>() );\n\n    ServiceStatus serviceStatus = hBaseOutputMeta.getServiceStatus();\n    assertNotNull( serviceStatus );\n    assertTrue( serviceStatus.isOk() );\n  }\n\n  @Test\n  public void testInjectWithEmbeddedMetastoreProviderKey() throws Exception {\n    KettleLogStore.init();\n    hBaseOutputMeta.setNamedCluster( namedCluster );\n    when( namedCluster.getName() ).thenReturn( \"ClusterName\" );\n    NamedCluster embeddedNamedCluster = mock( NamedCluster.class );\n    when( embeddedNamedCluster.getShimIdentifier() ).thenReturn( \"shim\" );\n    StepMeta 
mockStepMeta = mock( StepMeta.class );\n    TransMeta mockTransMeta = mock( TransMeta.class );\n    when( mockTransMeta.getEmbeddedMetastoreProviderKey() ).thenReturn( \"key\" );\n    hBaseOutputMeta.setParentStepMeta( mockStepMeta );\n    when( mockStepMeta.getParentTransMeta() ).thenReturn( mockTransMeta );\n    when( metastoreLocatorOsgi.getExplicitMetastore( \"key\" ) ).thenReturn( metaStore );\n    when( namedClusterService.getNamedClusterByName( \"ClusterName\", metaStore ) ).thenReturn( embeddedNamedCluster );\n\n    hBaseOutputMeta.applyInjection( new Variables() );\n    assertEquals( embeddedNamedCluster, hBaseOutputMeta.getNamedCluster() );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/output/KettleRowToHBaseTupleTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.output;\n\nimport org.junit.Assert;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.big.data.kettle.plugins.hbase.mapping.MappingUtils;\nimport org.pentaho.big.data.kettle.plugins.hbase.output.KettleRowToHBaseTuple.FieldException;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaString;\nimport org.pentaho.hadoop.shim.api.hbase.ByteConversionUtil;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping.KeyType;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping.TupleMapping;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBasePut;\nimport org.pentaho.hadoop.shim.api.hbase.table.HBaseTableWriteOperationManager;\n\nimport java.util.HashMap;\nimport java.util.Map;\n\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n@RunWith( MockitoJUnitRunner.class )\npublic class KettleRowToHBaseTupleTest {\n\n  private Mapping tupleMapping;\n\n  @Before\n  public void setup() {\n    tupleMapping = Mockito.mock( Mapping.class );\n    when( tupleMapping.getKeyName() ).thenReturn( 
Mapping.TupleMapping.KEY.toString() );\n    when( tupleMapping.getKeyType() ).thenReturn( KeyType.STRING );\n  }\n\n  @Test\n  public void testRowConversion() throws Exception {\n\n    RowMetaInterface inputRowMeta = Mockito.mock( RowMetaInterface.class );\n\n    when( inputRowMeta.indexOfValue( Mapping.TupleMapping.KEY.toString() ) ).thenReturn( 0 );\n    when( inputRowMeta.indexOfValue( Mapping.TupleMapping.FAMILY.toString() ) ).thenReturn( 1 );\n    when( inputRowMeta.indexOfValue( Mapping.TupleMapping.COLUMN.toString() ) ).thenReturn( 2 );\n    when( inputRowMeta.indexOfValue( Mapping.TupleMapping.VALUE.toString() ) ).thenReturn( 3 );\n    when( inputRowMeta.indexOfValue( MappingUtils.TUPLE_MAPPING_VISIBILITY ) ).thenReturn( 4 );\n\n    ValueMetaString keyMeta = new ValueMetaString( Mapping.TupleMapping.KEY.toString() );\n    ValueMetaString familyMeta = new ValueMetaString( Mapping.TupleMapping.FAMILY.toString() );\n    ValueMetaString columnMeta = new ValueMetaString( Mapping.TupleMapping.COLUMN.toString() );\n    ValueMetaString valueMeta = new ValueMetaString( Mapping.TupleMapping.VALUE.toString() );\n    ValueMetaString visMeta = new ValueMetaString( MappingUtils.TUPLE_MAPPING_VISIBILITY );\n\n    when( inputRowMeta.getValueMeta( 0 ) ).thenReturn( keyMeta );\n    when( inputRowMeta.getValueMeta( 1 ) ).thenReturn( familyMeta );\n    when( inputRowMeta.getValueMeta( 2 ) ).thenReturn( columnMeta );\n    when( inputRowMeta.getValueMeta( 3 ) ).thenReturn( valueMeta );\n    when( inputRowMeta.getValueMeta( 4 ) ).thenReturn( visMeta );\n\n    Map<String, HBaseValueMetaInterface> columnMap = new HashMap<>();\n\n    HBaseValueMetaInterface hvmi = Mockito.mock( HBaseValueMetaInterface.class );\n\n    columnMap.put( valueMeta.getName(), hvmi );\n\n    HBaseValueMetaInterface hvmiv = Mockito.mock( HBaseValueMetaInterface.class );\n\n    columnMap.put( visMeta.getName(), hvmiv );\n\n    KettleRowToHBaseTuple rowConverter = new KettleRowToHBaseTuple( inputRowMeta, 
tupleMapping, columnMap );\n\n    ByteConversionUtil byteConversionUtil = Mockito.mock( ByteConversionUtil.class );\n\n    String[] row = { \"key\", \"family\", \"@@@binary@@@column\", \"value\", \"public\" };\n\n    HBaseTableWriteOperationManager writeManager = Mockito.mock( HBaseTableWriteOperationManager.class );\n\n    HBasePut put = Mockito.mock( HBasePut.class );\n\n    when( writeManager.createPut( row[ 0 ].getBytes() ) ).thenReturn( put );\n\n    when( byteConversionUtil.encodeKeyValue( row[ 0 ], keyMeta, KeyType.STRING ) ).thenReturn( row[ 0 ].getBytes() );\n\n    rowConverter.createTuplePut( writeManager, byteConversionUtil, row, true );\n\n    verify( put, times( 1 ) ).addColumn( eq( row[ 1 ] ), eq( \"column\" ), eq( true ), any() );\n    verify( put, times( 1 ) ).addColumn( eq( row[ 1 ] ), eq( MappingUtils.TUPLE_MAPPING_VISIBILITY ), eq( false ),\n      any() );\n    verify( put, times( 1 ) ).setWriteToWAL( true );\n\n    try {\n      rowConverter.createTuplePut( null, null, new String[] { null, null, null, null, null }, true );\n      Assert.fail( \"Expected FieldException for missing key\" );\n    } catch ( FieldException fe ) {\n      Assert.assertEquals( TupleMapping.KEY.toString(), fe.getFieldString() );\n    }\n\n    try {\n      rowConverter.createTuplePut( null, null, new String[] { \"key\", null, null, null, null }, true );\n      Assert.fail( \"Expected FieldException for missing family\" );\n    } catch ( FieldException fe ) {\n      Assert.assertEquals( TupleMapping.FAMILY.toString(), fe.getFieldString() );\n    }\n\n    try {\n      rowConverter.createTuplePut( null, null, new String[] { \"key\", \"family\", null, null, null }, true );\n      Assert.fail( \"Expected FieldException for missing column\" );\n    } catch ( FieldException fe ) {\n      Assert.assertEquals( TupleMapping.COLUMN.toString(), fe.getFieldString() );\n    }\n\n    try {\n      rowConverter.createTuplePut( null, null, new String[] { \"key\", \"family\", \"column\", null, null }, true );\n      Assert.fail( \"Expected FieldException for missing value\" );\n    } catch ( FieldException fe ) {\n      Assert.assertEquals( TupleMapping.VALUE.toString(), fe.getFieldString() );\n    }\n\n  }\n\n  @Test\n  public void testMissingValues() {\n\n    
try {\n      RowMetaInterface inputRowMeta = Mockito.mock( RowMetaInterface.class );\n      when( inputRowMeta.indexOfValue( Mapping.TupleMapping.KEY.toString() ) ).thenReturn( -1 );\n      new KettleRowToHBaseTuple( inputRowMeta, tupleMapping, null );\n      Assert.fail();\n    } catch ( KettleException e ) {\n      // expected: key field missing from the incoming row meta\n    }\n\n    try {\n      RowMetaInterface inputRowMeta = Mockito.mock( RowMetaInterface.class );\n      when( inputRowMeta.indexOfValue( Mapping.TupleMapping.KEY.toString() ) ).thenReturn( 0 );\n      when( inputRowMeta.indexOfValue( Mapping.TupleMapping.FAMILY.toString() ) ).thenReturn( -1 );\n      new KettleRowToHBaseTuple( inputRowMeta, tupleMapping, null );\n      Assert.fail();\n    } catch ( KettleException e ) {\n      // expected: family field missing from the incoming row meta\n    }\n\n    try {\n      RowMetaInterface inputRowMeta = Mockito.mock( RowMetaInterface.class );\n      when( inputRowMeta.indexOfValue( Mapping.TupleMapping.KEY.toString() ) ).thenReturn( 0 );\n      when( inputRowMeta.indexOfValue( Mapping.TupleMapping.FAMILY.toString() ) ).thenReturn( 1 );\n      when( inputRowMeta.indexOfValue( Mapping.TupleMapping.COLUMN.toString() ) ).thenReturn( -1 );\n      new KettleRowToHBaseTuple( inputRowMeta, tupleMapping, null );\n      Assert.fail();\n    } catch ( KettleException e ) {\n      // expected: column field missing from the incoming row meta\n    }\n\n    try {\n      RowMetaInterface inputRowMeta = Mockito.mock( RowMetaInterface.class );\n      when( inputRowMeta.indexOfValue( Mapping.TupleMapping.KEY.toString() ) ).thenReturn( 0 );\n      when( inputRowMeta.indexOfValue( Mapping.TupleMapping.FAMILY.toString() ) ).thenReturn( 1 );\n      when( inputRowMeta.indexOfValue( Mapping.TupleMapping.COLUMN.toString() ) ).thenReturn( 2 );\n      when( inputRowMeta.indexOfValue( Mapping.TupleMapping.VALUE.toString() ) ).thenReturn( -1 );\n      new KettleRowToHBaseTuple( inputRowMeta, tupleMapping, null );\n      Assert.fail();\n    } catch ( KettleException e ) {\n      // expected: value field missing from the incoming row meta\n    }\n\n  }\n\n  @Test\n  public void testException() {\n\n    FieldException fieldException = new 
FieldException( TupleMapping.KEY );\n    Assert.assertEquals( TupleMapping.KEY.toString(), fieldException.getFieldString() );\n\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/rowdecoder/HBaseRowDecoderMetaInjectionTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.rowdecoder;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.Mockito;\nimport org.pentaho.di.core.osgi.api.MetastoreLocatorOsgi;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.di.core.injection.BaseMetadataInjectionTest;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\npublic class HBaseRowDecoderMetaInjectionTest extends BaseMetadataInjectionTest<HBaseRowDecoderMeta> {\n\n  @Before\n  public void setup() {\n    NamedClusterService namedClusterService = Mockito.mock( NamedClusterService.class );\n    NamedClusterServiceLocator namedClusterServiceLocator = Mockito.mock( NamedClusterServiceLocator.class );\n    RuntimeTestActionService runtimeTestActionService = Mockito.mock( RuntimeTestActionService.class );\n    RuntimeTester runtimeTester = Mockito.mock( RuntimeTester.class );\n    MetastoreLocator metaStore = Mockito.mock( MetastoreLocator.class );\n\n    setup( new HBaseRowDecoderMeta( namedClusterServiceLocator, namedClusterService, runtimeTestActionService,\n        runtimeTester, metaStore ) );\n  }\n\n  @Test\n  public void test() throws Exception {\n    check( \"KEY_FIELD\", new StringGetter() {\n      public String get() {\n        return meta.getIncomingKeyField();\n      }\n    } );\n    check( \"HBASE_RESULT_FIELD\", new StringGetter() 
{\n      public String get() {\n        return meta.getIncomingResultField();\n      }\n    } );\n\n    check( \"TABLE_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getTableName();\n      }\n    } );\n    check( \"MAPPING_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingName();\n      }\n    } );\n\n    check( \"MAPPING_ALIAS\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getAlias();\n      }\n    } );\n    check( \"MAPPING_KEY\", new BooleanGetter() {\n      public boolean get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).isKey();\n      }\n    } );\n    check( \"MAPPING_COLUMN_FAMILY\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getColumnFamily();\n      }\n    } );\n    check( \"MAPPING_COLUMN_NAME\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getColumnName();\n      }\n    } );\n    check( \"MAPPING_TYPE\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getType();\n      }\n    } );\n    check( \"MAPPING_INDEXED_VALUES\", new StringGetter() {\n      public String get() {\n        return meta.getMappingDefinition().getMappingColumns().get( 0 ).getIndexedValues();\n      }\n    } );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/rowdecoder/HBaseRowDecoderMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.rowdecoder;\n\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.row.RowMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\nimport java.util.HashMap;\nimport java.util.Map;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\n/**\n * @author Tatsiana_Kasiankova\n *\n */\npublic class HBaseRowDecoderMetaTest {\n\n  private static final String MAPPING_NAME = \"MappingName\";\n\n  private static final String TABLE_NAME = \"TableName\";\n\n  private static final String COLUMN_NAME = \"ColumnName\";\n  private static final String FAMILY_NAME = \"fm\";\n  private static final String ALIAS = \"alias\";\n  private static final String MAPPING_KEY_NAME = \"mappingKeyName\";\n  private static final String ORIGIN = \"HBase Row Decoder\";\n  private HBaseRowDecoderMeta hbRowDecoderMeta;\n  private RowMeta rowMeta;\n\n  @Before\n  public void setup() {\n    hbRowDecoderMeta =\n      new 
HBaseRowDecoderMeta( mock( NamedClusterServiceLocator.class ), mock( NamedClusterService.class ), mock(\n        RuntimeTestActionService.class ), mock( RuntimeTester.class ), mock( MetastoreLocator.class ) );\n    rowMeta = new RowMeta();\n  }\n\n  @After\n  public void tearDown() {\n    rowMeta.clear();\n  }\n\n  @Test\n  public void testRowMetaIsFilled_WhenMappingHasTableNameAndMappingName() throws Exception {\n    // Mapping from HBase: having both table name and mapping name\n    hbRowDecoderMeta.setMapping( getMapping( TABLE_NAME, MAPPING_NAME ) );\n\n    hbRowDecoderMeta.getFields( DefaultBowl.getInstance(), rowMeta, ORIGIN, null, null, null );\n\n    assertRowMetaIsFilledWithFields();\n  }\n\n  @Test\n  public void testRowMetaIsFilled_WhenMappingHasNoMappingName() throws Exception {\n    // \"local\" Mapping: no mapping name\n    hbRowDecoderMeta.setMapping( getMapping( null, null ) );\n\n    hbRowDecoderMeta.getFields( DefaultBowl.getInstance(), rowMeta, ORIGIN, null, null, null );\n\n    assertRowMetaIsFilledWithFields();\n  }\n\n  private void assertRowMetaIsFilledWithFields() {\n    assertEquals( 2, rowMeta.getValueMetaList().size() );\n    ValueMetaInterface vmi = rowMeta.getValueMeta( 0 );\n    assertEquals( MAPPING_KEY_NAME, vmi.getName() );\n    vmi = rowMeta.getValueMeta( 1 );\n    assertEquals( ALIAS, vmi.getName() );\n  }\n\n  private Mapping getMapping( String tableName, String mappingName ) throws Exception {\n    Mapping mapping = mock( Mapping.class );\n    when( mapping.getKeyName() ).thenReturn( MAPPING_KEY_NAME );\n    Map<String, HBaseValueMetaInterface> map = new HashMap<>();\n    HBaseValueMetaInterface value = mock( HBaseValueMetaInterface.class );\n    when( value.getName() ).thenReturn( ALIAS );\n    map.put( ALIAS, value );\n    when( mapping.getMappedColumns() ).thenReturn( map );\n    return mapping;\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hbase/core/src/test/resources/StubMapping.xml",
    "content": "<step>\n    <name>Test</name>\n    <type>HBaseInput</type>\n    <description/>\n    <distribute>N</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n        <method>none</method>\n        <schema_name/>\n    </partitioning>\n\n    <cluster_name>Local Sandbox</cluster_name>\n\n    <zookeeper_hosts>sandbox-hdp.hortonworks.com</zookeeper_hosts>\n\n    <zookeeper_port>2181</zookeeper_port>\n\n    <source_table_name>iemployee</source_table_name>\n\n    <output_fields>\n        <field>\n            <table_name>iemployee</table_name>\n\n            <mapping_name>simple input map</mapping_name>\n\n            <alias>Rowkey</alias>\n\n            <family/>\n\n            <column>Rowkey</column>\n\n            <key>Y</key>\n\n            <type>Integer</type>\n\n            <format/>\n\n        </field>\n        <field>\n            <table_name>iemployee</table_name>\n\n            <mapping_name>simple input map</mapping_name>\n\n            <alias>fname</alias>\n\n            <family>personal</family>\n\n            <column>fname</column>\n\n            <key>N</key>\n\n            <type>Float</type>\n\n            <format/>\n\n        </field>\n        <field>\n            <table_name>iemployee</table_name>\n\n            <mapping_name>simple input map</mapping_name>\n\n            <alias>lname</alias>\n\n            <family>personal</family>\n\n            <column>lname</column>\n\n            <key>N</key>\n\n            <type>Double</type>\n\n            <format/>\n\n        </field>\n        <field>\n            <table_name>iemployee</table_name>\n\n            <mapping_name>simple input map</mapping_name>\n\n            <alias>salary</alias>\n\n            <family>payroll</family>\n\n            <column>salary</column>\n\n            <key>N</key>\n\n            <type>Float</type>\n\n            <format/>\n\n        </field>\n    </output_fields>\n    <match_any_filter>N</match_any_filter>\n\n    <mapping>\n        
<mapping_name>simple input map</mapping_name>\n\n        <table_name>iemployee</table_name>\n\n        <key>Rowkey</key>\n\n        <key_type>Integer</key_type>\n\n        <mapped_columns>\n            <mapped_column>\n                <alias>fname</alias>\n\n                <column_family>personal</column_family>\n\n                <column_name>fname</column_name>\n\n                <type>Integer</type>\n\n            </mapped_column>\n            <mapped_column>\n                <alias>lname</alias>\n\n                <column_family>personal</column_family>\n\n                <column_name>lname</column_name>\n\n                <type>Long</type>\n\n            </mapped_column>\n            <mapped_column>\n                <alias>salary</alias>\n\n                <column_family>payroll</column_family>\n\n                <column_name>salary</column_name>\n\n                <type>Float</type>\n\n            </mapped_column>\n        </mapped_columns>\n    </mapping><attributes></attributes>\n    <cluster_schema/>\n    <remotesteps>\n        <input>\n        </input>\n        <output>\n        </output>\n    </remotesteps>\n    <GUI>\n        <xloc>160</xloc>\n        <yloc>144</yloc>\n        <draw>Y</draw>\n    </GUI>\n</step>"
  },
  {
    "path": "kettle-plugins/hbase/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pentaho-big-data-kettle-plugins-hbase</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n\n  <modules>\n    <module>assemblies</module>\n    <module>core</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/hbase-meta/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n    <modelVersion>4.0.0</modelVersion>\n    <properties>\n        <hbase.version>1.4.8</hbase.version>\n    </properties>\n    <parent>\n        <groupId>pentaho</groupId>\n        <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n        <version>11.1.0.0-SNAPSHOT</version>\n    </parent>\n    <artifactId>pentaho-big-data-kettle-plugins-hbase-meta</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n    <packaging>jar</packaging>\n    <dependencies>\n        <dependency>\n            <groupId>org.apache.hbase</groupId>\n            <artifactId>hbase-client</artifactId>\n            <version>${hbase.version}</version>\n            <scope>provided</scope>\n            <exclusions>\n                <exclusion>\n                    <groupId>com.google.protobuf</groupId>\n                    <artifactId>protobuf-java</artifactId>\n                </exclusion>\n            </exclusions>\n        </dependency>\n        <dependency>\n            <groupId>org.apache.hadoop.thirdparty</groupId>\n            <artifactId>hadoop-shaded-protobuf_3_25</artifactId>\n            <version>${hadoop-shaded-protobuf_3_25.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>org.pentaho</groupId>\n            <artifactId>shim-api</artifactId>\n            <version>${pentaho-hadoop-shims.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>pentaho-kettle</groupId>\n            <artifactId>kettle-core</artifactId>\n            <version>${pdi.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>pentaho-kettle</groupId>\n            <artifactId>kettle-engine</artifactId>\n            <version>${pdi.version}</version>\n       
 </dependency>\n        <dependency>\n            <groupId>org.mockito</groupId>\n            <artifactId>mockito-all</artifactId>\n            <version>1.10.19</version>\n            <scope>test</scope>\n        </dependency>\n    </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/hbase-meta/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/meta/AELHBaseMappingImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.meta;\n\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.hadoop.shim.api.hbase.mapping.Mapping;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\nimport org.w3c.dom.Node;\n\nimport java.io.Serializable;\nimport java.util.HashMap;\nimport java.util.Map;\n\npublic class AELHBaseMappingImpl implements Mapping, Serializable {\n  private static final long serialVersionUID = 1L;\n\n  private String tableName;\n  private String mappingName;\n  private String keyName;\n  private KeyType keyType;\n  private String keyTypeAsString;\n  private int numMappedColumns;\n  private Map<String, HBaseValueMetaInterface> mappedColumns;\n\n  public AELHBaseMappingImpl() {\n  }\n\n  @Override\n  public String addMappedColumn( HBaseValueMetaInterface hBaseValueMetaInterface, boolean b ) throws Exception {\n    if ( mappedColumns == null ) {\n      mappedColumns = new HashMap<>();\n    }\n\n    mappedColumns.put( hBaseValueMetaInterface.getAlias(), hBaseValueMetaInterface );\n    this.numMappedColumns++;\n\n    return hBaseValueMetaInterface.getAlias();\n  }\n\n  @Override\n  public String getTableName() {\n    return tableName;\n  }\n\n  @Override\n  public void setTableName( String tableName ) {\n    this.tableName = tableName;\n  }\n\n  @Override\n  
public String getMappingName() {\n    return mappingName;\n  }\n\n  @Override\n  public void setMappingName( String mappingName ) {\n    this.mappingName = mappingName;\n  }\n\n  @Override\n  public String getKeyName() {\n    return keyName;\n  }\n\n  @Override\n  public void setKeyName( String keyName ) {\n    this.keyName = keyName;\n  }\n\n  @Override\n  public KeyType getKeyType() {\n    return keyType;\n  }\n\n  @Override\n  public void setKeyType( KeyType keyType ) {\n    this.keyType = keyType;\n  }\n\n  @Override\n  public Map<String, HBaseValueMetaInterface> getMappedColumns() {\n    return mappedColumns;\n  }\n\n  @Override\n  public void setMappedColumns( Map<String, HBaseValueMetaInterface> mappedColumns ) {\n    this.mappedColumns = mappedColumns;\n  }\n\n  @Override\n  public void setKeyTypeAsString( String s ) throws Exception {\n    this.keyTypeAsString = s;\n  }\n\n  @Override\n  public boolean isTupleMapping() {\n    return false;\n  }\n\n  @Override\n  public void setTupleMapping( boolean b ) {\n\n  }\n\n  @Override\n  public String getTupleFamilies() {\n    return null;\n  }\n\n  @Override\n  public String[] getTupleFamiliesSplit() {\n    return new String[0];\n  }\n\n  @Override\n  public void setTupleFamilies( String s ) {\n\n  }\n\n  @Override\n  public int numMappedColumns() {\n    return this.numMappedColumns;\n  }\n\n  @Override\n  public void saveRep( Repository repository, ObjectId objectId, ObjectId objectId1 ) throws KettleException {\n    //noop on AEL\n  }\n\n  @Override\n  public String getXML() {\n    if ( Const.isEmpty( getKeyName() ) ) {\n      return \"\"; // nothing defined\n    }\n\n    String retString = \"\";\n\n    retString += XMLHandler.openTag( \"mapping\" );\n    retString += XMLHandler.addTagValue( \"mapping_name\", getMappingName() );\n    retString += XMLHandler.addTagValue( \"table_name\", getTableName() );\n    retString += XMLHandler.addTagValue( \"key\", getKeyName() );\n    retString += XMLHandler.addTagValue( 
\"key_type\", getKeyType().toString() );\n    if ( mappedColumns.size() > 0 ) {\n      retString += XMLHandler.openTag( \"mapped_columns\" );\n\n      for ( String alias : mappedColumns.keySet() ) {\n        HBaseValueMetaInterface vm = mappedColumns.get( alias );\n\n        retString += XMLHandler.openTag( \"mapped_column\" );\n        retString += XMLHandler.addTagValue( \"alias\", alias );\n        retString += XMLHandler.addTagValue( \"column_family\", vm.getColumnFamily() );\n        retString += XMLHandler.addTagValue( \"column_name\", vm.getColumnName() );\n        retString += XMLHandler.addTagValue( \"type\", vm.getHBaseTypeDesc() );\n        retString += XMLHandler.closeTag( \"mapped_column\" );\n      }\n\n      retString += XMLHandler.closeTag( \"mapped_columns\" );\n    }\n\n    retString += XMLHandler.closeTag( \"mapping\" );\n\n    return retString;\n  }\n\n  @Override\n  public boolean loadXML( Node node ) throws KettleXMLException {\n    node = XMLHandler.getSubNode( node, \"mapping\" );\n\n    if ( node == null\n        || Const.isEmpty( XMLHandler.getTagValue( node, \"key\" ) ) ) {\n      return false; // no mapping info in XML\n    }\n\n    setMappingName( XMLHandler.getTagValue( node, \"mapping_name\" ) );\n    setTableName( XMLHandler.getTagValue( node, \"table_name\" ) );\n\n    String keyName = XMLHandler.getTagValue( node, \"key\" );\n    if ( keyName.indexOf( ',' ) > 0 ) {\n      setTupleMapping( true );\n      setKeyName( keyName.substring( 0, keyName.indexOf( ',' ) ) );\n      if ( keyName.indexOf( ',' ) != keyName.length() - 1 ) {\n        // specific families have been supplied\n        String familiesList = keyName.substring( keyName.indexOf( ',' ) + 1,\n            keyName.length() );\n        if ( !Const.isEmpty( familiesList.trim() ) ) {\n          setTupleFamilies( familiesList );\n        }\n      }\n    } else {\n      setKeyName( keyName );\n    }\n\n    String keyTypeS = XMLHandler.getTagValue( node, \"key_type\" );\n    for ( 
KeyType k : KeyType.values() ) {\n      if ( k.toString().equalsIgnoreCase( keyTypeS ) ) {\n        setKeyType( k );\n        break;\n      }\n    }\n\n    Node fields = XMLHandler.getSubNode( node, \"mapped_columns\" );\n    if ( fields != null && XMLHandler.countNodes( fields, \"mapped_column\" ) > 0 ) {\n      int nrfields = XMLHandler.countNodes( fields, \"mapped_column\" );\n\n      for ( int i = 0; i < nrfields; i++ ) {\n        Node fieldNode = XMLHandler.getSubNodeByNr( fields, \"mapped_column\", i );\n        String alias = XMLHandler.getTagValue( fieldNode, \"alias\" );\n        String colFam = XMLHandler.getTagValue( fieldNode, \"column_family\" );\n        if ( colFam == null ) {\n          colFam = \"\";\n        }\n        String colName = XMLHandler.getTagValue( fieldNode, \"column_name\" );\n        if ( colName == null ) {\n          colName = \"\";\n        }\n        String type = XMLHandler.getTagValue( fieldNode, \"type\" );\n\n        AELHBaseValueMetaImpl vm = new AELHBaseValueMetaImpl( false, alias, colName, colFam, getMappingName(), getTableName() );\n        vm.setHBaseTypeFromString( type );\n\n        try {\n          addMappedColumn( vm, isTupleMapping() );\n        } catch ( Exception ex ) {\n          throw new KettleXMLException( ex );\n        }\n      }\n    }\n\n    return true;\n  }\n\n  @Override\n  public boolean readRep( Repository repository, ObjectId objectId ) throws KettleException {\n    return false;\n  }\n\n  @Override\n  public String getFriendlyName() {\n    return null;\n  }\n\n  @Override\n  public Object decodeKeyValue( byte[] bytes ) throws KettleException {\n    return null;\n  }\n}\n\n"
  },
  {
    "path": "kettle-plugins/hbase-meta/src/main/java/org/pentaho/big/data/kettle/plugins/hbase/meta/AELHBaseValueMetaImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.meta;\n\nimport org.apache.hadoop.hbase.util.Bytes;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.ValueMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaBase;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.hadoop.shim.api.hbase.meta.HBaseValueMetaInterface;\n\nimport java.math.BigDecimal;\nimport java.util.Date;\n\npublic class AELHBaseValueMetaImpl extends ValueMetaBase implements HBaseValueMetaInterface {\n  private boolean isKey;\n  private String alias;\n  private String columnName;\n  private String columnFamily;\n  private String mappingName;\n  private String tableName;\n  private boolean isLongOrDouble = true;\n\n  public AELHBaseValueMetaImpl( boolean isKey, String alias, String columnName, String columnFamily, String mappingName, String tableName ) {\n    super( alias );\n    this.isKey = isKey;\n    this.alias = alias;\n    this.columnName = columnName;\n    this.columnFamily = columnFamily;\n    this.mappingName = mappingName;\n    this.tableName = tableName;\n  }\n\n  @Override\n  public boolean isKey() {\n    return isKey;\n  }\n\n  @Override\n  public void setKey( boolean key ) {\n    isKey = key;\n  }\n\n  @Override\n  public String getAlias() {\n    return getName();\n  }\n\n  @Override\n  public void setAlias( String alias ) {\n    this.alias = 
alias;\n    setName( alias );\n  }\n\n  @Override\n  public String getColumnName() {\n    return columnName;\n  }\n\n  @Override\n  public void setColumnName( String columnName ) {\n    this.columnName = columnName;\n  }\n\n  @Override\n  public String getColumnFamily() {\n    return columnFamily;\n  }\n\n  @Override\n  public void setColumnFamily( String columnFamily ) {\n    this.columnFamily = columnFamily;\n  }\n\n  @Override\n  public void setHBaseTypeFromString( String hbaseType ) throws IllegalArgumentException {\n    if ( hbaseType.equalsIgnoreCase( \"Integer\" ) ) {\n      setType( ValueMeta.getType( hbaseType ) );\n      setIsLongOrDouble( false );\n      return;\n    }\n    if ( hbaseType.equalsIgnoreCase( \"Long\" ) ) {\n      setType( ValueMeta.getType( \"Integer\" ) );\n      setIsLongOrDouble( true );\n      return;\n    }\n    if ( hbaseType.equals( \"Float\" ) ) {\n      setType( ValueMeta.getType( \"Number\" ) );\n      setIsLongOrDouble( false );\n      return;\n    }\n    if ( hbaseType.equals( \"Double\" ) ) {\n      setType( ValueMeta.getType( \"Number\" ) );\n      setIsLongOrDouble( true );\n      return;\n    }\n\n    // default\n    int type = ValueMeta.getType( hbaseType );\n    if ( type == ValueMetaInterface.TYPE_NONE ) {\n      throw new IllegalArgumentException( BaseMessages.getString( PKG,\n          \"HBaseValueMeta.Error.UnknownType\", hbaseType ) );\n    }\n\n    setType( type );\n  }\n\n  @Override\n  public String getHBaseTypeDesc() {\n    if ( isInteger() ) {\n      return ( getIsLongOrDouble() ? \"Long\" : \"Integer\" );\n    }\n    if ( isNumber() ) {\n      return ( getIsLongOrDouble() ? 
\"Double\" : \"Float\" );\n    }\n\n    return ValueMeta.getTypeDesc( getType() );\n  }\n\n  @Override\n  public Object decodeColumnValue( byte[] rawColValue ) throws KettleException {\n    // just return null if this column doesn't have a value for the row\n    if ( rawColValue == null ) {\n      return null;\n    }\n\n    if ( isString() ) {\n      String convertedString = Bytes.toString( rawColValue );\n      if ( getStorageType() == ValueMetaInterface.STORAGE_TYPE_INDEXED ) {\n        // need to return the integer index of this value\n        Object[] legalVals = getIndex();\n        int foundIndex = -1;\n        for ( int i = 0; i < legalVals.length; i++ ) {\n          if ( legalVals[ i ].toString().trim().equals( convertedString.trim() ) ) {\n            foundIndex = i;\n            break;\n          }\n        }\n        if ( foundIndex >= 0 ) {\n          return new Integer( foundIndex );\n        }\n        throw new KettleException( BaseMessages.getString( PKG,\n            \"HBaseValueMeta.Error.IllegalIndexedColumnValue\", convertedString,\n            getAlias() ) );\n      } else {\n        return convertedString;\n      }\n    }\n\n    if ( isNumber() ) {\n      if ( rawColValue.length == Bytes.SIZEOF_FLOAT ) {\n        float floatResult = Bytes.toFloat( rawColValue );\n        return new Double( floatResult );\n      }\n\n      if ( rawColValue.length == Bytes.SIZEOF_DOUBLE ) {\n        return new Double( Bytes.toDouble( rawColValue ) );\n      }\n    }\n\n    if ( isInteger() ) {\n      if ( rawColValue.length == Bytes.SIZEOF_INT ) {\n        int intResult = Bytes.toInt( rawColValue );\n        return new Long( intResult );\n      }\n\n      if ( rawColValue.length == Bytes.SIZEOF_LONG ) {\n        return new Long( Bytes.toLong( rawColValue ) );\n      }\n      if ( rawColValue.length == Bytes.SIZEOF_SHORT ) {\n        // be lenient on reading from HBase - accept and convert shorts\n        // even though our mapping defines only longs and 
integers\n        // TODO add short to the types that can be mapped?\n        short tempShort = Bytes.toShort( rawColValue );\n        return new Long( tempShort );\n      }\n\n      throw new KettleException( BaseMessages.getString( PKG,\n          \"HBaseValueMeta.Error.IllegalIntegerLength\" ) );\n\n    }\n\n    if ( isBigNumber() ) {\n      String temp = Bytes.toString( rawColValue );\n\n      BigDecimal result = new BigDecimal( temp );\n\n      if ( result == null ) {\n        throw new KettleException( BaseMessages.getString( PKG,\n            \"HBaseValueMeta.Error.UnableToDecodeBigDecimal\" ) );\n      }\n\n      return result;\n    }\n\n    if ( isBinary() ) {\n      // just return the raw array of bytes\n      return rawColValue;\n    }\n\n    if ( isBoolean() ) {\n      // try as a string first\n      Boolean result = decodeBoolFromString( rawColValue );\n      if ( result == null ) {\n        // try as a number\n        result = decodeBoolFromNumber( rawColValue );\n      }\n\n      if ( result != null ) {\n        return result;\n      }\n\n      throw new KettleException( BaseMessages.getString( PKG,\n          \"HBaseValueMeta.Error.UnableToDecodeBoolean\" ) );\n    }\n\n    if ( isDate() ) {\n      if ( rawColValue.length != Bytes.SIZEOF_LONG ) {\n        throw new KettleException( BaseMessages.getString( PKG,\n            \"HBaseValueMeta.Error.DateValueLengthNotEqualToLong\" ) );\n      }\n      long millis = Bytes.toLong( rawColValue );\n      Date d = new Date( millis );\n      return d;\n    }\n\n    throw new KettleException( BaseMessages.getString( PKG,\n        \"HBaseValueMeta.Error.UnknownTypeForColumn\" ) );\n  }\n\n  @Override\n  public byte[] encodeColumnValue( Object columnValue, ValueMetaInterface colMeta ) throws KettleException {\n    if ( columnValue == null ) {\n      return null;\n    }\n\n    byte[] encoded = null;\n\n    /**\n     * BACKLOG-26151 -\n     * When doing type conversions, the type of this HBase value\n     * is 
given by outputType, the colMeta then converts based on\n     * the type of the incoming value\n     */\n    int outputType = this.getType();\n\n    switch ( outputType ) {\n      case TYPE_STRING:\n        String toEncode = colMeta.getString( columnValue );\n        encoded = Bytes.toBytes( toEncode );\n        break;\n      case TYPE_INTEGER:\n        Long l = colMeta.getInteger( columnValue );\n        if ( getIsLongOrDouble() ) {\n          encoded = Bytes.toBytes( l.longValue() );\n        } else {\n          encoded = Bytes.toBytes( l.intValue() );\n        }\n        break;\n      case TYPE_NUMBER:\n        Double d = colMeta.getNumber( columnValue );\n        if ( getIsLongOrDouble() ) {\n          encoded = Bytes.toBytes( d.doubleValue() );\n        } else {\n          encoded = Bytes.toBytes( d.floatValue() );\n        }\n        break;\n      case TYPE_DATE:\n        Date date = colMeta.getDate( columnValue );\n        encoded = Bytes.toBytes( date.getTime() );\n        break;\n      case TYPE_BOOLEAN:\n        Boolean b = colMeta.getBoolean( columnValue );\n        String boolString = ( b.booleanValue() ) ? 
\"Y\" : \"N\";\n        encoded = Bytes.toBytes( boolString );\n        break;\n      case TYPE_BIGNUMBER:\n        BigDecimal bd = colMeta.getBigNumber( columnValue );\n        String bds = bd.toString();\n        encoded = Bytes.toBytes( bds );\n        break;\n      case TYPE_BINARY:\n        encoded = colMeta.getBinary( columnValue );\n        break;\n    }\n    return encoded;\n  }\n\n  @Override\n  public String getMappingName() {\n    return mappingName;\n  }\n\n  @Override\n  public void setMappingName( String mappingName ) {\n    this.mappingName = mappingName;\n  }\n\n  @Override\n  public String getTableName() {\n    return tableName;\n  }\n\n  @Override\n  public void setTableName( String tableName ) {\n    this.tableName = tableName;\n  }\n\n  @Override\n  public boolean getIsLongOrDouble() {\n    return isLongOrDouble;\n  }\n\n  @Override\n  public void setIsLongOrDouble( boolean ld ) {\n    this.isLongOrDouble = ld;\n  }\n\n  @Override\n  public void getXml( StringBuilder retval ) {\n    String format = getConversionMask();\n    retval.append( \"\\n        \" ).append( XMLHandler.openTag( \"field\" ) );\n    retval.append( \"\\n            \" ).append( XMLHandler.addTagValue( \"table_name\", getTableName() ) );\n    retval.append( \"\\n            \" ).append( XMLHandler.addTagValue( \"mapping_name\", getMappingName() ) );\n    retval.append( \"\\n            \" ).append( XMLHandler.addTagValue( \"alias\", getAlias() ) );\n    retval.append( \"\\n            \" ).append( XMLHandler.addTagValue( \"family\", getColumnFamily() ) );\n    retval.append( \"\\n            \" ).append( XMLHandler.addTagValue( \"column\", getColumnName() ) );\n    retval.append( \"\\n            \" ).append( XMLHandler.addTagValue( \"key\", isKey() ) );\n    retval.append( \"\\n            \" ).append( XMLHandler.addTagValue( \"type\", ValueMetaBase.getTypeDesc( getType() ) ) );\n    retval.append( \"\\n            \" ).append( XMLHandler.addTagValue( \"format\", format ) 
);\n    retval.append( \"\\n        \" ).append( XMLHandler.closeTag( \"field\" ) );\n  }\n\n  @Override\n  public void saveRep( Repository rep, ObjectId id_transformation, ObjectId id_step, int count ) throws KettleException {\n    //noop in AEL\n  }\n\n  /**\n   * Decodes a boolean value from an array of bytes that is assumed to hold a string.]\n   * Lifted from Shim to support AEL conversions\n   *\n   * @param rawEncoded an array of bytes holding the string representation of a boolean value\n   * @return a Boolean object or null if it can't be decoded from the supplied array of bytes.\n   */\n  public static Boolean decodeBoolFromString( byte[] rawEncoded ) {\n\n    String tempString = Bytes.toString( rawEncoded );\n    if ( tempString.equalsIgnoreCase( \"Y\" ) || tempString.equalsIgnoreCase( \"N\" )\n        || tempString.equalsIgnoreCase( \"YES\" )\n        || tempString.equalsIgnoreCase( \"NO\" )\n        || tempString.equalsIgnoreCase( \"TRUE\" )\n        || tempString.equalsIgnoreCase( \"FALSE\" )\n        || tempString.equalsIgnoreCase( \"T\" ) || tempString.equalsIgnoreCase( \"F\" )\n        || tempString.equalsIgnoreCase( \"1\" ) || tempString.equalsIgnoreCase( \"0\" ) ) {\n\n      return Boolean.valueOf( tempString.equalsIgnoreCase( \"Y\" )\n          || tempString.equalsIgnoreCase( \"YES\" )\n          || tempString.equalsIgnoreCase( \"TRUE\" )\n          || tempString.equalsIgnoreCase( \"T\" )\n          || tempString.equalsIgnoreCase( \"1\" ) );\n    }\n    return null;\n  }\n\n  /**\n   * Decodes a boolean value from an array of bytes that is assumed to hold a number.\n   * Lifted from Shim to support AEL conversions\n   *\n   * @param rawEncoded an array of bytes holding the numerical representation of a boolean value\n   * @return a Boolean object or null if it can't be decoded from the supplied array of bytes.\n   */\n  public static Boolean decodeBoolFromNumber( byte[] rawEncoded ) {\n\n    if ( rawEncoded == null ) {\n      return null;\n    
}\n\n    if ( rawEncoded.length == Bytes.SIZEOF_BYTE ) {\n      byte val = rawEncoded[ 0 ];\n      if ( val == 0 || val == 1 ) {\n        return new Boolean( val == 1 );\n      }\n    }\n\n    if ( rawEncoded.length == Bytes.SIZEOF_SHORT ) {\n      short tempShort = Bytes.toShort( rawEncoded );\n\n      if ( tempShort == 0 || tempShort == 1 ) {\n        return new Boolean( tempShort == 1 );\n      }\n    }\n\n    if ( rawEncoded.length == Bytes.SIZEOF_INT\n        || rawEncoded.length == Bytes.SIZEOF_FLOAT ) {\n      int tempInt = Bytes.toInt( rawEncoded );\n      if ( tempInt == 1 || tempInt == 0 ) {\n        return new Boolean( tempInt == 1 );\n      }\n\n      float tempFloat = Bytes.toFloat( rawEncoded );\n      if ( tempFloat == 0.0f || tempFloat == 1.0f ) {\n        return new Boolean( tempFloat == 1.0f );\n      }\n    }\n\n    if ( rawEncoded.length == Bytes.SIZEOF_LONG\n        || rawEncoded.length == Bytes.SIZEOF_DOUBLE ) {\n      long tempLong = Bytes.toLong( rawEncoded );\n      if ( tempLong == 0L || tempLong == 1L ) {\n        return new Boolean( tempLong == 1L );\n      }\n\n      double tempDouble = Bytes.toDouble( rawEncoded );\n      if ( tempDouble == 0.0 || tempDouble == 1.0 ) {\n        return new Boolean( tempDouble == 1.0 );\n      }\n    }\n\n    // not identifiable from a number\n    return null;\n  }\n}\n\n"
  },
  {
    "path": "kettle-plugins/hbase-meta/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/meta/AELHBaseMappingTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.meta;\n\nimport org.apache.commons.io.IOUtils;\nimport org.junit.Assert;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.runners.MockitoJUnitRunner;\nimport org.w3c.dom.Document;\nimport org.w3c.dom.Node;\nimport org.xml.sax.InputSource;\nimport org.xml.sax.SAXException;\n\nimport javax.xml.parsers.DocumentBuilder;\nimport javax.xml.parsers.DocumentBuilderFactory;\nimport javax.xml.parsers.ParserConfigurationException;\nimport java.io.IOException;\nimport java.io.StringReader;\n\nimport static org.junit.Assert.fail;\n\n@RunWith( MockitoJUnitRunner.class )\npublic class AELHBaseMappingTest {\n  private AELHBaseMappingImpl stubMapping;\n\n  @Before\n  public void setup() throws Exception {\n    stubMapping = new AELHBaseMappingImpl();\n\n    Node mappingNode = null;\n    try {\n      mappingNode = getMappingNode();\n    } catch( Exception ex ) {\n      fail();\n    }\n\n    stubMapping.loadXML( mappingNode );\n  }\n\n  @Test\n  public void inflateFromXmlTest() {\n    Assert.assertEquals( stubMapping.getTableName(), \"iemployee\" );\n    Assert.assertEquals( stubMapping.getMappingName(), \"simple input map\" );\n    Assert.assertEquals( stubMapping.getMappedColumns().size(), 3 );\n  }\n\n  @Test\n  public void serializeToXmlTest() throws IOException {\n    String serializedStub = stubMapping.getXML();\n\n    Assert.assertTrue( serializedStub.contains( \"iemployee\" ) );\n    Assert.assertTrue( serializedStub.contains( \"simple input map\" ) );\n  }\n\n 
 private Node getMappingNode() throws IOException, ParserConfigurationException, SAXException {\n    String xml = IOUtils.toString( getClass().getClassLoader().getResourceAsStream( \"StubMapping.xml\" ) );\n\n    DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();\n    DocumentBuilder builder = factory.newDocumentBuilder();\n\n    Document doc = builder.parse( new InputSource( new StringReader( xml ) ) );\n\n    return doc.getDocumentElement();\n  }\n}\n\n"
  },
  {
    "path": "kettle-plugins/hbase-meta/src/test/java/org/pentaho/big/data/kettle/plugins/hbase/meta/AELHBaseValueMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hbase.meta;\n\nimport org.apache.hadoop.hbase.util.Bytes;\nimport org.junit.Assert;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.runners.MockitoJUnitRunner;\nimport org.pentaho.di.core.exception.KettleException;\n\nimport java.math.BigDecimal;\nimport java.util.Date;\n\n@RunWith( MockitoJUnitRunner.class )\npublic class AELHBaseValueMetaTest {\n  private AELHBaseValueMetaImpl stubValueMeta;\n\n  @Before\n  public void setup() throws Exception {\n    stubValueMeta = new AELHBaseValueMetaImpl( true, \"testAlias\",\n        \"testColumnName\", \"testColumnFamily\", \"testMappingName\",\n        \"testTableName\" );\n\n    stubValueMeta.setMappingName( \"testMappingName\" );\n    stubValueMeta.setTableName( \"testTableName\" );\n\n    stubValueMeta.setType( 5 );\n    stubValueMeta.setIsLongOrDouble( false );\n  }\n\n  @Test\n  public void getXmlSerializationTest() {\n    StringBuilder sb = new StringBuilder(  );\n\n    stubValueMeta.getXml( sb );\n\n    Assert.assertTrue( sb.toString().contains( \"Y\" ) );\n    Assert.assertTrue( sb.toString().contains( \"testAlias\" ) );\n    Assert.assertTrue( sb.toString().contains( \"testColumnName\" ) );\n  }\n\n  @Test\n  public void getHBaseTypeDescTest() {\n    String stubType = stubValueMeta.getHBaseTypeDesc();\n\n    Assert.assertEquals( \"Integer\", stubType );\n  }\n\n  @Test\n  public void getHBaseTypeDescNumberTest() {\n    stubValueMeta.setType( 1 );\n    String stubType = 
stubValueMeta.getHBaseTypeDesc();\n\n    Assert.assertEquals( \"Float\", stubType );\n  }\n\n  @Test\n  public void decodeNullBytesTest() throws KettleException {\n    Object shouldBeNull = stubValueMeta.decodeColumnValue( null );\n\n    Assert.assertNull( shouldBeNull );\n  }\n\n  @Test\n  public void decodeStringIntoObject() throws KettleException {\n    stubValueMeta.setType( 2 );\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( \"stubString\" ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void decodeNumberIntoObject() throws KettleException {\n    stubValueMeta.setType( 1 );\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( 2.2 ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void decodeFloadIntoObject() throws KettleException {\n    stubValueMeta.setType( 1 );\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( 2.2f ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void decodeIntegerIntoObject() throws KettleException {\n    stubValueMeta.setType( 5 );\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( 1 ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void decodeLongIntoObject() throws KettleException {\n    stubValueMeta.setType( 5 );\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( 1L ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void decodeShortIntoObject() throws KettleException {\n    stubValueMeta.setType( 5 );\n    short i = 1;\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( i ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void decodeBigNumberIntoObject() throws KettleException {\n    stubValueMeta.setType( 6 );\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes(  \"9.9999999\" ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void decodeBooleanStringIntoObject() throws KettleException {\n    stubValueMeta.setType( 
4 );\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( \"1\" ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void decodeBooleanFloatIntoObject() throws KettleException {\n    stubValueMeta.setType( 4 );\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( 1.0f ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void decodeBooleanLongIntoObject() throws KettleException {\n    stubValueMeta.setType( 4 );\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( 1L ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void decodeBooleanDoubleIntoObject() throws KettleException {\n    stubValueMeta.setType( 4 );\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( 1.0 ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void decodeBooleanBytesIntoObject() throws KettleException {\n    stubValueMeta.setType( 4 );\n    byte i = 1;\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( i ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  public void decodeBooleanShortIntoObject() throws KettleException {\n    stubValueMeta.setType( 4 );\n    short i = 1;\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( i ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void decodeBooleanIntoObject() throws KettleException {\n    stubValueMeta.setType( 4 );\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( 1 ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void decodeBytesIntoObject() throws KettleException {\n    stubValueMeta.setType( 8 );\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( 1010 ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void decodeDateIntoObject() throws KettleException {\n    stubValueMeta.setType( 3 );\n    Object str = stubValueMeta.decodeColumnValue( Bytes.toBytes( 1539717565559l ) );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  
public void encodeNullBytesTest() throws KettleException {\n    Object shouldBeNull = stubValueMeta.encodeColumnValue( null, stubValueMeta );\n\n    Assert.assertNull( shouldBeNull );\n  }\n\n  @Test\n  public void encodeStringIntoBytes() throws KettleException {\n    stubValueMeta.setType( 2 );\n    Object str = stubValueMeta.encodeColumnValue( \"stubString\", stubValueMeta );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void encodeNumberIntoBytes() throws KettleException {\n    stubValueMeta.setType( 1 );\n    Object str = stubValueMeta.encodeColumnValue(2.2, stubValueMeta );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void encodeIntegerIntoBytes() throws KettleException {\n    stubValueMeta.setType( 5 );\n    Object str = stubValueMeta.encodeColumnValue(1L, stubValueMeta );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void encodeBigNumberIntoBytes() throws KettleException {\n    stubValueMeta.setType( 6 );\n    Object str = stubValueMeta.encodeColumnValue( new BigDecimal(  9.9999999 ), stubValueMeta );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void encodeDateIntoBytes() throws KettleException {\n    stubValueMeta.setType( 3 );\n    Object str = stubValueMeta.encodeColumnValue( new Date(), stubValueMeta );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void encodeBooleanIntoBytes() throws KettleException {\n    stubValueMeta.setType( 4 );\n    Object str = stubValueMeta.encodeColumnValue( Boolean.TRUE, stubValueMeta );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void encodeBinaryIntoBytes() throws KettleException {\n    stubValueMeta.setType( 8 );\n    Object str = stubValueMeta.encodeColumnValue( new byte[]{ 1, 0, 1 }, stubValueMeta );\n\n    Assert.assertNotNull( str );\n  }\n\n  @Test\n  public void integerIsNotLongOrDoubleTest() {\n    stubValueMeta.setHBaseTypeFromString( \"Integer\" );\n\n    Assert.assertFalse( stubValueMeta.getIsLongOrDouble() );\n  
}\n\n  @Test\n  public void longIsLongOrDouble() {\n    stubValueMeta.setHBaseTypeFromString( \"Long\" );\n\n    Assert.assertTrue( stubValueMeta.getIsLongOrDouble() );\n  }\n\n  @Test\n  public void floatIsNotLongOrDouble() {\n    stubValueMeta.setHBaseTypeFromString( \"Float\" );\n\n    Assert.assertFalse( stubValueMeta.getIsLongOrDouble() );\n  }\n\n  @Test\n  public void doubleIsLongOrDouble() {\n    stubValueMeta.setHBaseTypeFromString( \"Double\" );\n\n    Assert.assertTrue( stubValueMeta.getIsLongOrDouble() );\n  }\n}\n\n"
  },
  {
    "path": "kettle-plugins/hbase-meta/src/test/resources/StubMapping.xml",
    "content": "<step>\n    <name>Test</name>\n    <type>HBaseInput</type>\n    <description/>\n    <distribute>N</distribute>\n    <custom_distribution/>\n    <copies>1</copies>\n    <partitioning>\n        <method>none</method>\n        <schema_name/>\n    </partitioning>\n\n    <cluster_name>Local Sandbox</cluster_name>\n\n    <zookeeper_hosts>sandbox-hdp.hortonworks.com</zookeeper_hosts>\n\n    <zookeeper_port>2181</zookeeper_port>\n\n    <source_table_name>iemployee</source_table_name>\n\n    <output_fields>\n        <field>\n            <table_name>iemployee</table_name>\n\n            <mapping_name>simple input map</mapping_name>\n\n            <alias>Rowkey</alias>\n\n            <family/>\n\n            <column>Rowkey</column>\n\n            <key>Y</key>\n\n            <type>Integer</type>\n\n            <format/>\n\n        </field>\n        <field>\n            <table_name>iemployee</table_name>\n\n            <mapping_name>simple input map</mapping_name>\n\n            <alias>fname</alias>\n\n            <family>personal</family>\n\n            <column>fname</column>\n\n            <key>N</key>\n\n            <type>Float</type>\n\n            <format/>\n\n        </field>\n        <field>\n            <table_name>iemployee</table_name>\n\n            <mapping_name>simple input map</mapping_name>\n\n            <alias>lname</alias>\n\n            <family>personal</family>\n\n            <column>lname</column>\n\n            <key>N</key>\n\n            <type>Double</type>\n\n            <format/>\n\n        </field>\n        <field>\n            <table_name>iemployee</table_name>\n\n            <mapping_name>simple input map</mapping_name>\n\n            <alias>salary</alias>\n\n            <family>payroll</family>\n\n            <column>salary</column>\n\n            <key>N</key>\n\n            <type>Float</type>\n\n            <format/>\n\n        </field>\n    </output_fields>\n    <match_any_filter>N</match_any_filter>\n\n    <mapping>\n        
<mapping_name>simple input map</mapping_name>\n\n        <table_name>iemployee</table_name>\n\n        <key>Rowkey</key>\n\n        <key_type>Integer</key_type>\n\n        <mapped_columns>\n            <mapped_column>\n                <alias>fname</alias>\n\n                <column_family>personal</column_family>\n\n                <column_name>fname</column_name>\n\n                <type>Integer</type>\n\n            </mapped_column>\n            <mapped_column>\n                <alias>lname</alias>\n\n                <column_family>personal</column_family>\n\n                <column_name>lname</column_name>\n\n                <type>Long</type>\n\n            </mapped_column>\n            <mapped_column>\n                <alias>salary</alias>\n\n                <column_family>payroll</column_family>\n\n                <column_name>salary</column_name>\n\n                <type>Float</type>\n\n            </mapped_column>\n        </mapped_columns>\n    </mapping><attributes></attributes>\n    <cluster_schema/>\n    <remotesteps>\n        <input>\n        </input>\n        <output>\n        </output>\n    </remotesteps>\n    <GUI>\n        <xloc>160</xloc>\n        <yloc>144</yloc>\n        <draw>Y</draw>\n    </GUI>\n</step>"
  },
  {
    "path": "kettle-plugins/hdfs/assemblies/plugin/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>hdfs-assemblies</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pdi-hdfs-plugin</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Hdfs Plugin Distribution</name>\n\n  <properties>\n    <resources.directory>${project.basedir}/src/main/resources</resources.directory>\n    <assembly.dir>${project.build.directory}/assembly</assembly.dir>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-hdfs-core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/hdfs/assemblies/plugin/src/assembly/assembly.xml",
    "content": "<assembly xmlns=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3\"\n          xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n          xsi:schemaLocation=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd\">\n  <id>zip</id>\n  <formats>\n    <format>zip</format>\n  </formats>\n\n  <baseDirectory></baseDirectory>\n\n  <fileSets>\n    <fileSet>\n      <directory>${resources.directory}</directory>\n      <outputDirectory>.</outputDirectory>\n      <filtered>true</filtered>\n    </fileSet>\n\n    <!-- the staging dir -->\n    <fileSet>\n      <directory>${assembly.dir}</directory>\n      <outputDirectory>.</outputDirectory>\n    </fileSet>\n  </fileSets>\n\n  <dependencySets>\n    <dependencySet>\n      <outputDirectory>.</outputDirectory>\n      <includes>\n        <include>pentaho:pdi-hdfs-core:jar</include>\n      </includes>\n      <useProjectArtifact>false</useProjectArtifact>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <outputDirectory>.</outputDirectory>\n      <useTransitiveDependencies>false</useTransitiveDependencies>\n      <useProjectArtifact>false</useProjectArtifact>\n      <includes>\n        <include>pentaho:pdi-hdfs-core:jar</include>\n      </includes>\n    </dependencySet>\n  </dependencySets>\n</assembly>"
  },
  {
    "path": "kettle-plugins/hdfs/assemblies/plugin/src/main/resources/version.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<version branch='TRUNK'>${project.version}</version>"
  },
  {
    "path": "kettle-plugins/hdfs/assemblies/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-hdfs</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>hdfs-assemblies</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Hdfs Plugin Assemblies</name>\n\n  <modules>\n    <module>plugin</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-hdfs</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pdi-hdfs-core</artifactId>\n  <name>PDI Hdfs Core</name>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n    <platform.version>11.1.0.0-SNAPSHOT</platform.version>\n    <jdom.version>1.1.3</jdom.version>\n    <dependency.com.tinkerpop.blueprints.version>2.6.0</dependency.com.tinkerpop.blueprints.version>\n  </properties>\n\n  <!-- VERIFY THESE IMPORTS THAT WERE IN THE BUILD SECTION WHEN THE PLUGIN WAS OSGI. ARE THEY NEEDED?\n    <Import-Package>org.eclipse.swt*;resolution:=optional,org.pentaho.di.ui.xul*;resolution:=optional,org.pentaho.ui.xul*;resolution:=optional,org.pentaho.di.osgi,org.pentaho.di.core.plugins,!org.pentaho.di.ui.core.namedcluster.*,*</Import-Package>\n    -->\n  <build>\n    <resources>\n      <resource>\n        <directory>src/main/resources</directory>\n        <filtering>false</filtering>\n      </resource>\n      <resource>\n        <directory>src/main/resources-filtered</directory>\n        <filtering>true</filtering>\n      </resource>\n    </resources>\n  </build>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-common-ui</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n      <exclusions>\n        <exclusion>\n          
<groupId>org.mockito</groupId>\n          <artifactId>mockito-all</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-ui-swt</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>com.tinkerpop.blueprints</groupId>\n      <artifactId>blueprints-core</artifactId>\n      <version>${dependency.com.tinkerpop.blueprints.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-platform-core</artifactId>\n      <version>${platform.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-inline</artifactId>\n      <version>${mockito-inline.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n      <exclusions>\n        
<exclusion>\n          <groupId>org.mockito</groupId>\n          <artifactId>mockito-all</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy</artifactId>\n      <version>${project.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>org.mockito</groupId>\n          <artifactId>mockito-all</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-cluster</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>org.mockito</groupId>\n          <artifactId>mockito-all</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-api-runtimeTest</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n<!--    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-common-ui</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>-->\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-metaverse-api</artifactId>\n      <version>${pentaho-metaverse.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <!-- Needed for tests -->\n    <dependency>\n      <groupId>org.jdom</groupId>\n      <artifactId>jdom</artifactId>\n      <version>${jdom.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.logging.log4j</groupId>\n      <artifactId>log4j-core</artifactId>\n      <version>${log4j.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      
<groupId>commons-httpclient</groupId>\n      <artifactId>commons-httpclient</artifactId>\n      <version>${dependency.commons-httpclient.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>org.mockito</groupId>\n          <artifactId>mockito-all</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/HdfsLifecycleListener.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.big.data.kettle.plugins.hdfs.vfs.HadoopVfsFileChooserDialog;\nimport org.pentaho.big.data.kettle.plugins.hdfs.vfs.MapRFSFileChooserDialog;\nimport org.pentaho.big.data.kettle.plugins.hdfs.vfs.NamedClusterVfsFileChooserDialog;\nimport org.pentaho.big.data.kettle.plugins.hdfs.vfs.Schemes;\nimport org.pentaho.di.core.annotations.LifecyclePlugin;\nimport org.pentaho.di.core.lifecycle.LifeEventHandler;\nimport org.pentaho.di.core.lifecycle.LifecycleException;\nimport org.pentaho.di.core.lifecycle.LifecycleListener;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.runtime.test.action.impl.RuntimeTestActionServiceImpl;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\n\n/**\n * Created by bryan on 11/23/15.\n */\n@LifecyclePlugin( id = \"HdfsLifecycleListener\", name = \"HdfsLifecycleListener\" )\npublic class HdfsLifecycleListener implements LifecycleListener {\n\n  private final int hdfsPriority = 150;\n  private final int maprPriority = 160;\n  private final int ncPriority = 110;\n\n  private final NamedClusterService ncService;\n  private final RuntimeTestActionService rtTestActServ;\n  private final RuntimeTester rtTester;\n  private HadoopVfsFileChooserDialog 
hdfsFileChooserDialog;\n  private MapRFSFileChooserDialog mapRFSFileChooserDialog;\n  private NamedClusterVfsFileChooserDialog ncFileChooserDialog;\n\n  public HdfsLifecycleListener() {\n    this.ncService = NamedClusterManager.getInstance();\n    this.rtTestActServ = RuntimeTestActionServiceImpl.getInstance();\n    this.rtTester = RuntimeTesterImpl.getInstance();\n  }\n\n  @Deprecated\n  // This OSGI constructor should be removed\n  public HdfsLifecycleListener( NamedClusterService namedClusterService,\n                                RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester ) {\n    this.ncService = namedClusterService;\n    this.rtTestActServ = runtimeTestActionService;\n    this.rtTester = runtimeTester;\n  }\n\n  @Override public void onStart( LifeEventHandler lifeEventHandler ) throws LifecycleException {\n    final Spoon spoon = Spoon.getInstance();\n\n    // Add dialogs on display thread\n    spoon.getDisplay().asyncExec( new Runnable() {\n      @Override public void run() {\n        VfsFileChooserDialog dialog = spoon.getVfsFileChooserDialog( null, null );\n        hdfsFileChooserDialog = new HadoopVfsFileChooserDialog( Schemes.HDFS_SCHEME, Schemes.HDFS_SCHEME_DISPLAY_NAME, dialog, null, null, ncService, rtTestActServ, rtTester );\n        dialog.addVFSUIPanel( hdfsPriority, hdfsFileChooserDialog );\n        mapRFSFileChooserDialog = new MapRFSFileChooserDialog( Schemes.MAPRFS_SCHEME, Schemes.MAPRFS_SCHEME_DISPLAY_NAME, dialog );\n        dialog.addVFSUIPanel( maprPriority, mapRFSFileChooserDialog );\n        ncFileChooserDialog = new NamedClusterVfsFileChooserDialog( Schemes.NAMED_CLUSTER_SCHEME, Schemes.NAMED_CLUSTER_SCHEME_DISPLAY_NAME, dialog, null, null, ncService, rtTestActServ, rtTester );\n        dialog.addVFSUIPanel( ncPriority, ncFileChooserDialog );\n      }\n    } );\n  }\n\n  @Override public void onExit( LifeEventHandler lifeEventHandler ) throws LifecycleException {\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/job/JobEntryHadoopCopyFiles.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.job;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.di.core.annotations.JobEntry;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.job.entries.copyfiles.JobEntryCopyFiles;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.runtime.test.action.impl.RuntimeTestActionServiceImpl;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\n\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Objects;\n\n@JobEntry( id = \"HadoopCopyFilesPlugin\", image = \"HDM.svg\", name = \"HadoopCopyFilesPlugin.Name\",\n  description = \"HadoopCopyFilesPlugin.Description\",\n  categoryDescription = \"i18n:org.pentaho.di.job:JobCategory.Category.BigData\",\n  i18nPackageName = \"org.pentaho.di.job.entries.hadoopcopyfiles\" )\npublic class JobEntryHadoopCopyFiles extends JobEntryCopyFiles {\n\n  public static final String S3_SOURCE_FILE = \"S3-SOURCE-FILE-\";\n  public static final String S3_DEST_FILE = \"S3-DEST-FILE-\";\n  private final NamedClusterService namedClusterService;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n\n  public JobEntryHadoopCopyFiles() {\n    
this.namedClusterService = NamedClusterManager.getInstance();\n    this.runtimeTestActionService = RuntimeTestActionServiceImpl.getInstance();\n    this.runtimeTester = RuntimeTesterImpl.getInstance();\n    this.fileFolderUrlMappings = new HashMap<>();\n  }\n\n  /**\n   * Hold mapping to go back to unresolved or original URL stored in the xml.\n   * <p/>\n   * Mapping legend:\n   * <ul>\n   *   <li><b>Key:</b> return value from {@link #loadURL(String, String, IMetaStore, Map)}</li>\n   *   <li><b>Value:</b> stored URL from fields ( {@link #SOURCE_FILE_FOLDER } and {@link #DESTINATION_FILE_FOLDER} ) or first parameter\n   *    * of {@link #loadURL(String, String, IMetaStore, Map)}</li>\n   * </ul>\n   */\n  protected final Map<String, String> fileFolderUrlMappings;\n\n  public JobEntryHadoopCopyFiles( NamedClusterService namedClusterService,\n                                  RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester ) {\n    this.namedClusterService = namedClusterService;\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.runtimeTester = runtimeTester;\n    this.fileFolderUrlMappings = new HashMap<>();\n  }\n\n  @Override\n  public String loadURL( String url, String ncName, IMetaStore metastore, Map mappings ) {\n    NamedCluster c = namedClusterService.getNamedClusterByName( ncName, metastore );\n    String origUrl = url;\n    boolean saveArgumentUrl = false;\n    String pref = null;\n\n    if ( url != null && url.indexOf( SOURCE_URL ) > -1 ) {\n      origUrl = url;\n      url = origUrl.substring( origUrl.indexOf( \"-\", origUrl.indexOf( SOURCE_URL ) + SOURCE_URL.length() ) + 1 );\n      pref = origUrl.substring( 0, origUrl.indexOf( \"-\", origUrl.indexOf( SOURCE_URL ) + SOURCE_URL.length() ) + 1 );\n    } else if ( url != null && url.indexOf( DEST_URL ) > -1 ) {\n      origUrl = url;\n      url = origUrl.substring( origUrl.indexOf( \"-\", origUrl.indexOf( DEST_URL ) + DEST_URL.length() ) + 1 );\n      
pref = origUrl.substring( 0, origUrl.indexOf( \"-\", origUrl.indexOf( DEST_URL ) + DEST_URL.length() ) + 1 );\n    }\n    if ( c != null ) {\n      String valueBeforeCall = url;\n      url = c.processURLsubstitution( url, metastore, getVariables() );\n      saveArgumentUrl = !Objects.equals( valueBeforeCall, url );\n    }\n    if ( pref != null ) {\n      url = pref + url;\n    }\n\n    if ( saveArgumentUrl ) {\n      fileFolderUrlMappings.put( url, origUrl );\n    }\n\n    return super.loadURL( url, ncName, metastore, mappings );\n  }\n\n  /**\n   * Preserve the original URL input argument from {@link #loadURL(String, String, IMetaStore, Map)} and don't save the\n   * \"resolved\" URL, otherwise call normal logic from super class.\n   * @see JobEntryCopyFiles#loadURL(String, String, IMetaStore, Map)\n   * @param url\n   * @param ncName\n   * @param metastore\n   * @param mappings\n   * @return original URL if it has changed otherwise, the result from super class\n   */\n  @Override\n  public String saveURL( String url, String ncName, IMetaStore metastore, Map<String, String> mappings ) {\n    return !Objects.isNull( url ) && fileFolderUrlMappings.containsKey( url )\n      ? fileFolderUrlMappings.get( url )\n      : super.saveURL( url, ncName, metastore, mappings );\n  }\n\n  @VisibleForTesting\n  @Override protected VariableSpace getVariables() {\n    return super.getVariables();\n  }\n\n  public NamedClusterService getNamedClusterService() {\n    return namedClusterService;\n  }\n\n  public RuntimeTestActionService getRuntimeTestActionService() {\n    return runtimeTestActionService;\n  }\n\n  public RuntimeTester getRuntimeTester() {\n    return runtimeTester;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/job/JobEntryHadoopCopyFilesDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.job;\n\nimport org.apache.commons.lang.ArrayUtils;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.graphics.Image;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.TableItem;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.di.core.plugins.ParentFirst;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.big.data.kettle.plugins.hdfs.vfs.HadoopVfsFileChooserDialog;\nimport org.pentaho.big.data.kettle.plugins.hdfs.vfs.Schemes;\nimport org.pentaho.big.data.plugins.common.ui.NamedClusterWidgetImpl;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.logging.LogChannel;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.job.entries.copyfiles.JobEntryCopyFiles;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.ui.core.ConstUI;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.core.widget.ColumnInfo;\nimport org.pentaho.di.ui.job.entries.copyfiles.JobEntryCopyFilesDialog;\nimport 
org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.vfs.ui.CustomVfsUiPanel;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\nimport java.io.File;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\n@PluginDialog( id = \"HadoopCopyFilesPlugin\", image = \"HDM.svg\", pluginType = PluginDialog.PluginType.JOBENTRY,\n        documentationUrl = \"pdi-job-entries-reference-overview/hadoop-copy-files\" )\n//@ParentFirst( patterns = { \"../../lib\" } )\npublic class JobEntryHadoopCopyFilesDialog extends JobEntryCopyFilesDialog {\n  private static Class<?> BASE_PKG = JobEntryCopyFiles.class; // for i18n purposes, needed by Translator2!! $NON-NLS-1$\n  private static Class<?> PKG = JobEntryHadoopCopyFiles.class; // for i18n purposes, needed by Translator2!! $NON-NLS-1$\n  private LogChannel log = new LogChannel( this );\n  private JobEntryHadoopCopyFiles jobEntryHadoopCopyFiles;\n  private final NamedClusterService namedClusterService;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n\n  public static final String S3_ENVIRONMENT = \"S3\";\n\n  public JobEntryHadoopCopyFilesDialog( Shell parent, JobEntryInterface jobEntryInt, Repository rep, JobMeta jobMeta ) {\n    super( parent, jobEntryInt, rep, jobMeta );\n    jobEntry = (JobEntryCopyFiles) jobEntryInt;\n    jobEntryHadoopCopyFiles = (JobEntryHadoopCopyFiles) jobEntry;\n    namedClusterService = jobEntryHadoopCopyFiles.getNamedClusterService();\n    runtimeTestActionService = jobEntryHadoopCopyFiles.getRuntimeTestActionService();\n    runtimeTester = jobEntryHadoopCopyFiles.getRuntimeTester();\n    if ( this.jobEntry.getName() == null ) {\n      this.jobEntry.setName( BaseMessages.getString( BASE_PKG, \"JobCopyFiles.Name.Default\" ) );\n    }\n  }\n\n  
protected void initUI() {\n    super.initUI();\n    shell.setText( BaseMessages.getString( PKG, \"JobHadoopCopyFiles.Title\" ) );\n  }\n\n  protected SelectionAdapter getFileSelectionAdapter() {\n    return new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        String path = wFields.getActiveTableItem().getText( wFields.getActiveTableColumn() );\n        String clusterName = wFields.getActiveTableItem().getText( wFields.getActiveTableColumn() - 1 );\n        setSelectedFile( path, clusterName );\n      }\n    };\n  }\n\n  /**\n   * Copy information from the meta-data input to the dialog fields.\n   */\n  public void getData() {\n\n    if ( jobEntry.getName() != null ) {\n      wName.setText( jobEntry.getName() );\n    }\n    wName.selectAll();\n    wCopyEmptyFolders.setSelection( jobEntry.copy_empty_folders );\n\n    if ( jobEntry.source_filefolder != null ) {\n      for ( int i = 0; i < jobEntry.source_filefolder.length; i++ ) {\n        TableItem ti = wFields.table.getItem( i );\n        if ( jobEntry.source_filefolder[i] != null ) {\n          String sourceUrl = jobEntry.source_filefolder[i];\n          String clusterName = jobEntry.getConfigurationBy( sourceUrl );\n          if ( clusterName != null ) {\n            clusterName =\n                clusterName.startsWith( JobEntryCopyFiles.LOCAL_SOURCE_FILE ) ? LOCAL_ENVIRONMENT : clusterName;\n            clusterName =\n                clusterName.startsWith( JobEntryCopyFiles.STATIC_SOURCE_FILE ) ? STATIC_ENVIRONMENT : clusterName;\n            clusterName =\n                clusterName.startsWith( JobEntryHadoopCopyFiles.S3_SOURCE_FILE ) ? S3_ENVIRONMENT : clusterName;\n\n            ti.setText( 1, clusterName );\n            sourceUrl =\n                clusterName.equals( LOCAL_ENVIRONMENT ) || clusterName.equals( STATIC_ENVIRONMENT )\n                  || clusterName.equals( S3_ENVIRONMENT ) ? 
sourceUrl : jobEntry.getUrlPath(\n                    sourceUrl.replace( JobEntryCopyFiles.SOURCE_URL + i + \"-\", \"\" ) );\n          }\n          if ( sourceUrl != null ) {\n            sourceUrl = sourceUrl.replace( JobEntryCopyFiles.SOURCE_URL + i + \"-\", \"\" );\n          } else {\n            sourceUrl = \"\";\n          }\n          ti.setText( 2, sourceUrl );\n        }\n        if ( jobEntry.wildcard[i] != null ) {\n          ti.setText( 3, jobEntry.wildcard[i] );\n        }\n        if ( jobEntry.destination_filefolder[i] != null ) {\n          String destinationURL = jobEntry.destination_filefolder[i];\n          String clusterName = jobEntry.getConfigurationBy( destinationURL );\n          if ( clusterName != null ) {\n            clusterName =\n                clusterName.startsWith( JobEntryCopyFiles.LOCAL_DEST_FILE ) ? LOCAL_ENVIRONMENT : clusterName;\n            clusterName =\n                clusterName.startsWith( JobEntryCopyFiles.STATIC_DEST_FILE ) ? STATIC_ENVIRONMENT : clusterName;\n            clusterName =\n                clusterName.startsWith( JobEntryHadoopCopyFiles.S3_DEST_FILE ) ? S3_ENVIRONMENT : clusterName;\n            ti.setText( 4, clusterName );\n            destinationURL =\n                clusterName.equals( LOCAL_ENVIRONMENT ) || clusterName.equals( STATIC_ENVIRONMENT )\n                  || clusterName.equals( S3_ENVIRONMENT ) ? destinationURL : jobEntry.getUrlPath(\n                    destinationURL.replace( JobEntryCopyFiles.DEST_URL + i + \"-\", \"\" ) );\n          }\n          if ( destinationURL != null ) {\n            destinationURL = destinationURL.replace( JobEntryCopyFiles.DEST_URL + i + \"-\", \"\" );\n          } else {\n            destinationURL = \"\";\n          }\n          ti.setText( 5, destinationURL != null ? 
destinationURL : \"\" );\n        }\n      }\n      wFields.setRowNums();\n      wFields.optWidth( true );\n    }\n    wPrevious.setSelection( jobEntry.arg_from_previous );\n    wOverwriteFiles.setSelection( jobEntry.overwrite_files );\n    wIncludeSubfolders.setSelection( jobEntry.include_subfolders );\n    wRemoveSourceFiles.setSelection( jobEntry.remove_source_files );\n    wDestinationIsAFile.setSelection( jobEntry.destination_is_a_file );\n    wCreateDestinationFolder.setSelection( jobEntry.create_destination_folder );\n    wAddFileToResult.setSelection( jobEntry.add_result_filesname );\n  }\n\n  protected void ok() {\n    if ( Utils.isEmpty( wName.getText() ) ) {\n      MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n      mb.setText( BaseMessages.getString( BASE_PKG, \"System.StepJobEntryNameMissing.Title\" ) );\n      mb.setMessage( BaseMessages.getString( BASE_PKG, \"System.JobEntryNameMissing.Msg\" ) );\n      mb.open();\n      return;\n    }\n\n    jobEntry.setName( wName.getText() );\n    jobEntry.setCopyEmptyFolders( wCopyEmptyFolders.getSelection() );\n    jobEntry.setoverwrite_files( wOverwriteFiles.getSelection() );\n    jobEntry.setIncludeSubfolders( wIncludeSubfolders.getSelection() );\n    jobEntry.setArgFromPrevious( wPrevious.getSelection() );\n    jobEntry.setRemoveSourceFiles( wRemoveSourceFiles.getSelection() );\n    jobEntry.setAddresultfilesname( wAddFileToResult.getSelection() );\n    jobEntry.setDestinationIsAFile( wDestinationIsAFile.getSelection() );\n    jobEntry.setCreateDestinationFolder( wCreateDestinationFolder.getSelection() );\n\n    int nritems = wFields.nrNonEmpty();\n    Map<String, String> namedClusterURLMappings = new HashMap<String, String>();\n    jobEntry.source_filefolder = new String[nritems];\n    jobEntry.destination_filefolder = new String[nritems];\n    jobEntry.wildcard = new String[nritems];\n    for ( int i = 0; i < nritems; i++ ) {\n\n      String sourceNc = wFields.getNonEmpty( i ).getText( 
1 );\n      sourceNc = sourceNc.equals( LOCAL_ENVIRONMENT ) ? JobEntryCopyFiles.LOCAL_SOURCE_FILE + i : sourceNc;\n      sourceNc = sourceNc.equals( STATIC_ENVIRONMENT ) ? JobEntryCopyFiles.STATIC_SOURCE_FILE + i : sourceNc;\n      sourceNc = sourceNc.equals( S3_ENVIRONMENT ) ? JobEntryHadoopCopyFiles.S3_SOURCE_FILE + i : sourceNc;\n      String source = wFields.getNonEmpty( i ).getText( 2 );\n      String wild = wFields.getNonEmpty( i ).getText( 3 );\n      String destNc = wFields.getNonEmpty( i ).getText( 4 );\n      destNc = destNc.equals( LOCAL_ENVIRONMENT ) ? JobEntryCopyFiles.LOCAL_DEST_FILE + i : destNc;\n      destNc = destNc.equals( STATIC_ENVIRONMENT ) ? JobEntryCopyFiles.STATIC_DEST_FILE + i : destNc;\n      destNc = destNc.equals( S3_ENVIRONMENT ) ? JobEntryHadoopCopyFiles.S3_DEST_FILE + i : destNc;\n      String dest = wFields.getNonEmpty( i ).getText( 5 );\n      source = JobEntryCopyFiles.SOURCE_URL + i + \"-\" + source;\n      dest = JobEntryCopyFiles.DEST_URL + i + \"-\" + dest;\n\n      jobEntry.source_filefolder[i] = jobEntry.loadURL( source, sourceNc, getMetaStore(), namedClusterURLMappings );\n      jobEntry.destination_filefolder[i] = jobEntry.loadURL( dest, destNc, getMetaStore(), namedClusterURLMappings );\n      jobEntry.wildcard[i] = wild;\n    }\n    jobEntry.setConfigurationMappings( namedClusterURLMappings );\n    dispose();\n  }\n\n  private FileObject setSelectedFile( String path, String clusterName ) {\n\n    FileObject selectedFile = null;\n\n    try {\n      // Get current file\n      FileObject rootFile = null;\n      FileObject initialFile = null;\n      FileObject defaultInitialFile = null;\n\n      if ( !clusterName.equals( LOCAL_ENVIRONMENT ) && !clusterName.equals( S3_ENVIRONMENT ) ) {\n        NamedCluster namedCluster = namedClusterService.getNamedClusterByName( clusterName, getMetaStore() );\n        if ( Utils.isEmpty( path ) ) {\n          path = \"/\";\n        }\n        if ( namedCluster == null ) {\n          return 
null;\n        }\n        path = namedCluster.processURLsubstitution( path, getMetaStore(), jobMeta );\n      }\n\n      boolean resolvedInitialFile = false;\n\n      if ( clusterName.equals( S3_ENVIRONMENT ) && !path.startsWith( Schemes.S3_SCHEME + \"://\" ) ) {\n        path = Schemes.S3_SCHEME + \"://\";\n      }\n\n      if ( path != null ) {\n\n        String fileName = jobMeta.environmentSubstitute( path );\n\n        if ( fileName != null && !fileName.equals( \"\" ) ) {\n          try {\n            initialFile = KettleVFS.getInstance( jobMeta.getBowl() ).getFileObject( fileName );\n            resolvedInitialFile = true;\n          } catch ( Exception e ) {\n            showMessageAndLog( BaseMessages.getString( PKG, \"JobHadoopCopyFiles.Connection.Error.title\" ), BaseMessages.getString(\n                PKG, \"JobHadoopCopyFiles.Connection.error\" ), e.getMessage() );\n            return null;\n          }\n          File startFile = new File( System.getProperty( \"user.home\" ) );\n          defaultInitialFile = KettleVFS.getInstance( jobMeta.getBowl() ).getFileObject( startFile.getAbsolutePath() );\n          rootFile = initialFile.getFileSystem().getRoot();\n        } else {\n          defaultInitialFile = KettleVFS.getInstance( jobMeta.getBowl() )\n            .getFileObject( Spoon.getInstance().getLastFileOpened() );\n        }\n      }\n\n      if ( rootFile == null ) {\n        if ( defaultInitialFile == null ) {\n          return null;\n        }\n        rootFile = defaultInitialFile.getFileSystem().getRoot();\n        initialFile = defaultInitialFile;\n      }\n      VfsFileChooserDialog fileChooserDialog = Spoon.getInstance().getVfsFileChooserDialog( rootFile, initialFile );\n      fileChooserDialog.defaultInitialFile = defaultInitialFile;\n\n      NamedClusterWidgetImpl namedClusterWidget = null;\n\n      if ( clusterName.equals( LOCAL_ENVIRONMENT ) ) {\n        selectedFile =\n            fileChooserDialog.open( shell, new String[] { \"file\" 
}, \"file\", true, path, new String[] { \"*.*\" },\n                FILETYPES, false, VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE_OR_DIRECTORY, false, false );\n      } else if ( clusterName.equals( S3_ENVIRONMENT ) ) {\n        selectedFile =\n            fileChooserDialog.open( shell, new String[] { Schemes.S3_SCHEME, Schemes.S3N_SCHEME }, Schemes.S3_SCHEME, true,\n              path, new String[] { \"*.*\" }, FILETYPES, false, VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE_OR_DIRECTORY,\n                false, true );\n      } else {\n        NamedCluster namedCluster = namedClusterService.getNamedClusterByName( clusterName, getMetaStore() );\n        if ( namedCluster != null ) {\n          if ( namedCluster.isMapr() ) {\n            selectedFile =\n                fileChooserDialog.open( shell, new String[] { Schemes.MAPRFS_SCHEME },\n                  Schemes.MAPRFS_SCHEME, false, path, new String[] { \"*.*\" }, FILETYPES, true,\n                    VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE_OR_DIRECTORY, false, false );\n          } else {\n            List<CustomVfsUiPanel> customPanels = fileChooserDialog.getCustomVfsUiPanels();\n            for ( CustomVfsUiPanel panel : customPanels ) {\n              if ( panel instanceof HadoopVfsFileChooserDialog ) {\n                HadoopVfsFileChooserDialog hadoopDialog = ( (HadoopVfsFileChooserDialog) panel );\n                namedClusterWidget = hadoopDialog.getNamedClusterWidget();\n                namedClusterWidget.initiate();\n                hadoopDialog.setNamedCluster( clusterName );\n                hadoopDialog.initializeConnectionPanel( initialFile );\n              }\n            }\n            if ( resolvedInitialFile ) {\n              fileChooserDialog.initialFile = initialFile;\n            }\n            selectedFile =\n                fileChooserDialog.open( shell, new String[] { Schemes.HDFS_SCHEME },\n                  Schemes.HDFS_SCHEME, false, path, new String[] { \"*.*\" }, FILETYPES, true,\n      
              VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE_OR_DIRECTORY, false, false );\n          }\n        }\n      }\n\n      CustomVfsUiPanel currentPanel = fileChooserDialog.getCurrentPanel();\n      if ( currentPanel instanceof HadoopVfsFileChooserDialog ) {\n        namedClusterWidget = ( (HadoopVfsFileChooserDialog) currentPanel ).getNamedClusterWidget();\n      }\n\n      if ( selectedFile != null ) {\n        String url = selectedFile.getURL().toString();\n        if ( currentPanel != null ) {\n          if ( currentPanel.getVfsSchemeDisplayText().equals( LOCAL_ENVIRONMENT ) ) {\n            wFields.getActiveTableItem().setText( wFields.getActiveTableColumn() - 1, LOCAL_ENVIRONMENT );\n          } else if ( currentPanel.getVfsSchemeDisplayText().equals( S3_ENVIRONMENT ) ) {\n            wFields.getActiveTableItem().setText( wFields.getActiveTableColumn() - 1, S3_ENVIRONMENT );\n          } else if ( namedClusterWidget != null && namedClusterWidget.getSelectedNamedCluster() != null ) {\n            url = jobEntry.getUrlPath( url );\n            wFields.getActiveTableItem().setText( wFields.getActiveTableColumn() - 1,\n              namedClusterWidget.getSelectedNamedCluster().getName() );\n          }\n        }\n        wFields.getActiveTableItem().setText( wFields.getActiveTableColumn(), url );\n      }\n\n      return selectedFile;\n\n    } catch ( KettleFileException ex ) {\n      log.logError( BaseMessages.getString( PKG, \"HadoopFileInputDialog.FileBrowser.KettleFileException\" ) );\n      return selectedFile;\n    } catch ( FileSystemException ex ) {\n      log.logError( BaseMessages.getString( PKG, \"HadoopFileInputDialog.FileBrowser.FileSystemException\" ) );\n      return selectedFile;\n    }\n  }\n\n  private void showMessageAndLog( String title, String message, String messageToLog ) {\n    MessageBox box = new MessageBox( shell );\n    box.setText( title ); //$NON-NLS-1$\n    box.setMessage( message );\n    log.logError( messageToLog );\n    
box.open();\n  }\n\n  protected Image getImage() {\n    return GUIResource.getInstance().getImage( \"HDM.svg\", getClass().getClassLoader(), ConstUI.ICON_SIZE,\n        ConstUI.ICON_SIZE );\n  }\n\n  public boolean showFileButtons() {\n    return false;\n  }\n\n  protected void setComboValues( ColumnInfo colInfo ) {\n    try {\n      super.setComboValues( colInfo );\n      String[] superValues = colInfo.getComboValues();\n\n      String[] s3value = { S3_ENVIRONMENT };\n      String[] comboValues = (String[]) ArrayUtils.addAll( superValues, s3value );\n\n      String[] namedClusters = namedClusterService.listNames( getMetaStore() ).toArray( new String[0] );\n      String[] values = (String[]) ArrayUtils.addAll( comboValues, namedClusters );\n      colInfo.setComboValues( values );\n    } catch ( MetaStoreException e ) {\n      log.logError( e.getMessage() );\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/HadoopFileInputDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans;\n\nimport org.apache.commons.lang.ArrayUtils;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileType;\nimport org.eclipse.jface.window.Window;\nimport org.eclipse.jface.wizard.Wizard;\nimport org.eclipse.jface.wizard.WizardDialog;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.CCombo;\nimport org.eclipse.swt.custom.CTabFolder;\nimport org.eclipse.swt.custom.CTabItem;\nimport org.eclipse.swt.custom.ScrolledComposite;\nimport org.eclipse.swt.events.FocusListener;\nimport org.eclipse.swt.events.ModifyListener;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.events.ShellAdapter;\nimport org.eclipse.swt.events.ShellEvent;\nimport org.eclipse.swt.graphics.Cursor;\nimport org.eclipse.swt.graphics.Rectangle;\nimport org.eclipse.swt.layout.FillLayout;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Control;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Group;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.Table;\nimport org.eclipse.swt.widgets.TableItem;\nimport 
org.eclipse.swt.widgets.Text;\nimport org.eclipse.swt.widgets.ToolBar;\nimport org.eclipse.swt.widgets.ToolItem;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.big.data.kettle.plugins.hdfs.vfs.HadoopVfsFileChooserDialog;\nimport org.pentaho.big.data.kettle.plugins.hdfs.vfs.Schemes;\nimport org.pentaho.big.data.plugins.common.ui.NamedClusterWidgetImpl;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.Props;\nimport org.pentaho.di.core.compress.CompressionProvider;\nimport org.pentaho.di.core.compress.CompressionProviderFactory;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.fileinput.FileInputList;\nimport org.pentaho.di.core.gui.TextFileInputFieldInterface;\nimport org.pentaho.di.core.logging.LogChannel;\nimport org.pentaho.di.core.row.value.ValueMetaBase;\nimport org.pentaho.di.core.util.EnvUtil;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.entries.copyfiles.JobEntryCopyFiles;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.TransPreviewFactory;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDialogInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.steps.file.BaseFileField;\nimport org.pentaho.di.trans.steps.fileinput.text.BufferedInputStreamReader;\nimport org.pentaho.di.trans.steps.fileinput.text.EncodingType;\nimport org.pentaho.di.trans.steps.fileinput.text.TextFileFilter;\nimport org.pentaho.di.trans.steps.fileinput.text.TextFileInputMeta;\nimport org.pentaho.di.trans.steps.fileinput.text.TextFileInputUtils;\nimport org.pentaho.di.ui.core.dialog.EnterNumberDialog;\nimport 
org.pentaho.di.ui.core.dialog.EnterSelectionDialog;\nimport org.pentaho.di.ui.core.dialog.EnterTextDialog;\nimport org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.di.ui.core.dialog.PreviewRowsDialog;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.core.widget.ColumnInfo;\nimport org.pentaho.di.ui.core.widget.TableView;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.core.widget.VariableButtonListenerFactory;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.trans.dialog.TransPreviewProgressDialog;\nimport org.pentaho.di.ui.trans.step.BaseStepDialog;\nimport org.pentaho.di.ui.trans.steps.fileinput.text.TextFileCSVImportProgressDialog;\nimport org.pentaho.di.ui.trans.steps.fileinput.text.TextFileImportWizardPage1;\nimport org.pentaho.di.ui.trans.steps.fileinput.text.TextFileImportWizardPage2;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.vfs.ui.CustomVfsUiPanel;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.InputStreamReader;\nimport java.net.URI;\nimport java.nio.charset.Charset;\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Locale;\nimport java.util.Map;\nimport java.util.Vector;\n\n@PluginDialog( id = \"HadoopFileInputPlugin\", image = \"HDI.svg\", pluginType = PluginDialog.PluginType.STEP,\n        documentationUrl = \"pdi-transformation-steps-reference-overview/hadoop-file-input-cp-main-page\" )\npublic class HadoopFileInputDialog extends BaseStepDialog implements StepDialogInterface {\n  private static final Class<?> BASE_PKG = TextFileInputMeta.class; // for i18n purposes, needed by Translator2!!\n  private static final Class<?> PKG = 
HadoopFileInputMeta.class; // for i18n purposes, needed by Translator2!!\n  private static final String[] ALL_FILES_TYPE = new String[] {\n    BaseMessages.getString( PKG, \"System.FileType.AllFiles\" ) };\n\n  private LogChannel log = new LogChannel( this );\n\n  private static final String COMBO_NO = BaseMessages.getString( BASE_PKG, \"System.Combo.No\" );\n  private static final String COMBO_YES = BaseMessages.getString( BASE_PKG, \"System.Combo.Yes\" );\n  private static final String[] YES_NO_COMBO = new String[] { COMBO_NO, COMBO_YES };\n\n  public static final String LOCAL_ENVIRONMENT = \"Local\";\n  public static final String STATIC_ENVIRONMENT = \"<Static>\";\n  public static final String S3_ENVIRONMENT = \"S3\";\n\n  public static final String BUTTON_BROWSE = BaseMessages.getString( BASE_PKG, \"System.Button.Browse\" );\n  public static final String BUTTON_VARIABLE = BaseMessages.getString( BASE_PKG, \"System.Button.Variable\" );\n  public static final String ERROR_TITLE = BaseMessages.getString( BASE_PKG, \"System.Dialog.Error.Title\" );\n  public static final String TOOLTIP_VARIABLE = BaseMessages.getString( BASE_PKG, \"System.Tooltip.VariableToDir\" );\n  public static final String LABEL_EXTENSION = BaseMessages.getString( BASE_PKG, \"System.Label.Extension\" );\n\n  private CTabFolder wTabFolder;\n\n  private Button wAccFilenames;\n\n  private Label wlPassThruFields;\n  private Button wPassThruFields;\n\n  private Label wlAccField;\n  private Text wAccField;\n\n  private Label wlAccStep;\n  private CCombo wAccStep;\n\n  private Label wlFilenameList;\n  private TableView wFilenameList;\n\n  private Button wbShowFiles;\n\n  private Button wFirst;\n\n  private Button wFirstHeader;\n\n  private CCombo wFiletype;\n\n  private Button wbSeparator;\n  private TextVar wSeparator;\n\n  private Text wEnclosure;\n\n  private Text wEscape;\n\n  private Button wHeader;\n\n  private Label wlNrHeader;\n  private Text wNrHeader;\n\n  private Button wFooter;\n\n  
private Label wlNrFooter;\n  private Text wNrFooter;\n\n  private Button wWraps;\n\n  private Label wlNrWraps;\n  private Text wNrWraps;\n\n  private Button wLayoutPaged;\n\n  private Label wlNrLinesPerPage;\n  private Text wNrLinesPerPage;\n\n  private Label wlNrLinesDocHeader;\n  private Text wNrLinesDocHeader;\n\n  private CCombo wCompression;\n\n  private Button wNoempty;\n\n  private Button wInclFilename;\n\n  private Label wlInclFilenameField;\n  private Text wInclFilenameField;\n\n  private Button wInclRownum;\n\n  private Label wlRownumByFileField;\n  private Button wRownumByFile;\n\n  private Label wlInclRownumField;\n  private Text wInclRownumField;\n\n  private CCombo wFormat;\n\n  private CCombo wEncoding;\n\n  private Text wLimit;\n\n  private Button wDateLenient;\n\n  private CCombo wDateLocale;\n\n  // ERROR HANDLING...\n  private Button wErrorIgnored;\n\n  private Label wlSkipErrorLines;\n  private Button wSkipErrorLines;\n\n  private Label wlErrorCount;\n  private Text wErrorCount;\n\n  private Label wlErrorFields;\n  private Text wErrorFields;\n\n  private Label wlErrorText;\n  private Text wErrorText;\n\n  // New entries for intelligent error handling AKA replay functionality\n  // Bad files destination directory\n  private Label wlWarnDestDir;\n  private Button wbbWarnDestDir; // Browse: add file or directory\n  private Button wbvWarnDestDir; // Variable\n  private Text wWarnDestDir;\n  private Label wlWarnExt;\n  private Text wWarnExt;\n\n  // Error messages files destination directory\n  private Label wlErrorDestDir;\n  private Button wbbErrorDestDir; // Browse: add file or directory\n  private Button wbvErrorDestDir; // Variable\n  private Text wErrorDestDir;\n  private Label wlErrorExt;\n  private Text wErrorExt;\n\n  // Line numbers files destination directory\n  private Label wlLineNrDestDir;\n  private Button wbbLineNrDestDir; // Browse: add file or directory\n  private Button wbvLineNrDestDir; // Variable\n  private Text 
wLineNrDestDir;\n  private Label wlLineNrExt;\n  private Text wLineNrExt;\n\n  private TableView wFilter;\n\n  private TableView wFields;\n\n  private Button wAddResult;\n\n  private HadoopFileInputMeta input;\n\n  // Wizard info...\n  private Vector<TextFileInputFieldInterface> fields;\n\n  private int middle;\n  private int margin;\n  private ModifyListener lsMod;\n\n  public static final int[] dateLengths = new int[] { 23, 19, 14, 10, 10, 10, 10, 8, 8, 8, 8, 6, 6 };\n\n  private boolean gotEncodings = false;\n\n  protected boolean firstClickOnDateLocale;\n\n  private final NamedClusterService namedClusterService;\n\n  public HadoopFileInputDialog( Shell parent, Object in, TransMeta transMeta, String sname ) {\n    super( parent, (BaseStepMeta) in, transMeta, sname );\n    input = (HadoopFileInputMeta) in;\n    namedClusterService = input.getNamedClusterService();\n    input.setVariableSpace( variables );\n    firstClickOnDateLocale = true;\n  }\n\n  @Override\n  public String open() {\n    Shell parent = getParent();\n    Display display = parent.getDisplay();\n\n    shell = new Shell( parent, SWT.DIALOG_TRIM | SWT.RESIZE | SWT.MAX | SWT.MIN );\n    props.setLook( shell );\n    setShellImage( shell, input );\n\n    lsMod = e -> input.setChanged();\n\n    changed = input.hasChanged();\n\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = Const.FORM_MARGIN;\n    formLayout.marginHeight = Const.FORM_MARGIN;\n\n    shell.setLayout( formLayout );\n    shell.setText( BaseMessages.getString( PKG, \"HadoopFileInputDialog.DialogTitle\" ) );\n\n    middle = props.getMiddlePct();\n    margin = Const.MARGIN;\n\n    // Stepname line\n    wlStepname = new Label( shell, SWT.RIGHT );\n    wlStepname.setText( BaseMessages.getString( BASE_PKG, \"System.Label.StepName\" ) );\n    props.setLook( wlStepname );\n    fdlStepname = new FormData();\n    fdlStepname.left = new FormAttachment( 0, 0 );\n    fdlStepname.top = new FormAttachment( 0, margin );\n    
fdlStepname.right = new FormAttachment( middle, -margin );\n    wlStepname.setLayoutData( fdlStepname );\n    wStepname = new Text( shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wStepname.setText( stepname );\n    props.setLook( wStepname );\n    wStepname.addModifyListener( lsMod );\n    fdStepname = new FormData();\n    fdStepname.left = new FormAttachment( middle, 0 );\n    fdStepname.top = new FormAttachment( 0, margin );\n    fdStepname.right = new FormAttachment( 100, 0 );\n    wStepname.setLayoutData( fdStepname );\n\n    wTabFolder = new CTabFolder( shell, SWT.BORDER );\n    props.setLook( wTabFolder, Props.WIDGET_STYLE_TAB );\n    wTabFolder.setSimple( false );\n\n    addFilesTab();\n    addContentTab();\n    addErrorTab();\n    addFiltersTabs();\n    addFieldsTabs();\n\n    FormData fdTabFolder = new FormData();\n    fdTabFolder.left = new FormAttachment( 0, 0 );\n    fdTabFolder.top = new FormAttachment( wStepname, margin );\n    fdTabFolder.right = new FormAttachment( 100, 0 );\n    fdTabFolder.bottom = new FormAttachment( 100, -50 );\n    wTabFolder.setLayoutData( fdTabFolder );\n\n    wOK = new Button( shell, SWT.PUSH );\n    wOK.setText( BaseMessages.getString( BASE_PKG, \"System.Button.OK\" ) );\n\n    wPreview = new Button( shell, SWT.PUSH );\n    wPreview.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Preview.Button\" ) );\n\n    wCancel = new Button( shell, SWT.PUSH );\n    wCancel.setText( BaseMessages.getString( BASE_PKG, \"System.Button.Cancel\" ) );\n\n    positionBottomRightButtons( shell, new Button[] { wOK, wPreview, wCancel }, margin, wTabFolder );\n\n    // Add listeners\n    lsOK = e -> ok();\n    Listener lsFirst = e -> first( false );\n    Listener lsFirstHeader = e -> first( true );\n    lsGet = e -> get();\n    lsPreview = e -> preview();\n    lsCancel = e -> cancel();\n\n    wOK.addListener( SWT.Selection, lsOK );\n    wFirst.addListener( SWT.Selection, lsFirst );\n    wFirstHeader.addListener( SWT.Selection, 
lsFirstHeader );\n    wGet.addListener( SWT.Selection, lsGet );\n    wPreview.addListener( SWT.Selection, lsPreview );\n    wCancel.addListener( SWT.Selection, lsCancel );\n\n    lsDef = new SelectionAdapter() {\n      @Override\n      public void widgetDefaultSelected( SelectionEvent e ) {\n        ok();\n      }\n    };\n\n    wAccFilenames.addSelectionListener( lsDef );\n    wStepname.addSelectionListener( lsDef );\n    wSeparator.addSelectionListener( lsDef );\n    wLimit.addSelectionListener( lsDef );\n    wInclRownumField.addSelectionListener( lsDef );\n    wInclFilenameField.addSelectionListener( lsDef );\n    wNrHeader.addSelectionListener( lsDef );\n    wNrFooter.addSelectionListener( lsDef );\n    wNrWraps.addSelectionListener( lsDef );\n    wWarnDestDir.addSelectionListener( lsDef );\n    wWarnExt.addSelectionListener( lsDef );\n    wErrorDestDir.addSelectionListener( lsDef );\n    wErrorExt.addSelectionListener( lsDef );\n    wLineNrDestDir.addSelectionListener( lsDef );\n    wLineNrExt.addSelectionListener( lsDef );\n    wAccField.addSelectionListener( lsDef );\n\n    // Show the files that are selected at this time...\n    wbShowFiles.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        showFiles();\n      }\n    } );\n\n    // Allow the insertion of tabs as separator...\n    wbSeparator.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent se ) {\n        wSeparator.getTextWidget().insert( \"\\t\" );\n      }\n    } );\n\n    SelectionAdapter lsFlags = new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        setFlags();\n      }\n    };\n\n    // Enable/disable the right fields...\n    wInclFilename.addSelectionListener( lsFlags );\n    wInclRownum.addSelectionListener( lsFlags );\n    wRownumByFile.addSelectionListener( lsFlags );\n    
wErrorIgnored.addSelectionListener( lsFlags );\n    wHeader.addSelectionListener( lsFlags );\n    wFooter.addSelectionListener( lsFlags );\n    wWraps.addSelectionListener( lsFlags );\n    wLayoutPaged.addSelectionListener( lsFlags );\n    wAccFilenames.addSelectionListener( lsFlags );\n\n    // Detect X or ALT-F4 or something that kills this window...\n    shell.addShellListener( new ShellAdapter() {\n      @Override\n      public void shellClosed( ShellEvent e ) {\n        cancel();\n      }\n    } );\n\n    wTabFolder.setSelection( 0 );\n\n    // Set the shell size, based upon previous time...\n    getData( input );\n\n    setSize();\n\n    shell.open();\n    while ( !shell.isDisposed() ) {\n      if ( !display.readAndDispatch() ) {\n        display.sleep();\n      }\n    }\n    return stepname;\n  }\n\n  /**\n   * Replaces the password present in each file URI with '***' before displaying it in the UI.\n   *\n   * @param files List of files to be processed\n   * @return The list of files to be processed with the password replaced with '***'\n   */\n  protected String[] getFriendlyURIs( String[] files ) {\n    for ( int i = 0; i < files.length; i++ ) {\n      String userinfo = URI.create( files[ i ] ).getUserInfo();\n\n      if ( userinfo != null ) {\n        String[] credentials = userinfo.split( \":\", 2 );\n\n        if ( credentials.length == 2 ) {\n          credentials[ 1 ] = \"***\";\n          files[ i ] = files[ i ].replaceFirst( userinfo, String.join( \":\", credentials ) );\n        }\n      }\n    }\n\n    return files;\n  }\n\n  private void showFiles() {\n    HadoopFileInputMeta tfii = new HadoopFileInputMeta();\n    getInfo( tfii );\n    String[] files = tfii.getFilePaths( transMeta.getBowl(), transMeta );\n    if ( files != null && files.length > 0 ) {\n      EnterSelectionDialog esd =\n        new EnterSelectionDialog( shell, getFriendlyURIs( files ), \"Files read\", \"Files read:\" );\n      esd.setViewOnly();\n      esd.open();\n    } else {\n 
     MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n      mb.setMessage( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.NoFilesFound.DialogMessage\" ) );\n      mb.setText( ERROR_TITLE );\n      mb.open();\n    }\n  }\n\n  private void addFilesTab() {\n    // ////////////////////////\n    // START OF FILE TAB ///\n    // ////////////////////////\n\n    CTabItem wFileTab = new CTabItem( wTabFolder, SWT.NONE );\n    wFileTab.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.FileTab.TabTitle\" ) );\n\n    ScrolledComposite wFileSComp = new ScrolledComposite( wTabFolder, SWT.V_SCROLL | SWT.H_SCROLL );\n    wFileSComp.setLayout( new FillLayout() );\n\n    Composite wFileComp = new Composite( wFileSComp, SWT.NONE );\n    props.setLook( wFileComp );\n\n    FormLayout fileLayout = new FormLayout();\n    fileLayout.marginWidth = 3;\n    fileLayout.marginHeight = 3;\n    wFileComp.setLayout( fileLayout );\n\n    // Filename list line\n    wlFilenameList = new Label( wFileComp, SWT.RIGHT );\n    wlFilenameList.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.FilenameList.Label\" ) );\n    props.setLook( wlFilenameList );\n\n    FormData fdlFilenameList = new FormData();\n    fdlFilenameList.left = new FormAttachment( 0, 0 );\n    fdlFilenameList.top = new FormAttachment( wFileComp, 15 );\n    wlFilenameList.setLayoutData( fdlFilenameList );\n\n    ToolBar tb = new ToolBar( wFileComp, SWT.HORIZONTAL | SWT.FLAT );\n    props.setLook( tb );\n    FormData fdTb = new FormData();\n    fdTb.right = new FormAttachment( 100, 0 );\n    fdTb.top = new FormAttachment( wFileComp, margin );\n    tb.setLayoutData( fdTb );\n\n    ToolItem deleteToolItem = new ToolItem( tb, SWT.PUSH );\n    deleteToolItem.setImage( GUIResource.getInstance().getImageDelete() );\n    deleteToolItem\n      .setToolTipText( BaseMessages.getString( JobEntryCopyFiles.class, \"JobCopyFiles.FilenameDelete.Tooltip\" ) );\n    
deleteToolItem.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent arg0 ) {\n\n        int[] idx = wFilenameList.getSelectionIndices();\n        wFilenameList.remove( idx );\n        wFilenameList.removeEmptyRows();\n        wFilenameList.setRowNums();\n      }\n    } );\n\n    wbShowFiles = new Button( wFileComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( wbShowFiles );\n    wbShowFiles.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.ShowFiles.Button\" ) );\n\n    FormData fdbShowFiles = new FormData();\n    fdbShowFiles.left = new FormAttachment( middle, 0 );\n    fdbShowFiles.bottom = new FormAttachment( 100, 0 );\n    wbShowFiles.setLayoutData( fdbShowFiles );\n\n    wFirst = new Button( wFileComp, SWT.PUSH );\n    wFirst.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.First.Button\" ) );\n\n    FormData fdFirst = new FormData();\n    fdFirst.left = new FormAttachment( wbShowFiles, margin * 2 );\n    fdFirst.bottom = new FormAttachment( 100, 0 );\n    wFirst.setLayoutData( fdFirst );\n\n    wFirstHeader = new Button( wFileComp, SWT.PUSH );\n    wFirstHeader.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.FirstHeader.Button\" ) );\n\n    FormData fdFirstHeader = new FormData();\n    fdFirstHeader.left = new FormAttachment( wFirst, margin * 2 );\n    fdFirstHeader.bottom = new FormAttachment( 100, 0 );\n    wFirstHeader.setLayoutData( fdFirstHeader );\n\n    // Accepting filenames group\n    //\n\n    Group gAccepting = new Group( wFileComp, SWT.SHADOW_ETCHED_IN );\n    gAccepting.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.AcceptingGroup.Label\" ) );\n    FormLayout acceptingLayout = new FormLayout();\n    acceptingLayout.marginWidth = 3;\n    acceptingLayout.marginHeight = 3;\n    gAccepting.setLayout( acceptingLayout );\n    props.setLook( gAccepting );\n\n    // Accept filenames from previous steps?\n    //\n    Label 
wlAccFilenames = new Label( gAccepting, SWT.RIGHT );\n    wlAccFilenames.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.AcceptFilenames.Label\" ) );\n    props.setLook( wlAccFilenames );\n\n    FormData fdlAccFilenames = new FormData();\n    fdlAccFilenames.top = new FormAttachment( 0, margin );\n    fdlAccFilenames.left = new FormAttachment( 0, 0 );\n    fdlAccFilenames.right = new FormAttachment( middle, -margin );\n    wlAccFilenames.setLayoutData( fdlAccFilenames );\n    wAccFilenames = new Button( gAccepting, SWT.CHECK );\n    wAccFilenames.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.AcceptFilenames.Tooltip\" ) );\n    props.setLook( wAccFilenames );\n\n    FormData fdAccFilenames = new FormData();\n    fdAccFilenames.top = new FormAttachment( 0, margin );\n    fdAccFilenames.left = new FormAttachment( middle, 0 );\n    fdAccFilenames.right = new FormAttachment( 100, 0 );\n    wAccFilenames.setLayoutData( fdAccFilenames );\n\n    // Accept filenames from previous steps?\n    //\n    wlPassThruFields = new Label( gAccepting, SWT.RIGHT );\n    wlPassThruFields.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.PassThruFields.Label\" ) );\n    props.setLook( wlPassThruFields );\n\n    FormData fdlPassThruFields = new FormData();\n    fdlPassThruFields.top = new FormAttachment( wAccFilenames, margin );\n    fdlPassThruFields.left = new FormAttachment( 0, 0 );\n    fdlPassThruFields.right = new FormAttachment( middle, -margin );\n    wlPassThruFields.setLayoutData( fdlPassThruFields );\n    wPassThruFields = new Button( gAccepting, SWT.CHECK );\n    wPassThruFields.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.PassThruFields.Tooltip\" ) );\n    props.setLook( wPassThruFields );\n\n    FormData fdPassThruFields = new FormData();\n    fdPassThruFields.top = new FormAttachment( wAccFilenames, margin );\n    fdPassThruFields.left = new FormAttachment( middle, 0 );\n    
fdPassThruFields.right = new FormAttachment( 100, 0 );\n    wPassThruFields.setLayoutData( fdPassThruFields );\n\n    // Which step to read from?\n    wlAccStep = new Label( gAccepting, SWT.RIGHT );\n    wlAccStep.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.AcceptStep.Label\" ) );\n    props.setLook( wlAccStep );\n\n    FormData fdlAccStep = new FormData();\n    fdlAccStep.top = new FormAttachment( wPassThruFields, margin );\n    fdlAccStep.left = new FormAttachment( 0, 0 );\n    fdlAccStep.right = new FormAttachment( middle, -margin );\n    wlAccStep.setLayoutData( fdlAccStep );\n    wAccStep = new CCombo( gAccepting, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wAccStep.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.AcceptStep.Tooltip\" ) );\n    props.setLook( wAccStep );\n\n    FormData fdAccStep = new FormData();\n    fdAccStep.top = new FormAttachment( wPassThruFields, margin );\n    fdAccStep.left = new FormAttachment( middle, 0 );\n    fdAccStep.right = new FormAttachment( 100, 0 );\n    wAccStep.setLayoutData( fdAccStep );\n\n    // Which field?\n    //\n    wlAccField = new Label( gAccepting, SWT.RIGHT );\n    wlAccField.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.AcceptField.Label\" ) );\n    props.setLook( wlAccField );\n\n    FormData fdlAccField = new FormData();\n    fdlAccField.top = new FormAttachment( wAccStep, margin );\n    fdlAccField.left = new FormAttachment( 0, 0 );\n    fdlAccField.right = new FormAttachment( middle, -margin );\n    wlAccField.setLayoutData( fdlAccField );\n    wAccField = new Text( gAccepting, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wAccField.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.AcceptField.Tooltip\" ) );\n    props.setLook( wAccField );\n\n    FormData fdAccField = new FormData();\n    fdAccField.top = new FormAttachment( wAccStep, margin );\n    fdAccField.left = new FormAttachment( middle, 0 );\n    fdAccField.right = 
new FormAttachment( 100, 0 );\n    wAccField.setLayoutData( fdAccField );\n\n    // Fill in the source steps...\n    List<StepMeta> prevSteps = transMeta.findPreviousSteps( transMeta.findStep( stepname ) );\n    for ( StepMeta prevStep : prevSteps ) {\n      wAccStep.add( prevStep.getName() );\n    }\n\n    FormData fdAccepting = new FormData();\n    fdAccepting.left = new FormAttachment( 0, 0 );\n    fdAccepting.right = new FormAttachment( 100, 0 );\n    fdAccepting.bottom = new FormAttachment( wFirstHeader, -margin * 2 );\n    gAccepting.setLayoutData( fdAccepting );\n\n    ColumnInfo[] colinfo =\n      new ColumnInfo[] {\n        new ColumnInfo( BaseMessages.getString( PKG, \"HadoopFileInputDialog.Environment\" ),\n          ColumnInfo.COLUMN_TYPE_CCOMBO, false, true ),\n        new ColumnInfo( BaseMessages.getString( PKG, \"HadoopFileInputDialog.FileFolderColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_TEXT_BUTTON, false ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.WildcardColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_TEXT, false ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.RequiredColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_CCOMBO, YES_NO_COMBO ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.IncludeSubDirs.Column\" ),\n          ColumnInfo.COLUMN_TYPE_CCOMBO, YES_NO_COMBO ) };\n\n    setComboValues( colinfo[ 0 ] );\n    colinfo[ 1 ].setUsingVariables( true );\n    colinfo[ 1 ].setTextVarButtonSelectionListener( getFileDirectoryListener() );\n    colinfo[ 2 ].setToolTip( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.RegExpColumn.Column\" ) );\n    colinfo[ 3 ].setToolTip( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.RequiredColumn.Tooltip\" ) );\n    colinfo[ 4 ].setToolTip( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.IncludeSubDirs.Tooltip\" ) );\n\n    wFilenameList =\n      new TableView( 
transMeta, wFileComp, SWT.FULL_SELECTION | SWT.SINGLE | SWT.BORDER, colinfo, 4, lsMod, props );\n    props.setLook( wFilenameList );\n\n    FormData fdFilenameList = new FormData();\n    fdFilenameList.bottom = new FormAttachment( gAccepting, 0 );\n    fdFilenameList.right = new FormAttachment( 100, 0 );\n    fdFilenameList.left = new FormAttachment( 0, 0 );\n    fdFilenameList.top = new FormAttachment( tb, margin );\n    wFilenameList.setLayoutData( fdFilenameList );\n\n    FormData fdFileComp = new FormData();\n    fdFileComp.left = new FormAttachment( 0, 0 );\n    fdFileComp.top = new FormAttachment( 0, 0 );\n    fdFileComp.right = new FormAttachment( 100, 0 );\n    fdFileComp.bottom = new FormAttachment( 100, 0 );\n    wFileComp.setLayoutData( fdFileComp );\n\n    wFileComp.pack();\n    Rectangle bounds = wFileComp.getBounds();\n\n    wFileSComp.setContent( wFileComp );\n    wFileSComp.setExpandHorizontal( true );\n    wFileSComp.setExpandVertical( true );\n    wFileSComp.setMinWidth( bounds.width );\n    wFileSComp.setMinHeight( bounds.height );\n\n    wFileTab.setControl( wFileSComp );\n\n    // ///////////////////////////////////////////////////////////\n    // / END OF FILE TAB\n    // ///////////////////////////////////////////////////////////\n  }\n\n  private void addContentTab() {\n    // ////////////////////////\n    // START OF CONTENT TAB///\n    // /\n    CTabItem wContentTab = new CTabItem( wTabFolder, SWT.NONE );\n    wContentTab.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.ContentTab.TabTitle\" ) );\n\n    FormLayout contentLayout = new FormLayout();\n    contentLayout.marginWidth = 3;\n    contentLayout.marginHeight = 3;\n\n    ScrolledComposite wContentSComp = new ScrolledComposite( wTabFolder, SWT.V_SCROLL | SWT.H_SCROLL );\n    wContentSComp.setLayout( new FillLayout() );\n\n    Composite wContentComp = new Composite( wContentSComp, SWT.NONE );\n    props.setLook( wContentComp );\n    wContentComp.setLayout( contentLayout 
);\n\n    // Filetype line\n    Label wlFiletype = new Label( wContentComp, SWT.RIGHT );\n    wlFiletype.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Filetype.Label\" ) );\n    props.setLook( wlFiletype );\n\n    FormData fdlFiletype = new FormData();\n    fdlFiletype.left = new FormAttachment( 0, 0 );\n    fdlFiletype.top = new FormAttachment( 0, 0 );\n    fdlFiletype.right = new FormAttachment( middle, -margin );\n    wlFiletype.setLayoutData( fdlFiletype );\n    wFiletype = new CCombo( wContentComp, SWT.BORDER | SWT.READ_ONLY );\n    wFiletype.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Filetype.Label\" ) );\n    props.setLook( wFiletype );\n    wFiletype.add( \"CSV\" );\n    wFiletype.add( \"Fixed\" );\n    wFiletype.select( 0 );\n    wFiletype.addModifyListener( lsMod );\n\n    FormData fdFiletype = new FormData();\n    fdFiletype.left = new FormAttachment( middle, 0 );\n    fdFiletype.top = new FormAttachment( 0, 0 );\n    fdFiletype.right = new FormAttachment( 100, 0 );\n    wFiletype.setLayoutData( fdFiletype );\n\n    Label wlSeparator = new Label( wContentComp, SWT.RIGHT );\n    wlSeparator.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Separator.Label\" ) );\n    props.setLook( wlSeparator );\n\n    FormData fdlSeparator = new FormData();\n    fdlSeparator.left = new FormAttachment( 0, 0 );\n    fdlSeparator.top = new FormAttachment( wFiletype, margin );\n    fdlSeparator.right = new FormAttachment( middle, -margin );\n    wlSeparator.setLayoutData( fdlSeparator );\n\n    wbSeparator = new Button( wContentComp, SWT.PUSH | SWT.CENTER );\n    wbSeparator.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Delimiter.Button\" ) );\n    props.setLook( wbSeparator );\n\n    FormData fdbSeparator = new FormData();\n    fdbSeparator.right = new FormAttachment( 100, 0 );\n    fdbSeparator.top = new FormAttachment( wFiletype, 0 );\n    wbSeparator.setLayoutData( fdbSeparator );\n    wSeparator 
= new TextVar( transMeta, wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wSeparator );\n    wSeparator.addModifyListener( lsMod );\n\n    FormData fdSeparator = new FormData();\n    fdSeparator.top = new FormAttachment( wFiletype, margin );\n    fdSeparator.left = new FormAttachment( middle, 0 );\n    fdSeparator.right = new FormAttachment( wbSeparator, -margin );\n    wSeparator.setLayoutData( fdSeparator );\n\n    // Enclosure\n    Label wlEnclosure = new Label( wContentComp, SWT.RIGHT );\n    wlEnclosure.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Enclosure.Label\" ) );\n    props.setLook( wlEnclosure );\n\n    FormData fdlEnclosure = new FormData();\n    fdlEnclosure.left = new FormAttachment( 0, 0 );\n    fdlEnclosure.top = new FormAttachment( wSeparator, margin );\n    fdlEnclosure.right = new FormAttachment( middle, -margin );\n    wlEnclosure.setLayoutData( fdlEnclosure );\n    wEnclosure = new Text( wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wEnclosure );\n    wEnclosure.addModifyListener( lsMod );\n\n    FormData fdEnclosure = new FormData();\n    fdEnclosure.left = new FormAttachment( middle, 0 );\n    fdEnclosure.top = new FormAttachment( wSeparator, margin );\n    fdEnclosure.right = new FormAttachment( 100, 0 );\n    wEnclosure.setLayoutData( fdEnclosure );\n\n    // Allow Enclosure breaks checkbox\n    Label wlEnclBreaks = new Label( wContentComp, SWT.RIGHT );\n    wlEnclBreaks.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.EnclBreaks.Label\" ) );\n    props.setLook( wlEnclBreaks );\n\n    FormData fdlEnclBreaks = new FormData();\n    fdlEnclBreaks.left = new FormAttachment( 0, 0 );\n    fdlEnclBreaks.top = new FormAttachment( wEnclosure, margin );\n    fdlEnclBreaks.right = new FormAttachment( middle, -margin );\n    wlEnclBreaks.setLayoutData( fdlEnclBreaks );\n\n    Button wEnclBreaks = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wEnclBreaks 
);\n\n    FormData fdEnclBreaks = new FormData();\n    fdEnclBreaks.left = new FormAttachment( middle, 0 );\n    fdEnclBreaks.top = new FormAttachment( wEnclosure, margin );\n    wEnclBreaks.setLayoutData( fdEnclBreaks );\n\n    // Disable until the logic works...\n    wlEnclBreaks.setEnabled( false );\n    wEnclBreaks.setEnabled( false );\n\n    // Escape\n    Label wlEscape = new Label( wContentComp, SWT.RIGHT );\n    wlEscape.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Escape.Label\" ) );\n    props.setLook( wlEscape );\n\n    FormData fdlEscape = new FormData();\n    fdlEscape.left = new FormAttachment( 0, 0 );\n    fdlEscape.top = new FormAttachment( wEnclBreaks, margin );\n    fdlEscape.right = new FormAttachment( middle, -margin );\n    wlEscape.setLayoutData( fdlEscape );\n    wEscape = new Text( wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wEscape );\n    wEscape.addModifyListener( lsMod );\n\n    FormData fdEscape = new FormData();\n    fdEscape.left = new FormAttachment( middle, 0 );\n    fdEscape.top = new FormAttachment( wEnclBreaks, margin );\n    fdEscape.right = new FormAttachment( 100, 0 );\n    wEscape.setLayoutData( fdEscape );\n\n    // Header checkbox\n    Label wlHeader = new Label( wContentComp, SWT.RIGHT );\n    wlHeader.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Header.Label\" ) );\n    props.setLook( wlHeader );\n\n    FormData fdlHeader = new FormData();\n    fdlHeader.left = new FormAttachment( 0, 0 );\n    fdlHeader.top = new FormAttachment( wEscape, margin );\n    fdlHeader.right = new FormAttachment( middle, -margin );\n    wlHeader.setLayoutData( fdlHeader );\n    wHeader = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wHeader );\n\n    FormData fdHeader = new FormData();\n    fdHeader.left = new FormAttachment( middle, 0 );\n    fdHeader.top = new FormAttachment( wEscape, margin );\n    wHeader.setLayoutData( fdHeader );\n\n    // NrHeader\n    
wlNrHeader = new Label( wContentComp, SWT.RIGHT );\n    wlNrHeader.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.NrHeader.Label\" ) );\n    props.setLook( wlNrHeader );\n\n    FormData fdlNrHeader = new FormData();\n    fdlNrHeader.left = new FormAttachment( wHeader, margin );\n    fdlNrHeader.top = new FormAttachment( wEscape, margin );\n    wlNrHeader.setLayoutData( fdlNrHeader );\n    wNrHeader = new Text( wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wNrHeader.setTextLimit( 3 );\n    props.setLook( wNrHeader );\n    wNrHeader.addModifyListener( lsMod );\n\n    FormData fdNrHeader = new FormData();\n    fdNrHeader.left = new FormAttachment( wlNrHeader, margin );\n    fdNrHeader.top = new FormAttachment( wEscape, margin );\n    fdNrHeader.right = new FormAttachment( 100, 0 );\n    wNrHeader.setLayoutData( fdNrHeader );\n\n    Label wlFooter = new Label( wContentComp, SWT.RIGHT );\n    wlFooter.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Footer.Label\" ) );\n    props.setLook( wlFooter );\n\n    FormData fdlFooter = new FormData();\n    fdlFooter.left = new FormAttachment( 0, 0 );\n    fdlFooter.top = new FormAttachment( wHeader, margin );\n    fdlFooter.right = new FormAttachment( middle, -margin );\n    wlFooter.setLayoutData( fdlFooter );\n    wFooter = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wFooter );\n\n    FormData fdFooter = new FormData();\n    fdFooter.left = new FormAttachment( middle, 0 );\n    fdFooter.top = new FormAttachment( wHeader, margin );\n    wFooter.setLayoutData( fdFooter );\n\n    // NrFooter\n    wlNrFooter = new Label( wContentComp, SWT.RIGHT );\n    wlNrFooter.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.NrFooter.Label\" ) );\n    props.setLook( wlNrFooter );\n\n    FormData fdlNrFooter = new FormData();\n    fdlNrFooter.left = new FormAttachment( wFooter, margin );\n    fdlNrFooter.top = new FormAttachment( wHeader, margin );\n    
wlNrFooter.setLayoutData( fdlNrFooter );\n    wNrFooter = new Text( wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wNrFooter.setTextLimit( 3 );\n    props.setLook( wNrFooter );\n    wNrFooter.addModifyListener( lsMod );\n\n    FormData fdNrFooter = new FormData();\n    fdNrFooter.left = new FormAttachment( wlNrFooter, margin );\n    fdNrFooter.top = new FormAttachment( wHeader, margin );\n    fdNrFooter.right = new FormAttachment( 100, 0 );\n    wNrFooter.setLayoutData( fdNrFooter );\n\n    // Wraps\n    Label wlWraps = new Label( wContentComp, SWT.RIGHT );\n    wlWraps.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Wraps.Label\" ) );\n    props.setLook( wlWraps );\n\n    FormData fdlWraps = new FormData();\n    fdlWraps.left = new FormAttachment( 0, 0 );\n    fdlWraps.top = new FormAttachment( wFooter, margin );\n    fdlWraps.right = new FormAttachment( middle, -margin );\n    wlWraps.setLayoutData( fdlWraps );\n    wWraps = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wWraps );\n\n    FormData fdWraps = new FormData();\n    fdWraps.left = new FormAttachment( middle, 0 );\n    fdWraps.top = new FormAttachment( wFooter, margin );\n    wWraps.setLayoutData( fdWraps );\n\n    // NrWraps\n    wlNrWraps = new Label( wContentComp, SWT.RIGHT );\n    wlNrWraps.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.NrWraps.Label\" ) );\n    props.setLook( wlNrWraps );\n\n    FormData fdlNrWraps = new FormData();\n    fdlNrWraps.left = new FormAttachment( wWraps, margin );\n    fdlNrWraps.top = new FormAttachment( wFooter, margin );\n    wlNrWraps.setLayoutData( fdlNrWraps );\n    wNrWraps = new Text( wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wNrWraps.setTextLimit( 3 );\n    props.setLook( wNrWraps );\n    wNrWraps.addModifyListener( lsMod );\n\n    FormData fdNrWraps = new FormData();\n    fdNrWraps.left = new FormAttachment( wlNrWraps, margin );\n    fdNrWraps.top = new FormAttachment( wFooter, margin );\n 
   fdNrWraps.right = new FormAttachment( 100, 0 );\n    wNrWraps.setLayoutData( fdNrWraps );\n\n    // Pages\n    Label wlLayoutPaged = new Label( wContentComp, SWT.RIGHT );\n    wlLayoutPaged.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.LayoutPaged.Label\" ) );\n    props.setLook( wlLayoutPaged );\n\n    FormData fdlLayoutPaged = new FormData();\n    fdlLayoutPaged.left = new FormAttachment( 0, 0 );\n    fdlLayoutPaged.top = new FormAttachment( wWraps, margin );\n    fdlLayoutPaged.right = new FormAttachment( middle, -margin );\n    wlLayoutPaged.setLayoutData( fdlLayoutPaged );\n    wLayoutPaged = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wLayoutPaged );\n\n    FormData fdLayoutPaged = new FormData();\n    fdLayoutPaged.left = new FormAttachment( middle, 0 );\n    fdLayoutPaged.top = new FormAttachment( wWraps, margin );\n    wLayoutPaged.setLayoutData( fdLayoutPaged );\n\n    // Nr of lines per page\n    wlNrLinesPerPage = new Label( wContentComp, SWT.RIGHT );\n    wlNrLinesPerPage.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.NrLinesPerPage.Label\" ) );\n    props.setLook( wlNrLinesPerPage );\n\n    FormData fdlNrLinesPerPage = new FormData();\n    fdlNrLinesPerPage.left = new FormAttachment( wLayoutPaged, margin );\n    fdlNrLinesPerPage.top = new FormAttachment( wWraps, margin );\n    wlNrLinesPerPage.setLayoutData( fdlNrLinesPerPage );\n    wNrLinesPerPage = new Text( wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wNrLinesPerPage.setTextLimit( 3 );\n    props.setLook( wNrLinesPerPage );\n    wNrLinesPerPage.addModifyListener( lsMod );\n\n    FormData fdNrLinesPerPage = new FormData();\n    fdNrLinesPerPage.left = new FormAttachment( wlNrLinesPerPage, margin );\n    fdNrLinesPerPage.top = new FormAttachment( wWraps, margin );\n    fdNrLinesPerPage.right = new FormAttachment( 100, 0 );\n    wNrLinesPerPage.setLayoutData( fdNrLinesPerPage );\n\n    // Nr of lines in the document header\n    wlNrLinesDocHeader = new Label( 
wContentComp, SWT.RIGHT );\n    wlNrLinesDocHeader.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.NrLinesDocHeader.Label\" ) );\n    props.setLook( wlNrLinesDocHeader );\n\n    FormData fdlNrLinesDocHeader = new FormData();\n    fdlNrLinesDocHeader.left = new FormAttachment( wLayoutPaged, margin );\n    fdlNrLinesDocHeader.top = new FormAttachment( wNrLinesPerPage, margin );\n    wlNrLinesDocHeader.setLayoutData( fdlNrLinesDocHeader );\n    wNrLinesDocHeader = new Text( wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wNrLinesDocHeader.setTextLimit( 3 );\n    props.setLook( wNrLinesDocHeader );\n    wNrLinesDocHeader.addModifyListener( lsMod );\n\n    FormData fdNrLinesDocHeader = new FormData();\n    fdNrLinesDocHeader.left = new FormAttachment( wlNrLinesPerPage, margin );\n    fdNrLinesDocHeader.top = new FormAttachment( wNrLinesPerPage, margin );\n    fdNrLinesDocHeader.right = new FormAttachment( 100, 0 );\n    wNrLinesDocHeader.setLayoutData( fdNrLinesDocHeader );\n\n    // Compression type (None, Zip or GZip)\n    Label wlCompression = new Label( wContentComp, SWT.RIGHT );\n    wlCompression.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Compression.Label\" ) );\n    props.setLook( wlCompression );\n\n    FormData fdlCompression = new FormData();\n    fdlCompression.left = new FormAttachment( 0, 0 );\n    fdlCompression.top = new FormAttachment( wNrLinesDocHeader, margin );\n    fdlCompression.right = new FormAttachment( middle, -margin );\n    wlCompression.setLayoutData( fdlCompression );\n    wCompression = new CCombo( wContentComp, SWT.BORDER | SWT.READ_ONLY );\n    wCompression.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Compression.Label\" ) );\n    wCompression.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Compression.Tooltip\" ) );\n    props.setLook( wCompression );\n    wCompression.setItems( 
CompressionProviderFactory.getInstance().getCompressionProviderNames() );\n    wCompression.addModifyListener( lsMod );\n\n    FormData fdCompression = new FormData();\n    fdCompression.left = new FormAttachment( middle, 0 );\n    fdCompression.top = new FormAttachment( wNrLinesDocHeader, margin );\n    fdCompression.right = new FormAttachment( 100, 0 );\n    wCompression.setLayoutData( fdCompression );\n\n    Label wlNoempty = new Label( wContentComp, SWT.RIGHT );\n    wlNoempty.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.NoEmpty.Label\" ) );\n    props.setLook( wlNoempty );\n\n    FormData fdlNoempty = new FormData();\n    fdlNoempty.left = new FormAttachment( 0, 0 );\n    fdlNoempty.top = new FormAttachment( wCompression, margin );\n    fdlNoempty.right = new FormAttachment( middle, -margin );\n    wlNoempty.setLayoutData( fdlNoempty );\n    wNoempty = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wNoempty );\n    wNoempty.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.NoEmpty.Tooltip\" ) );\n\n    FormData fdNoempty = new FormData();\n    fdNoempty.left = new FormAttachment( middle, 0 );\n    fdNoempty.top = new FormAttachment( wCompression, margin );\n    fdNoempty.right = new FormAttachment( 100, 0 );\n    wNoempty.setLayoutData( fdNoempty );\n\n    Label wlInclFilename = new Label( wContentComp, SWT.RIGHT );\n    wlInclFilename.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.InclFilename.Label\" ) );\n    props.setLook( wlInclFilename );\n\n    FormData fdlInclFilename = new FormData();\n    fdlInclFilename.left = new FormAttachment( 0, 0 );\n    fdlInclFilename.top = new FormAttachment( wNoempty, margin );\n    fdlInclFilename.right = new FormAttachment( middle, -margin );\n    wlInclFilename.setLayoutData( fdlInclFilename );\n    wInclFilename = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wInclFilename );\n    wInclFilename.setToolTipText( BaseMessages.getString( 
BASE_PKG, \"TextFileInputDialog.InclFilename.Tooltip\" ) );\n\n    FormData fdInclFilename = new FormData();\n    fdInclFilename.left = new FormAttachment( middle, 0 );\n    fdInclFilename.top = new FormAttachment( wNoempty, margin );\n    wInclFilename.setLayoutData( fdInclFilename );\n\n    wlInclFilenameField = new Label( wContentComp, SWT.LEFT );\n    wlInclFilenameField.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.InclFilenameField.Label\" ) );\n    props.setLook( wlInclFilenameField );\n\n    FormData fdlInclFilenameField = new FormData();\n    fdlInclFilenameField.left = new FormAttachment( wInclFilename, margin );\n    fdlInclFilenameField.top = new FormAttachment( wNoempty, margin );\n    wlInclFilenameField.setLayoutData( fdlInclFilenameField );\n    wInclFilenameField = new Text( wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wInclFilenameField );\n    wInclFilenameField.addModifyListener( lsMod );\n\n    FormData fdInclFilenameField = new FormData();\n    fdInclFilenameField.left = new FormAttachment( wlInclFilenameField, margin );\n    fdInclFilenameField.top = new FormAttachment( wNoempty, margin );\n    fdInclFilenameField.right = new FormAttachment( 100, 0 );\n    wInclFilenameField.setLayoutData( fdInclFilenameField );\n\n    Label wlInclRownum = new Label( wContentComp, SWT.RIGHT );\n    wlInclRownum.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.InclRownum.Label\" ) );\n    props.setLook( wlInclRownum );\n\n    FormData fdlInclRownum = new FormData();\n    fdlInclRownum.left = new FormAttachment( 0, 0 );\n    fdlInclRownum.top = new FormAttachment( wInclFilenameField, margin );\n    fdlInclRownum.right = new FormAttachment( middle, -margin );\n    wlInclRownum.setLayoutData( fdlInclRownum );\n    wInclRownum = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wInclRownum );\n    wInclRownum.setToolTipText( BaseMessages.getString( BASE_PKG, 
\"TextFileInputDialog.InclRownum.Tooltip\" ) );\n\n    FormData fdRownum = new FormData();\n    fdRownum.left = new FormAttachment( middle, 0 );\n    fdRownum.top = new FormAttachment( wInclFilenameField, margin );\n    wInclRownum.setLayoutData( fdRownum );\n\n    wlInclRownumField = new Label( wContentComp, SWT.RIGHT );\n    wlInclRownumField.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.InclRownumField.Label\" ) );\n    props.setLook( wlInclRownumField );\n\n    FormData fdlInclRownumField = new FormData();\n    fdlInclRownumField.left = new FormAttachment( wInclRownum, margin );\n    fdlInclRownumField.top = new FormAttachment( wInclFilenameField, margin );\n    wlInclRownumField.setLayoutData( fdlInclRownumField );\n    wInclRownumField = new Text( wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wInclRownumField );\n    wInclRownumField.addModifyListener( lsMod );\n\n    FormData fdInclRownumField = new FormData();\n    fdInclRownumField.left = new FormAttachment( wlInclRownumField, margin );\n    fdInclRownumField.top = new FormAttachment( wInclFilenameField, margin );\n    fdInclRownumField.right = new FormAttachment( 100, 0 );\n    wInclRownumField.setLayoutData( fdInclRownumField );\n\n    wlRownumByFileField = new Label( wContentComp, SWT.RIGHT );\n    wlRownumByFileField.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.RownumByFile.Label\" ) );\n    props.setLook( wlRownumByFileField );\n\n    FormData fdlRownumByFile = new FormData();\n    fdlRownumByFile.left = new FormAttachment( wInclRownum, margin );\n    fdlRownumByFile.top = new FormAttachment( wInclRownumField, margin );\n    wlRownumByFileField.setLayoutData( fdlRownumByFile );\n    wRownumByFile = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wRownumByFile );\n    wRownumByFile.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.RownumByFile.Tooltip\" ) );\n\n    FormData fdRownumByFile = new 
FormData();\n    fdRownumByFile.left = new FormAttachment( wlRownumByFileField, margin );\n    fdRownumByFile.top = new FormAttachment( wInclRownumField, margin );\n    wRownumByFile.setLayoutData( fdRownumByFile );\n\n    Label wlFormat = new Label( wContentComp, SWT.RIGHT );\n    wlFormat.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Format.Label\" ) );\n    props.setLook( wlFormat );\n\n    FormData fdlFormat = new FormData();\n    fdlFormat.left = new FormAttachment( 0, 0 );\n    fdlFormat.top = new FormAttachment( wRownumByFile, margin * 2 );\n    fdlFormat.right = new FormAttachment( middle, -margin );\n    wlFormat.setLayoutData( fdlFormat );\n    wFormat = new CCombo( wContentComp, SWT.BORDER | SWT.READ_ONLY );\n    wFormat.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Format.Label\" ) );\n    props.setLook( wFormat );\n    wFormat.add( \"DOS\" );\n    wFormat.add( \"Unix\" );\n    wFormat.add( \"mixed\" );\n    wFormat.select( 0 );\n    wFormat.addModifyListener( lsMod );\n\n    FormData fdFormat = new FormData();\n    fdFormat.left = new FormAttachment( middle, 0 );\n    fdFormat.top = new FormAttachment( wRownumByFile, margin * 2 );\n    fdFormat.right = new FormAttachment( 100, 0 );\n    wFormat.setLayoutData( fdFormat );\n\n    Label wlEncoding = new Label( wContentComp, SWT.RIGHT );\n    wlEncoding.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Encoding.Label\" ) );\n    props.setLook( wlEncoding );\n\n    FormData fdlEncoding = new FormData();\n    fdlEncoding.left = new FormAttachment( 0, 0 );\n    fdlEncoding.top = new FormAttachment( wFormat, margin );\n    fdlEncoding.right = new FormAttachment( middle, -margin );\n    wlEncoding.setLayoutData( fdlEncoding );\n    wEncoding = new CCombo( wContentComp, SWT.BORDER | SWT.READ_ONLY );\n    wEncoding.setEditable( true );\n    props.setLook( wEncoding );\n    wEncoding.addModifyListener( lsMod );\n\n    FormData fdEncoding = new FormData();\n    
fdEncoding.left = new FormAttachment( middle, 0 );\n    fdEncoding.top = new FormAttachment( wFormat, margin );\n    fdEncoding.right = new FormAttachment( 100, 0 );\n    wEncoding.setLayoutData( fdEncoding );\n    wEncoding.addFocusListener( new FocusListener() {\n      @Override\n      public void focusLost( org.eclipse.swt.events.FocusEvent e ) {\n        // No-Op Necessary\n      }\n\n      @Override\n      public void focusGained( org.eclipse.swt.events.FocusEvent e ) {\n        Cursor busy = new Cursor( shell.getDisplay(), SWT.CURSOR_WAIT );\n        shell.setCursor( busy );\n        setEncodings();\n        shell.setCursor( null );\n        busy.dispose();\n      }\n    } );\n\n    Label wlLimit = new Label( wContentComp, SWT.RIGHT );\n    wlLimit.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Limit.Label\" ) );\n    props.setLook( wlLimit );\n\n    FormData fdlLimit = new FormData();\n    fdlLimit.left = new FormAttachment( 0, 0 );\n    fdlLimit.top = new FormAttachment( wEncoding, margin );\n    fdlLimit.right = new FormAttachment( middle, -margin );\n    wlLimit.setLayoutData( fdlLimit );\n    wLimit = new Text( wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wLimit );\n    wLimit.addModifyListener( lsMod );\n\n    FormData fdLimit = new FormData();\n    fdLimit.left = new FormAttachment( middle, 0 );\n    fdLimit.top = new FormAttachment( wEncoding, margin );\n    fdLimit.right = new FormAttachment( 100, 0 );\n    wLimit.setLayoutData( fdLimit );\n\n    // Date Lenient checkbox\n    Label wlDateLenient = new Label( wContentComp, SWT.RIGHT );\n    wlDateLenient.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.DateLenient.Label\" ) );\n    props.setLook( wlDateLenient );\n\n    FormData fdlDateLenient = new FormData();\n    fdlDateLenient.left = new FormAttachment( 0, 0 );\n    fdlDateLenient.top = new FormAttachment( wLimit, margin );\n    fdlDateLenient.right = new FormAttachment( middle, -margin 
);\n    wlDateLenient.setLayoutData( fdlDateLenient );\n    wDateLenient = new Button( wContentComp, SWT.CHECK );\n    wDateLenient.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.DateLenient.Tooltip\" ) );\n    props.setLook( wDateLenient );\n\n    FormData fdDateLenient = new FormData();\n    fdDateLenient.left = new FormAttachment( middle, 0 );\n    fdDateLenient.top = new FormAttachment( wLimit, margin );\n    wDateLenient.setLayoutData( fdDateLenient );\n\n    Label wlDateLocale = new Label( wContentComp, SWT.RIGHT );\n    wlDateLocale.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.DateLocale.Label\" ) );\n    props.setLook( wlDateLocale );\n\n    FormData fdlDateLocale = new FormData();\n    fdlDateLocale.left = new FormAttachment( 0, 0 );\n    fdlDateLocale.top = new FormAttachment( wDateLenient, margin );\n    fdlDateLocale.right = new FormAttachment( middle, -margin );\n    wlDateLocale.setLayoutData( fdlDateLocale );\n    wDateLocale = new CCombo( wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wDateLocale.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.DateLocale.Tooltip\" ) );\n    props.setLook( wDateLocale );\n    wDateLocale.addModifyListener( lsMod );\n\n    FormData fdDateLocale = new FormData();\n    fdDateLocale.left = new FormAttachment( middle, 0 );\n    fdDateLocale.top = new FormAttachment( wDateLenient, margin );\n    fdDateLocale.right = new FormAttachment( 100, 0 );\n    wDateLocale.setLayoutData( fdDateLocale );\n    wDateLocale.addFocusListener( new FocusListener() {\n      @Override\n      public void focusLost( org.eclipse.swt.events.FocusEvent e ) {\n        // No-Op Necessary\n      }\n\n      @Override\n      public void focusGained( org.eclipse.swt.events.FocusEvent e ) {\n        Cursor busy = new Cursor( shell.getDisplay(), SWT.CURSOR_WAIT );\n        shell.setCursor( busy );\n        setLocales();\n        shell.setCursor( null );\n        
busy.dispose();\n      }\n    } );\n\n    // ///////////////////////////////\n    // START OF AddFileResult GROUP //\n    // ///////////////////////////////\n\n    Group wAddFileResult = new Group( wContentComp, SWT.SHADOW_NONE );\n    props.setLook( wAddFileResult );\n    wAddFileResult.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.wAddFileResult.Label\" ) );\n\n    FormLayout addFileResultgroupLayout = new FormLayout();\n    addFileResultgroupLayout.marginWidth = 10;\n    addFileResultgroupLayout.marginHeight = 10;\n    wAddFileResult.setLayout( addFileResultgroupLayout );\n\n    Label wlAddResult = new Label( wAddFileResult, SWT.RIGHT );\n    wlAddResult.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.AddResult.Label\" ) );\n    props.setLook( wlAddResult );\n\n    FormData fdlAddResult = new FormData();\n    fdlAddResult.left = new FormAttachment( 0, 0 );\n    fdlAddResult.top = new FormAttachment( wDateLocale, margin );\n    fdlAddResult.right = new FormAttachment( middle, -margin );\n    wlAddResult.setLayoutData( fdlAddResult );\n    wAddResult = new Button( wAddFileResult, SWT.CHECK );\n    props.setLook( wAddResult );\n    wAddResult.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.AddResult.Tooltip\" ) );\n\n    FormData fdAddResult = new FormData();\n    fdAddResult.left = new FormAttachment( middle, 0 );\n    fdAddResult.top = new FormAttachment( wDateLocale, margin );\n    wAddResult.setLayoutData( fdAddResult );\n\n    FormData fdAddFileResult = new FormData();\n    fdAddFileResult.left = new FormAttachment( 0, margin );\n    fdAddFileResult.top = new FormAttachment( wDateLocale, margin );\n    fdAddFileResult.right = new FormAttachment( 100, -margin );\n    wAddFileResult.setLayoutData( fdAddFileResult );\n\n    // ///////////////////////////////////////////////////////////\n    // / END OF AddFileResult GROUP\n    // ///////////////////////////////////////////////////////////\n\n    
wContentComp.pack();\n    // What's the size:\n    Rectangle bounds = wContentComp.getBounds();\n\n    wContentSComp.setContent( wContentComp );\n    wContentSComp.setExpandHorizontal( true );\n    wContentSComp.setExpandVertical( true );\n    wContentSComp.setMinWidth( bounds.width );\n    wContentSComp.setMinHeight( bounds.height );\n\n    FormData fdContentComp = new FormData();\n    fdContentComp.left = new FormAttachment( 0, 0 );\n    fdContentComp.top = new FormAttachment( 0, 0 );\n    fdContentComp.right = new FormAttachment( 100, 0 );\n    fdContentComp.bottom = new FormAttachment( 100, 0 );\n    wContentComp.setLayoutData( fdContentComp );\n\n    wContentTab.setControl( wContentSComp );\n\n    // ///////////////////////////////////////////////////////////\n    // / END OF CONTENT TAB\n    // ///////////////////////////////////////////////////////////\n\n  }\n\n  protected void setLocales() {\n    Locale[] locale = Locale.getAvailableLocales();\n    String[] dateLocale = new String[ locale.length ];\n    for ( int i = 0; i < locale.length; i++ ) {\n      dateLocale[ i ] = locale[ i ].toString();\n    }\n    wDateLocale.setItems( dateLocale );\n  }\n\n  private void addErrorTab() {\n    // ////////////////////////\n    // START OF ERROR TAB ///\n    // /\n    CTabItem wErrorTab = new CTabItem( wTabFolder, SWT.NONE );\n    wErrorTab.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.ErrorTab.TabTitle\" ) );\n\n    ScrolledComposite wErrorSComp = new ScrolledComposite( wTabFolder, SWT.V_SCROLL | SWT.H_SCROLL );\n    wErrorSComp.setLayout( new FillLayout() );\n\n    FormLayout errorLayout = new FormLayout();\n    errorLayout.marginWidth = 3;\n    errorLayout.marginHeight = 3;\n\n    Composite wErrorComp = new Composite( wErrorSComp, SWT.NONE );\n    props.setLook( wErrorComp );\n    wErrorComp.setLayout( errorLayout );\n\n    // ERROR HANDLING...\n    // ErrorIgnored?\n    Label wlErrorIgnored = new Label( wErrorComp, SWT.RIGHT );\n    
wlErrorIgnored.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.ErrorIgnored.Label\" ) );\n    props.setLook( wlErrorIgnored );\n\n    FormData fdlErrorIgnored = new FormData();\n    fdlErrorIgnored.left = new FormAttachment( 0, 0 );\n    fdlErrorIgnored.top = new FormAttachment( 0, margin );\n    fdlErrorIgnored.right = new FormAttachment( middle, -margin );\n    wlErrorIgnored.setLayoutData( fdlErrorIgnored );\n    wErrorIgnored = new Button( wErrorComp, SWT.CHECK );\n    props.setLook( wErrorIgnored );\n    wErrorIgnored.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.ErrorIgnored.Tooltip\" ) );\n\n    FormData fdErrorIgnored = new FormData();\n    fdErrorIgnored.left = new FormAttachment( middle, 0 );\n    fdErrorIgnored.top = new FormAttachment( 0, margin );\n    wErrorIgnored.setLayoutData( fdErrorIgnored );\n\n    // Skip error lines?\n    wlSkipErrorLines = new Label( wErrorComp, SWT.RIGHT );\n    wlSkipErrorLines.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.SkipErrorLines.Label\" ) );\n    props.setLook( wlSkipErrorLines );\n\n    FormData fdlSkipErrorLines = new FormData();\n    fdlSkipErrorLines.left = new FormAttachment( 0, 0 );\n    fdlSkipErrorLines.top = new FormAttachment( wErrorIgnored, margin );\n    fdlSkipErrorLines.right = new FormAttachment( middle, -margin );\n    wlSkipErrorLines.setLayoutData( fdlSkipErrorLines );\n    wSkipErrorLines = new Button( wErrorComp, SWT.CHECK );\n    props.setLook( wSkipErrorLines );\n    wSkipErrorLines.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.SkipErrorLines.Tooltip\" ) );\n\n    FormData fdSkipErrorLines = new FormData();\n    fdSkipErrorLines.left = new FormAttachment( middle, 0 );\n    fdSkipErrorLines.top = new FormAttachment( wErrorIgnored, margin );\n    wSkipErrorLines.setLayoutData( fdSkipErrorLines );\n\n    wlErrorCount = new Label( wErrorComp, SWT.RIGHT );\n    wlErrorCount.setText( BaseMessages.getString( 
BASE_PKG, \"TextFileInputDialog.ErrorCount.Label\" ) );\n    props.setLook( wlErrorCount );\n\n    FormData fdlErrorCount = new FormData();\n    fdlErrorCount.left = new FormAttachment( 0, 0 );\n    fdlErrorCount.top = new FormAttachment( wSkipErrorLines, margin );\n    fdlErrorCount.right = new FormAttachment( middle, -margin );\n    wlErrorCount.setLayoutData( fdlErrorCount );\n    wErrorCount = new Text( wErrorComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wErrorCount );\n    wErrorCount.addModifyListener( lsMod );\n\n    FormData fdErrorCount = new FormData();\n    fdErrorCount.left = new FormAttachment( middle, 0 );\n    fdErrorCount.top = new FormAttachment( wSkipErrorLines, margin );\n    fdErrorCount.right = new FormAttachment( 100, 0 );\n    wErrorCount.setLayoutData( fdErrorCount );\n\n    wlErrorFields = new Label( wErrorComp, SWT.RIGHT );\n    wlErrorFields.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.ErrorFields.Label\" ) );\n    props.setLook( wlErrorFields );\n\n    FormData fdlErrorFields = new FormData();\n    fdlErrorFields.left = new FormAttachment( 0, 0 );\n    fdlErrorFields.top = new FormAttachment( wErrorCount, margin );\n    fdlErrorFields.right = new FormAttachment( middle, -margin );\n    wlErrorFields.setLayoutData( fdlErrorFields );\n    wErrorFields = new Text( wErrorComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wErrorFields );\n    wErrorFields.addModifyListener( lsMod );\n\n    FormData fdErrorFields = new FormData();\n    fdErrorFields.left = new FormAttachment( middle, 0 );\n    fdErrorFields.top = new FormAttachment( wErrorCount, margin );\n    fdErrorFields.right = new FormAttachment( 100, 0 );\n    wErrorFields.setLayoutData( fdErrorFields );\n\n    wlErrorText = new Label( wErrorComp, SWT.RIGHT );\n    wlErrorText.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.ErrorText.Label\" ) );\n    props.setLook( wlErrorText );\n\n    FormData fdlErrorText = new 
FormData();\n    fdlErrorText.left = new FormAttachment( 0, 0 );\n    fdlErrorText.top = new FormAttachment( wErrorFields, margin );\n    fdlErrorText.right = new FormAttachment( middle, -margin );\n    wlErrorText.setLayoutData( fdlErrorText );\n    wErrorText = new Text( wErrorComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wErrorText );\n    wErrorText.addModifyListener( lsMod );\n\n    FormData fdErrorText = new FormData();\n    fdErrorText.left = new FormAttachment( middle, 0 );\n    fdErrorText.top = new FormAttachment( wErrorFields, margin );\n    fdErrorText.right = new FormAttachment( 100, 0 );\n    wErrorText.setLayoutData( fdErrorText );\n\n    // Bad lines files directory + extension\n    Control previous = wErrorText;\n\n    // BadDestDir line\n    wlWarnDestDir = new Label( wErrorComp, SWT.RIGHT );\n    wlWarnDestDir.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.WarnDestDir.Label\" ) );\n    props.setLook( wlWarnDestDir );\n\n    FormData fdlWarnDestDir = new FormData();\n    fdlWarnDestDir.left = new FormAttachment( 0, 0 );\n    fdlWarnDestDir.top = new FormAttachment( previous, margin * 4 );\n    fdlWarnDestDir.right = new FormAttachment( middle, -margin );\n    wlWarnDestDir.setLayoutData( fdlWarnDestDir );\n\n    wbbWarnDestDir = new Button( wErrorComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( wbbWarnDestDir );\n    wbbWarnDestDir.setText( BUTTON_BROWSE );\n    wbbWarnDestDir.setToolTipText( BaseMessages.getString( BASE_PKG, \"System.Tooltip.BrowseForDir\" ) );\n\n    FormData fdbBadDestDir = new FormData();\n    fdbBadDestDir.right = new FormAttachment( 100, 0 );\n    fdbBadDestDir.top = new FormAttachment( previous, margin * 4 );\n    wbbWarnDestDir.setLayoutData( fdbBadDestDir );\n\n    wbvWarnDestDir = new Button( wErrorComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( wbvWarnDestDir );\n    wbvWarnDestDir.setText( BUTTON_VARIABLE );\n    wbvWarnDestDir.setToolTipText( TOOLTIP_VARIABLE );\n\n    FormData 
fdbvWarnDestDir = new FormData();\n    fdbvWarnDestDir.right = new FormAttachment( wbbWarnDestDir, -margin );\n    fdbvWarnDestDir.top = new FormAttachment( previous, margin * 4 );\n    wbvWarnDestDir.setLayoutData( fdbvWarnDestDir );\n\n    wWarnExt = new Text( wErrorComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wWarnExt );\n    wWarnExt.addModifyListener( lsMod );\n\n    FormData fdWarnDestExt = new FormData();\n    fdWarnDestExt.left = new FormAttachment( wbvWarnDestDir, -150 );\n    fdWarnDestExt.right = new FormAttachment( wbvWarnDestDir, -margin );\n    fdWarnDestExt.top = new FormAttachment( previous, margin * 4 );\n    wWarnExt.setLayoutData( fdWarnDestExt );\n\n    wlWarnExt = new Label( wErrorComp, SWT.RIGHT );\n    wlWarnExt.setText( LABEL_EXTENSION );\n    props.setLook( wlWarnExt );\n\n    FormData fdlWarnDestExt = new FormData();\n    fdlWarnDestExt.top = new FormAttachment( previous, margin * 4 );\n    fdlWarnDestExt.right = new FormAttachment( wWarnExt, -margin );\n    wlWarnExt.setLayoutData( fdlWarnDestExt );\n\n    wWarnDestDir = new Text( wErrorComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wWarnDestDir );\n    wWarnDestDir.addModifyListener( lsMod );\n\n    FormData fdBadDestDir = new FormData();\n    fdBadDestDir.left = new FormAttachment( middle, 0 );\n    fdBadDestDir.right = new FormAttachment( wlWarnExt, -margin );\n    fdBadDestDir.top = new FormAttachment( previous, margin * 4 );\n    wWarnDestDir.setLayoutData( fdBadDestDir );\n\n    // Listen to the Browse... button\n    wbbWarnDestDir.addSelectionListener( new DirectoryBrowserAdapter( wWarnDestDir ) );\n\n    // Listen to the Variable... 
button\n    wbvWarnDestDir.addSelectionListener( VariableButtonListenerFactory.getSelectionAdapter( shell, wWarnDestDir,\n      transMeta ) );\n\n    // Whenever something changes, set the tooltip to the expanded version of the directory:\n    wWarnDestDir.addModifyListener( getModifyListenerTooltipText( wWarnDestDir ) );\n\n    // Error lines files directory + extension\n    previous = wWarnDestDir;\n\n    // ErrorDestDir line\n    wlErrorDestDir = new Label( wErrorComp, SWT.RIGHT );\n    wlErrorDestDir.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.ErrorDestDir.Label\" ) );\n    props.setLook( wlErrorDestDir );\n\n    FormData fdlErrorDestDir = new FormData();\n    fdlErrorDestDir.left = new FormAttachment( 0, 0 );\n    fdlErrorDestDir.top = new FormAttachment( previous, margin );\n    fdlErrorDestDir.right = new FormAttachment( middle, -margin );\n    wlErrorDestDir.setLayoutData( fdlErrorDestDir );\n\n    wbbErrorDestDir = new Button( wErrorComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( wbbErrorDestDir );\n    wbbErrorDestDir.setText( BUTTON_BROWSE );\n    wbbErrorDestDir.setToolTipText( BaseMessages.getString( BASE_PKG, \"System.Tooltip.BrowseForDir\" ) );\n\n    FormData fdbErrorDestDir = new FormData();\n    fdbErrorDestDir.right = new FormAttachment( 100, 0 );\n    fdbErrorDestDir.top = new FormAttachment( previous, margin );\n    wbbErrorDestDir.setLayoutData( fdbErrorDestDir );\n\n    wbvErrorDestDir = new Button( wErrorComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( wbvErrorDestDir );\n    wbvErrorDestDir.setText( BUTTON_VARIABLE );\n    wbvErrorDestDir.setToolTipText( TOOLTIP_VARIABLE );\n\n    FormData fdbvErrorDestDir = new FormData();\n    fdbvErrorDestDir.right = new FormAttachment( wbbErrorDestDir, -margin );\n    fdbvErrorDestDir.left = new FormAttachment( wbvWarnDestDir, 0, SWT.LEFT );\n    fdbvErrorDestDir.top = new FormAttachment( previous, margin );\n    wbvErrorDestDir.setLayoutData( fdbvErrorDestDir );\n\n    
wErrorExt = new Text( wErrorComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wErrorExt );\n    wErrorExt.addModifyListener( lsMod );\n\n    FormData fdErrorDestExt = new FormData();\n    fdErrorDestExt.left = new FormAttachment( wWarnExt, 0, SWT.LEFT );\n    fdErrorDestExt.right = new FormAttachment( wWarnExt, 0, SWT.RIGHT );\n    fdErrorDestExt.top = new FormAttachment( previous, margin );\n    wErrorExt.setLayoutData( fdErrorDestExt );\n\n    wlErrorExt = new Label( wErrorComp, SWT.RIGHT );\n    wlErrorExt.setText( LABEL_EXTENSION );\n    props.setLook( wlErrorExt );\n\n    FormData fdlErrorDestExt = new FormData();\n    fdlErrorDestExt.top = new FormAttachment( previous, margin );\n    fdlErrorDestExt.right = new FormAttachment( wErrorExt, -margin );\n    wlErrorExt.setLayoutData( fdlErrorDestExt );\n\n    wErrorDestDir = new Text( wErrorComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wErrorDestDir );\n    wErrorDestDir.addModifyListener( lsMod );\n\n    FormData fdErrorDestDir = new FormData();\n    fdErrorDestDir.left = new FormAttachment( middle, 0 );\n    fdErrorDestDir.right = new FormAttachment( wlErrorExt, -margin );\n    fdErrorDestDir.top = new FormAttachment( previous, margin );\n    wErrorDestDir.setLayoutData( fdErrorDestDir );\n\n    // Listen to the Browse... button\n    wbbErrorDestDir.addSelectionListener( new DirectoryBrowserAdapter( wErrorDestDir ) );\n\n    // Listen to the Variable... 
button\n    wbvErrorDestDir.addSelectionListener( VariableButtonListenerFactory.getSelectionAdapter( shell, wErrorDestDir,\n      transMeta ) );\n\n    // Whenever something changes, set the tooltip to the expanded version of the directory:\n    wErrorDestDir.addModifyListener( getModifyListenerTooltipText( wErrorDestDir ) );\n\n    // Data Error lines files directory + extension\n    previous = wErrorDestDir;\n\n    // LineNrDestDir line\n    wlLineNrDestDir = new Label( wErrorComp, SWT.RIGHT );\n    wlLineNrDestDir.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.LineNrDestDir.Label\" ) );\n    props.setLook( wlLineNrDestDir );\n\n    FormData fdlLineNrDestDir = new FormData();\n    fdlLineNrDestDir.left = new FormAttachment( 0, 0 );\n    fdlLineNrDestDir.top = new FormAttachment( previous, margin );\n    fdlLineNrDestDir.right = new FormAttachment( middle, -margin );\n    wlLineNrDestDir.setLayoutData( fdlLineNrDestDir );\n\n    wbbLineNrDestDir = new Button( wErrorComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( wbbLineNrDestDir );\n    wbbLineNrDestDir.setText( BUTTON_BROWSE );\n    wbbLineNrDestDir.setToolTipText( BaseMessages.getString( BASE_PKG, \"System.Tooltip.BrowseForDir\" ) );\n\n    FormData fdbLineNrDestDir = new FormData();\n    fdbLineNrDestDir.right = new FormAttachment( 100, 0 );\n    fdbLineNrDestDir.top = new FormAttachment( previous, margin );\n    wbbLineNrDestDir.setLayoutData( fdbLineNrDestDir );\n\n    wbvLineNrDestDir = new Button( wErrorComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( wbvLineNrDestDir );\n    wbvLineNrDestDir.setText( BUTTON_VARIABLE );\n    wbvLineNrDestDir.setToolTipText( TOOLTIP_VARIABLE );\n\n    FormData fdbvLineNrDestDir = new FormData();\n    fdbvLineNrDestDir.right = new FormAttachment( wbbLineNrDestDir, -margin );\n    fdbvLineNrDestDir.left = new FormAttachment( wbvErrorDestDir, 0, SWT.LEFT );\n    fdbvLineNrDestDir.top = new FormAttachment( previous, margin );\n    wbvLineNrDestDir.setLayoutData( 
fdbvLineNrDestDir );\n\n    wLineNrExt = new Text( wErrorComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wLineNrExt );\n    wLineNrExt.addModifyListener( lsMod );\n\n    FormData fdLineNrDestExt = new FormData();\n    fdLineNrDestExt.left = new FormAttachment( wErrorExt, 0, SWT.LEFT );\n    fdLineNrDestExt.right = new FormAttachment( wErrorExt, 0, SWT.RIGHT );\n    fdLineNrDestExt.top = new FormAttachment( previous, margin );\n    wLineNrExt.setLayoutData( fdLineNrDestExt );\n\n    wlLineNrExt = new Label( wErrorComp, SWT.RIGHT );\n    wlLineNrExt.setText( LABEL_EXTENSION );\n    props.setLook( wlLineNrExt );\n\n    FormData fdlLineNrDestExt = new FormData();\n    fdlLineNrDestExt.top = new FormAttachment( previous, margin );\n    fdlLineNrDestExt.right = new FormAttachment( wLineNrExt, -margin );\n    wlLineNrExt.setLayoutData( fdlLineNrDestExt );\n\n    wLineNrDestDir = new Text( wErrorComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wLineNrDestDir );\n    wLineNrDestDir.addModifyListener( lsMod );\n\n    FormData fdLineNrDestDir = new FormData();\n    fdLineNrDestDir.left = new FormAttachment( middle, 0 );\n    fdLineNrDestDir.right = new FormAttachment( wlLineNrExt, -margin );\n    fdLineNrDestDir.top = new FormAttachment( previous, margin );\n    wLineNrDestDir.setLayoutData( fdLineNrDestDir );\n\n    // Listen to the Browse... button\n    wbbLineNrDestDir.addSelectionListener( new DirectoryBrowserAdapter( wLineNrDestDir ) );\n\n    // Listen to the Variable... 
button\n    wbvLineNrDestDir.addSelectionListener( VariableButtonListenerFactory.getSelectionAdapter( shell, wLineNrDestDir,\n      transMeta ) );\n\n    // Whenever something changes, set the tooltip to the expanded version of the directory:\n    wLineNrDestDir.addModifyListener( getModifyListenerTooltipText( wLineNrDestDir ) );\n\n    FormData fdErrorComp = new FormData();\n    fdErrorComp.left = new FormAttachment( 0, 0 );\n    fdErrorComp.top = new FormAttachment( 0, 0 );\n    fdErrorComp.right = new FormAttachment( 100, 0 );\n    fdErrorComp.bottom = new FormAttachment( 100, 0 );\n    wErrorComp.setLayoutData( fdErrorComp );\n\n    wErrorComp.pack();\n    // What's the size:\n    Rectangle bounds = wErrorComp.getBounds();\n\n    wErrorSComp.setContent( wErrorComp );\n    wErrorSComp.setExpandHorizontal( true );\n    wErrorSComp.setExpandVertical( true );\n    wErrorSComp.setMinWidth( bounds.width );\n    wErrorSComp.setMinHeight( bounds.height );\n\n    wErrorTab.setControl( wErrorSComp );\n\n    // ///////////////////////////////////////////////////////////\n    // / END OF ERROR TAB\n    // ///////////////////////////////////////////////////////////\n\n  }\n\n  private void addFiltersTabs() {\n    // Filters tab...\n    CTabItem wFilterTab = new CTabItem( wTabFolder, SWT.NONE );\n    wFilterTab.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.FilterTab.TabTitle\" ) );\n\n    FormLayout filterLayout = new FormLayout();\n    filterLayout.marginWidth = Const.FORM_MARGIN;\n    filterLayout.marginHeight = Const.FORM_MARGIN;\n\n    Composite wFilterComp = new Composite( wTabFolder, SWT.NONE );\n    wFilterComp.setLayout( filterLayout );\n    props.setLook( wFilterComp );\n\n    final int FilterRows = input.getFilter().length;\n\n    ColumnInfo[] colinf =\n      new ColumnInfo[] {\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.FilterStringColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_TEXT, false ),\n        
new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.FilterPositionColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_TEXT, false ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.StopOnFilterColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_CCOMBO, YES_NO_COMBO ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.FilterPositiveColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_CCOMBO, YES_NO_COMBO ) };\n\n    colinf[ 2 ].setToolTip( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.StopOnFilterColumn.Tooltip\" ) );\n    colinf[ 3 ].setToolTip( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.FilterPositiveColumn.Tooltip\" ) );\n\n    wFilter = new TableView( transMeta, wFilterComp, SWT.FULL_SELECTION | SWT.MULTI, colinf, FilterRows, lsMod, props );\n\n    FormData fdFilter = new FormData();\n    fdFilter.left = new FormAttachment( 0, 0 );\n    fdFilter.top = new FormAttachment( 0, 0 );\n    fdFilter.right = new FormAttachment( 100, 0 );\n    fdFilter.bottom = new FormAttachment( 100, 0 );\n    wFilter.setLayoutData( fdFilter );\n\n    FormData fdFilterComp = new FormData();\n    fdFilterComp.left = new FormAttachment( 0, 0 );\n    fdFilterComp.top = new FormAttachment( 0, 0 );\n    fdFilterComp.right = new FormAttachment( 100, 0 );\n    fdFilterComp.bottom = new FormAttachment( 100, 0 );\n    wFilterComp.setLayoutData( fdFilterComp );\n\n    wFilterComp.layout();\n    wFilterTab.setControl( wFilterComp );\n  }\n\n  private void addFieldsTabs() {\n    // Fields tab...\n    CTabItem wFieldsTab = new CTabItem( wTabFolder, SWT.NONE );\n    wFieldsTab.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.FieldsTab.TabTitle\" ) );\n\n    FormLayout fieldsLayout = new FormLayout();\n    fieldsLayout.marginWidth = Const.FORM_MARGIN;\n    fieldsLayout.marginHeight = Const.FORM_MARGIN;\n\n    Composite wFieldsComp = new Composite( wTabFolder, SWT.NONE );\n    
wFieldsComp.setLayout( fieldsLayout );\n    props.setLook( wFieldsComp );\n\n    wGet = new Button( wFieldsComp, SWT.PUSH );\n    wGet.setText( BaseMessages.getString( BASE_PKG, \"System.Button.GetFields\" ) );\n    fdGet = new FormData();\n    fdGet.left = new FormAttachment( 50, 0 );\n    fdGet.bottom = new FormAttachment( 100, 0 );\n    wGet.setLayoutData( fdGet );\n\n    final int FieldsRows = input.inputFields.length;\n\n    ColumnInfo[] colinf =\n      new ColumnInfo[] {\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.NameColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_TEXT, false ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.TypeColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_CCOMBO, ValueMetaBase.getTypes(), true ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.FormatColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_FORMAT, 2 ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.PositionColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_TEXT, false ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.LengthColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_TEXT, false ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.PrecisionColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_TEXT, false ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.CurrencyColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_TEXT, false ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.DecimalColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_TEXT, false ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.GroupColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_TEXT, false ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, 
\"TextFileInputDialog.NullIfColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_TEXT, false ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.IfNullColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_TEXT, false ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.TrimTypeColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_CCOMBO, ValueMetaBase.trimTypeDesc, true ),\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.RepeatColumn.Column\" ),\n          ColumnInfo.COLUMN_TYPE_CCOMBO, new String[] { COMBO_YES, COMBO_NO }, true ) };\n\n    colinf[ 12 ].setToolTip( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.RepeatColumn.Tooltip\" ) );\n\n    wFields = new TableView( transMeta, wFieldsComp, SWT.FULL_SELECTION | SWT.MULTI, colinf, FieldsRows, lsMod, props );\n\n    FormData fdFields = new FormData();\n    fdFields.left = new FormAttachment( 0, 0 );\n    fdFields.top = new FormAttachment( 0, 0 );\n    fdFields.right = new FormAttachment( 100, 0 );\n    fdFields.bottom = new FormAttachment( wGet, -margin );\n    wFields.setLayoutData( fdFields );\n\n    FormData fdFieldsComp = new FormData();\n    fdFieldsComp.left = new FormAttachment( 0, 0 );\n    fdFieldsComp.top = new FormAttachment( 0, 0 );\n    fdFieldsComp.right = new FormAttachment( 100, 0 );\n    fdFieldsComp.bottom = new FormAttachment( 100, 0 );\n    wFieldsComp.setLayoutData( fdFieldsComp );\n\n    wFieldsComp.layout();\n    wFieldsTab.setControl( wFieldsComp );\n  }\n\n  public void setFlags() {\n    boolean accept = wAccFilenames.getSelection();\n    wlPassThruFields.setEnabled( accept );\n    wPassThruFields.setEnabled( accept );\n    if ( !wAccFilenames.getSelection() ) {\n      wPassThruFields.setSelection( false );\n    }\n    wlAccField.setEnabled( accept );\n    wAccField.setEnabled( accept );\n    wlAccStep.setEnabled( accept );\n    wAccStep.setEnabled( accept );\n    
wlFilenameList.setEnabled( !accept );\n    wFilenameList.setEnabled( !accept );\n    wbShowFiles.setEnabled( !accept );\n\n    wFirst.setEnabled( !accept );\n    wFirstHeader.setEnabled( !accept );\n\n    wlInclFilenameField.setEnabled( wInclFilename.getSelection() );\n    wInclFilenameField.setEnabled( wInclFilename.getSelection() );\n\n    wlInclRownumField.setEnabled( wInclRownum.getSelection() );\n    wInclRownumField.setEnabled( wInclRownum.getSelection() );\n    wlRownumByFileField.setEnabled( wInclRownum.getSelection() );\n    wRownumByFile.setEnabled( wInclRownum.getSelection() );\n\n    // Error handling tab...\n    wlSkipErrorLines.setEnabled( wErrorIgnored.getSelection() );\n    wSkipErrorLines.setEnabled( wErrorIgnored.getSelection() );\n    wlErrorCount.setEnabled( wErrorIgnored.getSelection() );\n    wErrorCount.setEnabled( wErrorIgnored.getSelection() );\n    wlErrorFields.setEnabled( wErrorIgnored.getSelection() );\n    wErrorFields.setEnabled( wErrorIgnored.getSelection() );\n    wlErrorText.setEnabled( wErrorIgnored.getSelection() );\n    wErrorText.setEnabled( wErrorIgnored.getSelection() );\n\n    wlWarnDestDir.setEnabled( wErrorIgnored.getSelection() );\n    wWarnDestDir.setEnabled( wErrorIgnored.getSelection() );\n    wlWarnExt.setEnabled( wErrorIgnored.getSelection() );\n    wWarnExt.setEnabled( wErrorIgnored.getSelection() );\n    wbbWarnDestDir.setEnabled( wErrorIgnored.getSelection() );\n    wbvWarnDestDir.setEnabled( wErrorIgnored.getSelection() );\n\n    wlErrorDestDir.setEnabled( wErrorIgnored.getSelection() );\n    wErrorDestDir.setEnabled( wErrorIgnored.getSelection() );\n    wlErrorExt.setEnabled( wErrorIgnored.getSelection() );\n    wErrorExt.setEnabled( wErrorIgnored.getSelection() );\n    wbbErrorDestDir.setEnabled( wErrorIgnored.getSelection() );\n    wbvErrorDestDir.setEnabled( wErrorIgnored.getSelection() );\n\n    wlLineNrDestDir.setEnabled( wErrorIgnored.getSelection() );\n    wLineNrDestDir.setEnabled( 
wErrorIgnored.getSelection() );\n    wlLineNrExt.setEnabled( wErrorIgnored.getSelection() );\n    wLineNrExt.setEnabled( wErrorIgnored.getSelection() );\n    wbbLineNrDestDir.setEnabled( wErrorIgnored.getSelection() );\n    wbvLineNrDestDir.setEnabled( wErrorIgnored.getSelection() );\n\n    wlNrHeader.setEnabled( wHeader.getSelection() );\n    wNrHeader.setEnabled( wHeader.getSelection() );\n    wlNrFooter.setEnabled( wFooter.getSelection() );\n    wNrFooter.setEnabled( wFooter.getSelection() );\n    wlNrWraps.setEnabled( wWraps.getSelection() );\n    wNrWraps.setEnabled( wWraps.getSelection() );\n\n    wlNrLinesPerPage.setEnabled( wLayoutPaged.getSelection() );\n    wNrLinesPerPage.setEnabled( wLayoutPaged.getSelection() );\n    wlNrLinesDocHeader.setEnabled( wLayoutPaged.getSelection() );\n    wNrLinesDocHeader.setEnabled( wLayoutPaged.getSelection() );\n  }\n\n  /**\n   * Read the data from the HadoopFileInputMeta object and show it in this dialog.\n   *\n   * @param meta The HadoopFileInputMeta object to obtain the data from.\n   */\n  public void getData( HadoopFileInputMeta meta ) {\n    final HadoopFileInputMeta in = meta;\n\n    wAccFilenames.setSelection( in.isAcceptingFilenames() );\n    wPassThruFields.setSelection( in.inputFiles.passingThruFields );\n    if ( in.getAcceptingField() != null ) {\n      wAccField.setText( in.getAcceptingField() );\n    }\n    if ( in.getAcceptingStep() != null ) {\n      wAccStep.setText( in.getAcceptingStep().getName() );\n    }\n    if ( in.getFileName() != null ) {\n      wFilenameList.removeAll();\n\n      for ( int i = 0; i < in.getFileName().length; i++ ) {\n        String sourceUrl = in.getFileName()[ i ];\n        String clusterName = input.getClusterNameBy( sourceUrl );\n        String environment = STATIC_ENVIRONMENT;\n        if ( in.environment != null && i < in.environment.length && in.environment[ i ] != null ) {\n          environment = in.environment[ i ];\n        }\n        if ( clusterName != null ) {\n  
        clusterName =\n            clusterName.startsWith( HadoopFileInputMeta.LOCAL_SOURCE_FILE ) ? LOCAL_ENVIRONMENT : clusterName;\n          clusterName =\n            clusterName.startsWith( HadoopFileInputMeta.STATIC_SOURCE_FILE ) ? STATIC_ENVIRONMENT : clusterName;\n          clusterName = clusterName.startsWith( HadoopFileInputMeta.S3_SOURCE_FILE ) ? S3_ENVIRONMENT : clusterName;\n          if ( clusterName.equals( LOCAL_ENVIRONMENT ) || clusterName.equals( STATIC_ENVIRONMENT )\n            || clusterName.equals( S3_ENVIRONMENT ) ) {\n            environment = clusterName;\n          } else {\n            sourceUrl = input.getUrlPath( sourceUrl );\n            NamedCluster c = namedClusterService.getNamedClusterByName( clusterName, metaStore );\n            environment = c == null ? \"\" : clusterName;\n          }\n        }\n\n        wFilenameList\n          .add( environment, sourceUrl, in.inputFiles.fileMask[ i ],\n            in.getRequiredFilesDesc( in.inputFiles.fileRequired[ i ] ),\n            in.getRequiredFilesDesc( in.inputFiles.includeSubFolders[ i ] ) );\n      }\n      wFilenameList.removeEmptyRows();\n      wFilenameList.setRowNums();\n      wFilenameList.optWidth( true );\n    }\n    if ( in.content.fileType != null ) {\n      wFiletype.setText( in.content.fileType );\n    }\n    if ( in.content.separator != null ) {\n      wSeparator.setText( in.content.separator );\n    }\n    if ( in.content.enclosure != null ) {\n      wEnclosure.setText( in.content.enclosure );\n    }\n    if ( in.content.escapeCharacter != null ) {\n      wEscape.setText( in.content.escapeCharacter );\n    }\n    wHeader.setSelection( in.content.header );\n    wNrHeader.setText( \"\" + in.content.nrHeaderLines );\n    wFooter.setSelection( in.content.footer );\n    wNrFooter.setText( \"\" + in.content.nrFooterLines );\n    wWraps.setSelection( in.content.lineWrapped );\n    wNrWraps.setText( \"\" + in.content.nrWraps );\n    wLayoutPaged.setSelection( 
in.content.layoutPaged );\n    wNrLinesPerPage.setText( \"\" + in.content.nrLinesPerPage );\n    wNrLinesDocHeader.setText( \"\" + in.content.nrLinesDocHeader );\n    if ( in.content.fileCompression != null ) {\n      wCompression.setText( in.content.fileCompression );\n    }\n    wNoempty.setSelection( in.content.noEmptyLines );\n    wInclFilename.setSelection( in.content.includeFilename );\n    wInclRownum.setSelection( in.content.includeRowNumber );\n    wRownumByFile.setSelection( in.content.rowNumberByFile );\n    wDateLenient.setSelection( in.content.dateFormatLenient );\n    wAddResult.setSelection( in.inputFiles.isaddresult );\n\n    if ( in.content.filenameField != null ) {\n      wInclFilenameField.setText( in.content.filenameField );\n    }\n    if ( in.content.rowNumberField != null ) {\n      wInclRownumField.setText( in.content.rowNumberField );\n    }\n    if ( in.content.fileFormat != null ) {\n      wFormat.setText( in.content.fileFormat );\n    }\n    wLimit.setText( \"\" + in.content.rowLimit );\n\n    logDebug( \"getting fields info...\" );\n    getFieldsData( in, false );\n\n    if ( in.getEncoding() != null ) {\n      wEncoding.setText( in.getEncoding() );\n    }\n\n    // Error handling fields...\n    wErrorIgnored.setSelection( in.errorHandling.errorIgnored );\n    wSkipErrorLines.setSelection( in.isErrorLineSkipped() );\n    if ( in.getErrorCountField() != null ) {\n      wErrorCount.setText( in.getErrorCountField() );\n    }\n    if ( in.getErrorFieldsField() != null ) {\n      wErrorFields.setText( in.getErrorFieldsField() );\n    }\n    if ( in.getErrorTextField() != null ) {\n      wErrorText.setText( in.getErrorTextField() );\n    }\n    if ( in.errorHandling.warningFilesDestinationDirectory != null ) {\n      wWarnDestDir.setText( in.errorHandling.warningFilesDestinationDirectory );\n    }\n    if ( in.errorHandling.warningFilesExtension != null ) {\n      wWarnExt.setText( in.errorHandling.warningFilesExtension );\n    }\n    if ( 
in.errorHandling.errorFilesDestinationDirectory != null ) {\n      wErrorDestDir.setText( in.errorHandling.errorFilesDestinationDirectory );\n    }\n    if ( in.errorHandling.errorFilesExtension != null ) {\n      wErrorExt.setText( in.errorHandling.errorFilesExtension );\n    }\n    if ( in.errorHandling.lineNumberFilesDestinationDirectory != null ) {\n      wLineNrDestDir.setText( in.errorHandling.lineNumberFilesDestinationDirectory );\n    }\n    if ( in.errorHandling.lineNumberFilesExtension != null ) {\n      wLineNrExt.setText( in.errorHandling.lineNumberFilesExtension );\n    }\n    for ( int i = 0; i < in.getFilter().length; i++ ) {\n      TableItem item = wFilter.table.getItem( i );\n\n      TextFileFilter filter = in.getFilter()[ i ];\n      if ( filter.getFilterString() != null ) {\n        item.setText( 1, filter.getFilterString() );\n      }\n      if ( filter.getFilterPosition() >= 0 ) {\n        item.setText( 2, \"\" + filter.getFilterPosition() );\n      }\n      item.setText( 3, filter.isFilterLastLine() ? COMBO_YES : COMBO_NO );\n      item.setText( 4, filter.isFilterPositive() ? 
COMBO_YES : COMBO_NO );\n    }\n\n    // Date locale\n    wDateLocale.setText( in.content.dateFormatLocale.toString() );\n\n    wFields.removeEmptyRows();\n    wFields.setRowNums();\n    wFields.optWidth( true );\n\n    wFilter.removeEmptyRows();\n    wFilter.setRowNums();\n    wFilter.optWidth( true );\n\n    setFlags();\n\n    wStepname.selectAll();\n  }\n\n  private void getFieldsData( HadoopFileInputMeta in, boolean insertAtTop ) {\n    for ( int i = 0; i < in.inputFields.length; i++ ) {\n      BaseFileField field = in.inputFields[ i ];\n\n      TableItem item;\n\n      if ( insertAtTop ) {\n        item = new TableItem( wFields.table, SWT.NONE, i );\n      } else {\n        // Reuse the existing row at this index when there is one; otherwise append a new row.\n        if ( i >= wFields.table.getItemCount() ) {\n          item = new TableItem( wFields.table, SWT.NONE );\n        } else {\n          item = wFields.table.getItem( i );\n        }\n      }\n\n      item.setText( 1, field.getName() );\n      String type = field.getTypeDesc();\n      String format = field.getFormat();\n      String position = \"\" + field.getPosition();\n      String length = \"\" + field.getLength();\n      String prec = \"\" + field.getPrecision();\n      String curr = field.getCurrencySymbol();\n      String group = field.getGroupSymbol();\n      String decim = field.getDecimalSymbol();\n      String def = field.getNullString();\n      String ifNull = field.getIfNullValue();\n      String trim = field.getTrimTypeDesc();\n      String rep =\n        field.isRepeated() ? 
COMBO_YES : COMBO_NO;\n\n      if ( type != null ) {\n        item.setText( 2, type );\n      }\n      if ( format != null ) {\n        item.setText( 3, format );\n      }\n      if ( position != null && !\"-1\".equals( position ) ) {\n        item.setText( 4, position );\n      }\n      if ( length != null && !\"-1\".equals( length ) ) {\n        item.setText( 5, length );\n      }\n      if ( prec != null && !\"-1\".equals( prec ) ) {\n        item.setText( 6, prec );\n      }\n      if ( curr != null ) {\n        item.setText( 7, curr );\n      }\n      if ( decim != null ) {\n        item.setText( 8, decim );\n      }\n      if ( group != null ) {\n        item.setText( 9, group );\n      }\n      if ( def != null ) {\n        item.setText( 10, def );\n      }\n      if ( ifNull != null ) {\n        item.setText( 11, ifNull );\n      }\n      if ( trim != null ) {\n        item.setText( 12, trim );\n      }\n      if ( rep != null ) {\n        item.setText( 13, rep );\n      }\n    }\n\n  }\n\n  private void setEncodings() {\n    // Encoding of the text file:\n    if ( !gotEncodings ) {\n      gotEncodings = true;\n\n      wEncoding.removeAll();\n      List<Charset> values = new ArrayList<>( Charset.availableCharsets().values() );\n      for ( int i = 0; i < values.size(); i++ ) {\n        Charset charSet = values.get( i );\n        wEncoding.add( charSet.displayName() );\n      }\n\n      // Now select the default!\n      String defEncoding = Const.getEnvironmentVariable( \"file.encoding\", \"UTF-8\" );\n      int idx = Const.indexOfString( defEncoding, wEncoding.getItems() );\n      if ( idx >= 0 ) {\n        wEncoding.select( idx );\n      }\n    }\n  }\n\n  private void cancel() {\n    stepname = null;\n    input.setChanged( changed );\n    dispose();\n  }\n\n  private void ok() {\n    if ( Utils.isEmpty( wStepname.getText() ) ) {\n      return;\n    }\n    getInfo( input );\n    dispose();\n  }\n\n  private void getInfo( HadoopFileInputMeta meta ) {\n    
stepname = wStepname.getText(); // return value\n\n    // copy info to HadoopFileInputMeta class (input)\n    meta.inputFiles.acceptingFilenames = wAccFilenames.getSelection();\n    meta.inputFiles.passingThruFields = wPassThruFields.getSelection();\n    meta.inputFiles.acceptingField = wAccField.getText();\n    meta.inputFiles.acceptingStepName = wAccStep.getText();\n    meta.setAcceptingStep( transMeta.findStep( wAccStep.getText() ) );\n\n    meta.content.fileType = wFiletype.getText();\n    meta.content.fileFormat = wFormat.getText();\n    meta.content.separator = wSeparator.getText();\n    meta.content.enclosure = wEnclosure.getText();\n    meta.content.escapeCharacter = wEscape.getText();\n    meta.content.rowLimit = Const.toLong( wLimit.getText(), 0L );\n    meta.content.filenameField = wInclFilenameField.getText();\n    meta.content.rowNumberField = wInclRownumField.getText();\n    meta.inputFiles.isaddresult = wAddResult.getSelection();\n\n    meta.content.includeFilename = wInclFilename.getSelection();\n    meta.content.includeRowNumber = wInclRownum.getSelection();\n    meta.content.rowNumberByFile = wRownumByFile.getSelection();\n    meta.content.header = wHeader.getSelection();\n    meta.content.nrHeaderLines = Const.toInt( wNrHeader.getText(), 1 );\n    meta.content.footer = wFooter.getSelection();\n    meta.content.nrFooterLines = Const.toInt( wNrFooter.getText(), 1 );\n    meta.content.lineWrapped = wWraps.getSelection();\n    meta.content.nrWraps = Const.toInt( wNrWraps.getText(), 1 );\n    meta.content.layoutPaged = wLayoutPaged.getSelection();\n    meta.content.nrLinesPerPage = Const.toInt( wNrLinesPerPage.getText(), 80 );\n    meta.content.nrLinesDocHeader = Const.toInt( wNrLinesDocHeader.getText(), 0 );\n    meta.content.fileCompression = wCompression.getText();\n    meta.content.dateFormatLenient = wDateLenient.getSelection();\n    meta.content.noEmptyLines = wNoempty.getSelection();\n    meta.content.encoding = wEncoding.getText();\n\n    int 
nrfiles = wFilenameList.getItemCount();\n    int nrfields = wFields.nrNonEmpty();\n    int nrfilters = wFilter.nrNonEmpty();\n    meta.allocate( nrfiles, nrfields, nrfilters );\n\n    Map<String, String> namedClusterURLMappings = new HashMap<>();\n    String[] fileNames = new String[ wFilenameList.getItems( 1 ).length ];\n    meta.environment = wFilenameList.getItems( 0 );\n\n    for ( int i = 0; i < meta.environment.length; i++ ) {\n      String sourceNc = meta.environment[ i ];\n      sourceNc = sourceNc.equals( LOCAL_ENVIRONMENT ) ? HadoopFileInputMeta.LOCAL_SOURCE_FILE + i : sourceNc;\n      sourceNc = sourceNc.equals( STATIC_ENVIRONMENT ) ? HadoopFileInputMeta.STATIC_SOURCE_FILE + i : sourceNc;\n      sourceNc = sourceNc.equals( S3_ENVIRONMENT ) ? HadoopFileInputMeta.S3_SOURCE_FILE + i : sourceNc;\n      String source = wFilenameList.getItems( 1 )[ i ];\n      if ( !Utils.isEmpty( source ) ) {\n        fileNames[ i ] = input.loadUrl( source, sourceNc, getMetaStore(), namedClusterURLMappings );\n      } else {\n        fileNames[ i ] = \"\";\n      }\n    }\n\n    meta.setFileName( fileNames );\n    meta.inputFiles.fileMask = wFilenameList.getItems( 2 );\n    meta.inputFiles.setFileRequired( wFilenameList.getItems( 3 ) );\n    meta.inputFiles.setIncludeSubFolders( wFilenameList.getItems( 4 ) );\n\n    input.setNamedClusterURLMapping( namedClusterURLMappings );\n\n    for ( int i = 0; i < nrfields; i++ ) {\n      BaseFileField field = new BaseFileField();\n\n      TableItem item = wFields.getNonEmpty( i );\n      field.setName( item.getText( 1 ) );\n      field.setType( ValueMetaBase.getType( item.getText( 2 ) ) );\n      field.setFormat( item.getText( 3 ) );\n      field.setPosition( Const.toInt( item.getText( 4 ), -1 ) );\n      field.setLength( Const.toInt( item.getText( 5 ), -1 ) );\n      field.setPrecision( Const.toInt( item.getText( 6 ), -1 ) );\n      field.setCurrencySymbol( item.getText( 7 ) );\n      field.setDecimalSymbol( item.getText( 8 ) );\n      
field.setGroupSymbol( item.getText( 9 ) );\n      field.setNullString( item.getText( 10 ) );\n      field.setIfNullValue( item.getText( 11 ) );\n      field.setTrimType( ValueMetaBase.getTrimTypeByDesc( item.getText( 12 ) ) );\n      field.setRepeated( COMBO_YES.equalsIgnoreCase( item.getText( 13 ) ) );\n\n      ( meta.inputFields )[ i ] = field;\n    }\n\n    for ( int i = 0; i < nrfilters; i++ ) {\n      TableItem item = wFilter.getNonEmpty( i );\n      TextFileFilter filter = new TextFileFilter();\n      ( meta.getFilter() )[ i ] = filter;\n\n      filter.setFilterString( item.getText( 1 ) );\n      filter.setFilterPosition( Const.toInt( item.getText( 2 ), -1 ) );\n      filter.setFilterLastLine( COMBO_YES.equalsIgnoreCase( item.getText( 3 ) ) );\n      filter.setFilterPositive( COMBO_YES.equalsIgnoreCase( item.getText( 4 ) ) );\n    }\n    // Error handling fields...\n    meta.errorHandling.errorIgnored = wErrorIgnored.getSelection();\n    meta.setErrorLineSkipped( wSkipErrorLines.getSelection() );\n    meta.setErrorCountField( wErrorCount.getText() );\n    meta.setErrorFieldsField( wErrorFields.getText() );\n    meta.setErrorTextField( wErrorText.getText() );\n\n    meta.errorHandling.warningFilesDestinationDirectory = wWarnDestDir.getText();\n    meta.errorHandling.warningFilesExtension = wWarnExt.getText();\n    meta.errorHandling.errorFilesDestinationDirectory = wErrorDestDir.getText();\n    meta.errorHandling.errorFilesExtension = wErrorExt.getText();\n    meta.errorHandling.lineNumberFilesDestinationDirectory = wLineNrDestDir.getText();\n    meta.errorHandling.lineNumberFilesExtension = wLineNrExt.getText();\n\n    // Date format Locale (both branches of the old check assigned an equal Locale, so assign directly)\n    meta.content.dateFormatLocale = EnvUtil.createLocale( wDateLocale.getText() );\n  }\n\n  private void get() {\n    if ( 
wFiletype.getText().equalsIgnoreCase( \"CSV\" ) ) {\n      getCSV();\n    } else {\n      getFixed();\n    }\n  }\n\n  // Get the data layout\n  private void getCSV() {\n    HadoopFileInputMeta meta = new HadoopFileInputMeta();\n    getInfo( meta );\n    HadoopFileInputMeta previousMeta = (HadoopFileInputMeta) meta.clone();\n    FileInputList textFileList = meta.getTextFileList( transMeta.getBowl(), transMeta );\n    InputStream inputStream = null;\n\n    String delimiter = transMeta.environmentSubstitute( meta.content.separator );\n\n    if ( textFileList.nrOfFiles() > 0 ) {\n      int clearFields = meta.content.header ? SWT.YES : SWT.NO;\n      int nrInputFields = meta.inputFields.length;\n\n      if ( meta.content.header && nrInputFields > 0 ) {\n        MessageBox mb = new MessageBox( shell, SWT.YES | SWT.NO | SWT.CANCEL | SWT.ICON_QUESTION );\n        mb.setMessage( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.ClearFieldList.DialogMessage\" ) );\n        mb.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.ClearFieldList.DialogTitle\" ) );\n        clearFields = mb.open();\n        if ( clearFields == SWT.CANCEL ) {\n          return;\n        }\n      }\n\n      try {\n        wFields.table.removeAll();\n        Table table = wFields.table;\n        inputStream = getInputStream( meta, textFileList );\n        InputStreamReader reader = getInputStreamReader( meta, inputStream );\n\n        if ( clearFields == SWT.YES || !meta.content.header || nrInputFields > 0 ) {\n          // Scan the header-line, determine fields...\n          String line = null;\n\n          if ( meta.content.header || meta.inputFields.length == 0 ) {\n            line = getLine( meta, textFileList );\n            if ( line != null ) {\n              // Estimate the number of input fields...\n            
  // Chop up the line using the delimiter\n              String[] guessedFields =\n                TextFileInputUtils.guessStringsFromLine( new Variables(), log, line, meta, delimiter, StringUtil\n                  .substituteHex( meta.content.enclosure ), StringUtil.substituteHex( meta.content.escapeCharacter ) );\n\n              for ( int i = 0; i < guessedFields.length; i++ ) {\n                String field = guessedFields[ i ];\n                if ( field == null || field.length() == 0 || ( nrInputFields == 0 && !meta.content.header ) ) {\n                  field = \"Field\" + ( i + 1 );\n                } else {\n                  // Trim the field\n                  field = Const.trim( field );\n                  // Replace all spaces & - with underscore _\n                  field = Const.replace( field, \" \", \"_\" );\n                  field = Const.replace( field, \"-\", \"_\" );\n                }\n\n                TableItem item = new TableItem( table, SWT.NONE );\n                item.setText( 1, field );\n                item.setText( 2, \"String\" ); // The default type is String...\n              }\n\n              wFields.setRowNums();\n              wFields.optWidth( true );\n\n              // Copy it...\n              getInfo( meta );\n            }\n          }\n\n          // Sample a few lines to determine the correct type of the fields...\n          String shellText = BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.LinesToSample.DialogTitle\" );\n          String lineText = BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.LinesToSample.DialogMessage\" );\n          EnterNumberDialog end = new EnterNumberDialog( shell, 100, shellText, lineText );\n          int samples = end.open();\n          if ( samples >= 0 ) {\n            getInfo( meta );\n\n            TextFileCSVImportProgressDialog pd =\n              new TextFileCSVImportProgressDialog( shell, meta, transMeta, reader, samples, clearFields == SWT.YES );\n            
String message = pd.open();\n            if ( message != null ) {\n              wFields.removeAll();\n\n              // OK, what's the result of our search?\n              getData( meta );\n\n              // If we didn't want the list to be cleared, we need to re-inject the previous values...\n              //\n              if ( clearFields == SWT.NO ) {\n                getFieldsData( previousMeta, true );\n                wFields.table.setSelection( previousMeta.inputFields.length, wFields.table.getItemCount() - 1 );\n              }\n\n              wFields.removeEmptyRows();\n              wFields.setRowNums();\n              wFields.optWidth( true );\n\n              EnterTextDialog etd =\n                new EnterTextDialog( shell, BaseMessages.getString( BASE_PKG,\n                  \"TextFileInputDialog.ScanResults.DialogTitle\" ), BaseMessages.getString( BASE_PKG,\n                  \"TextFileInputDialog.ScanResults.DialogMessage\" ), message, true );\n              etd.setReadOnly();\n              etd.open();\n            }\n          }\n        } else {\n          MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n          mb.setMessage(\n            BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.UnableToReadHeaderLine.DialogMessage\" ) );\n          mb.setText( ERROR_TITLE );\n          mb.open();\n        }\n      } catch ( IOException e ) {\n        new ErrorDialog( shell, BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.IOError.DialogTitle\" ),\n          BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.IOError.DialogMessage\" ), e );\n      } catch ( KettleException e ) {\n        new ErrorDialog( shell, ERROR_TITLE, BaseMessages\n          .getString( BASE_PKG, \"TextFileInputDialog.ErrorGettingFileDesc.DialogMessage\" ), e );\n      } finally {\n        try {\n          inputStream.close();\n        } catch ( Exception e ) {\n          // Ignore errors\n        }\n      }\n    } else {\n      MessageBox 
mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n      mb.setMessage( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.NoValidFileFound.DialogMessage\" ) );\n      mb.setText( ERROR_TITLE );\n      mb.open();\n    }\n  }\n\n  public static final int guessPrecision( double d ) {\n    // Round numbers\n    long frac = Math.round( ( d - Math.floor( d ) ) * 1E10 ); // max precision : 10\n    int precision = 10;\n\n    // 0,34 --> 3400000000\n    // 0 to the right --> precision -1!\n    // 0 to the right means frac%10 == 0\n\n    while ( precision >= 0 && ( frac % 10 ) == 0 ) {\n      frac /= 10;\n      precision--;\n    }\n    precision++;\n\n    return precision;\n  }\n\n  public static final int guessIntLength( double d ) {\n    double flr = Math.floor( d );\n    int len = 1;\n\n    while ( flr > 9 ) {\n      flr /= 10;\n      flr = Math.floor( flr );\n      len++;\n    }\n\n    return len;\n  }\n\n  public static final int guessLength( double d ) {\n    int intlen = guessIntLength( d );\n    int precis = guessPrecision( d );\n    int length = 1;\n\n    if ( precis > 0 ) {\n      length = intlen + 1 + precis;\n    } else {\n      length = intlen;\n    }\n\n    return length;\n  }\n\n  // Preview the data\n  private void preview() {\n    // Create the XML input step\n    HadoopFileInputMeta oneMeta = new HadoopFileInputMeta();\n    getInfo( oneMeta );\n\n    if ( oneMeta.isAcceptingFilenames() ) {\n      MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_INFORMATION );\n\n      // Nothing found that matches your criteria\n      mb.setMessage( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Dialog.SpecifyASampleFile.Message\" ) );\n\n      // Sorry!\n      mb.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.Dialog.SpecifyASampleFile.Title\" ) );\n      mb.open();\n      return;\n    }\n\n    TransMeta previewMeta =\n      TransPreviewFactory.generatePreviewTransformation( transMeta, oneMeta, wStepname.getText() );\n\n    
EnterNumberDialog numberDialog =\n      new EnterNumberDialog( shell, props.getDefaultPreviewSize(), BaseMessages.getString( BASE_PKG,\n        \"TextFileInputDialog.PreviewSize.DialogTitle\" ), BaseMessages.getString( BASE_PKG,\n        \"TextFileInputDialog.PreviewSize.DialogMessage\" ) );\n    int previewSize = numberDialog.open();\n    if ( previewSize > 0 ) {\n      TransPreviewProgressDialog progressDialog =\n        new TransPreviewProgressDialog( shell, previewMeta, new String[] { wStepname.getText() },\n          new int[] { previewSize } );\n      progressDialog.open();\n\n      Trans trans = progressDialog.getTrans();\n      String loggingText = progressDialog.getLoggingText();\n\n      if ( !progressDialog.isCancelled() ) {\n        if ( trans.getResult() != null && trans.getResult().getNrErrors() > 0 ) {\n          EnterTextDialog etd =\n            new EnterTextDialog( shell, BaseMessages.getString( BASE_PKG, \"System.Dialog.PreviewError.Title\" ),\n              BaseMessages.getString( BASE_PKG, \"System.Dialog.PreviewError.Message\" ), loggingText, true );\n          etd.setReadOnly();\n          etd.open();\n        }\n      }\n\n      PreviewRowsDialog prd =\n        new PreviewRowsDialog( shell, transMeta, SWT.NONE, wStepname.getText(), progressDialog\n          .getPreviewRowsMeta( wStepname.getText() ), progressDialog.getPreviewRows( wStepname.getText() ),\n          loggingText );\n      prd.open();\n    }\n  }\n\n  // Get the first x lines\n  private void first( boolean skipHeaders ) {\n    HadoopFileInputMeta info = new HadoopFileInputMeta();\n    getInfo( info );\n\n    try {\n      if ( info.getTextFileList( transMeta.getBowl(), transMeta ).nrOfFiles() > 0 ) {\n        String shellText = BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.LinesToView.DialogTitle\" );\n        String lineText = BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.LinesToView.DialogMessage\" );\n        EnterNumberDialog end = new EnterNumberDialog( 
shell, 100, shellText, lineText );\n        int nrLines = end.open();\n        if ( nrLines >= 0 ) {\n          List<String> linesList = getFirst( nrLines, skipHeaders );\n          if ( linesList != null && linesList.size() > 0 ) {\n            StringBuilder firstlines = new StringBuilder();\n            for ( int i = 0; i < linesList.size(); i++ ) {\n              firstlines.append( linesList.get( i ) ).append( Const.CR );\n            }\n            EnterTextDialog etd =\n              new EnterTextDialog( shell, BaseMessages.getString( BASE_PKG,\n                \"TextFileInputDialog.ContentOfFirstFile.DialogTitle\" ),\n                ( nrLines == 0 ? BaseMessages.getString( BASE_PKG,\n                  \"TextFileInputDialog.ContentOfFirstFile.AllLines.DialogMessage\" ) : BaseMessages.getString(\n                  BASE_PKG, \"TextFileInputDialog.ContentOfFirstFile.NLines.DialogMessage\", \"\" + nrLines ) ),\n                firstlines.toString(), true );\n            etd.setReadOnly();\n            etd.open();\n          } else {\n            MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n            mb.setMessage( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.UnableToReadLines.DialogMessage\" ) );\n            mb.setText( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.UnableToReadLines.DialogTitle\" ) );\n            mb.open();\n          }\n        }\n      } else {\n        MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n        mb.setMessage( BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.NoValidFile.DialogMessage\" ) );\n        mb.setText( ERROR_TITLE );\n        mb.open();\n      }\n    } catch ( KettleException e ) {\n      new ErrorDialog( shell, ERROR_TITLE, BaseMessages.getString(\n        BASE_PKG, \"TextFileInputDialog.ErrorGettingData.DialogMessage\" ), e );\n    }\n  }\n\n  // Get the first x lines\n  private List<String> getFirst( int nrlines, boolean skipHeaders ) throws KettleException {\n    HadoopFileInputMeta meta = new 
HadoopFileInputMeta();\n    getInfo( meta );\n    FileInputList textFileList = meta.getTextFileList( transMeta.getBowl(), transMeta );\n\n    InputStream fi = null;\n    InputStream f = null;\n    StringBuilder lineStringBuilder = new StringBuilder( 256 );\n    int fileFormatType = meta.getFileFormatTypeNr();\n\n    List<String> retval = new ArrayList<>();\n\n    if ( textFileList.nrOfFiles() > 0 ) {\n      FileObject file = textFileList.getFile( 0 );\n      try {\n        fi = KettleVFS.getInputStream( file );\n\n        CompressionProvider provider =\n          CompressionProviderFactory.getInstance().createCompressionProviderInstance( meta.content.fileCompression );\n        f = provider.createInputStream( fi );\n\n        BufferedInputStreamReader reader;\n        if ( meta.getEncoding() != null && meta.getEncoding().length() > 0 ) {\n          reader = new BufferedInputStreamReader( new InputStreamReader( f, meta.getEncoding() ) );\n        } else {\n          reader = new BufferedInputStreamReader( new InputStreamReader( f ) );\n        }\n\n        int linenr = 0;\n        int maxnr = nrlines + ( meta.content.header ? 
meta.content.nrHeaderLines : 0 );\n\n        if ( skipHeaders ) {\n          // Skip the document header lines first if more than one; it helps us position\n          if ( meta.content.layoutPaged && meta.content.nrLinesDocHeader > 0 ) {\n            int skipped = 0;\n            String line = TextFileInputUtils.getLine( log, reader, fileFormatType, lineStringBuilder );\n            while ( line != null && skipped < meta.content.nrLinesDocHeader - 1 ) {\n              skipped++;\n              line = TextFileInputUtils.getLine( log, reader, fileFormatType, lineStringBuilder );\n            }\n          }\n\n          // Skip the header lines first if more than one; it helps us position\n          if ( meta.content.header && meta.content.nrHeaderLines > 0 ) {\n            int skipped = 0;\n            String line = TextFileInputUtils.getLine( log, reader, fileFormatType, lineStringBuilder );\n            while ( line != null && skipped < meta.content.nrHeaderLines - 1 ) {\n              skipped++;\n              line = TextFileInputUtils.getLine( log, reader, fileFormatType, lineStringBuilder );\n            }\n          }\n        }\n\n        String line = TextFileInputUtils.getLine( log, reader, fileFormatType, lineStringBuilder );\n        while ( line != null && ( linenr < maxnr || nrlines == 0 ) ) {\n          retval.add( line );\n          linenr++;\n          line = TextFileInputUtils.getLine( log, reader, fileFormatType, lineStringBuilder );\n        }\n      } catch ( Exception e ) {\n        throw new KettleException( BaseMessages.getString( BASE_PKG,\n          \"TextFileInputDialog.Exception.ErrorGettingFirstLines\", \"\" + nrlines, file.getName().getURI() ), e );\n      } finally {\n        try {\n          if ( f != null ) {\n            f.close(); // closing the compression stream also closes the wrapped file stream\n          } else if ( fi != null ) {\n            fi.close(); // creating the compression stream failed; close the raw file stream\n          }\n        } catch ( Exception e ) {\n          // Ignore errors\n        }\n      }\n    }\n\n    return retval;\n  }\n\n  private void getFixed() {\n    HadoopFileInputMeta info = new HadoopFileInputMeta();\n    getInfo( info );\n\n    Shell sh = 
new Shell( shell, SWT.DIALOG_TRIM | SWT.RESIZE | SWT.MAX | SWT.MIN );\n\n    try {\n      List<String> rows = getFirst( 50, false );\n      fields = getFields( info, rows );\n\n      final TextFileImportWizardPage1 page1 = new TextFileImportWizardPage1( \"1\", props, rows, fields );\n      page1.createControl( sh );\n      final TextFileImportWizardPage2 page2 = new TextFileImportWizardPage2( \"2\", props, rows, fields );\n      page2.createControl( sh );\n\n      Wizard wizard = new Wizard() {\n        @Override\n        public boolean performFinish() {\n          wFields.clearAll( false );\n\n          for ( int i = 0; i < fields.size(); i++ ) {\n            BaseFileField field = (BaseFileField) fields.get( i );\n            if ( !field.isIgnored() && field.getLength() > 0 ) {\n              TableItem item = new TableItem( wFields.table, SWT.NONE );\n              item.setText( 1, field.getName() );\n              item.setText( 2, \"\" + field.getTypeDesc() );\n              item.setText( 3, \"\" + field.getFormat() );\n              item.setText( 4, \"\" + field.getPosition() );\n              item.setText( 5, field.getLength() < 0 ? \"\" : \"\" + field.getLength() );\n              item.setText( 6, field.getPrecision() < 0 ? \"\" : \"\" + field.getPrecision() );\n              item.setText( 7, \"\" + field.getCurrencySymbol() );\n              item.setText( 8, \"\" + field.getDecimalSymbol() );\n              item.setText( 9, \"\" + field.getGroupSymbol() );\n              item.setText( 10, \"\" + field.getNullString() );\n              item.setText( 11, \"\" + field.getIfNullValue() );\n              item.setText( 12, \"\" + field.getTrimTypeDesc() );\n              item.setText( 13, field.isRepeated() ? 
COMBO_YES : COMBO_NO );\n            }\n\n          }\n          int size = wFields.table.getItemCount();\n          if ( size == 0 ) {\n            new TableItem( wFields.table, SWT.NONE );\n          }\n\n          wFields.removeEmptyRows();\n          wFields.setRowNums();\n          wFields.optWidth( true );\n\n          input.setChanged();\n\n          return true;\n        }\n      };\n\n      wizard.addPage( page1 );\n      wizard.addPage( page2 );\n\n      WizardDialog wd = new WizardDialog( shell, wizard );\n      Window.setDefaultImage( GUIResource.getInstance().getImageWizard() );\n      wd.setMinimumPageSize( 700, 375 );\n      wd.updateSize();\n      wd.open();\n    } catch ( Exception e ) {\n      new ErrorDialog( shell, BaseMessages.getString( BASE_PKG,\n        \"TextFileInputDialog.ErrorShowingFixedWizard.DialogTitle\" ), BaseMessages.getString( BASE_PKG,\n        \"TextFileInputDialog.ErrorShowingFixedWizard.DialogMessage\" ), e );\n    }\n  }\n\n  private Vector<TextFileInputFieldInterface> getFields( HadoopFileInputMeta info, List<String> rows ) {\n    Vector<TextFileInputFieldInterface> result = new Vector<>();\n\n    int maxsize = 0;\n    for ( int i = 0; i < rows.size(); i++ ) {\n      int len = rows.get( i ).length();\n      if ( len > maxsize ) {\n        maxsize = len;\n      }\n    }\n\n    int prevEnd = 0;\n    int dummynr = 1;\n\n    for ( int i = 0; i < info.inputFields.length; i++ ) {\n      BaseFileField f = info.inputFields[ i ];\n\n      // See if positions are skipped, if this is the case, add dummy fields...\n      if ( f.getPosition() != prevEnd ) {\n        // gap\n        BaseFileField field = new BaseFileField( \"Dummy\" + dummynr, prevEnd, f.getPosition() - prevEnd );\n        field.setIgnored( true ); // don't include in result by default.\n        result.add( field );\n        dummynr++;\n      }\n\n      BaseFileField field = new BaseFileField( f.getName(), f.getPosition(), f.getLength() );\n      field.setType( 
f.getType() );\n      field.setIgnored( false );\n      field.setFormat( f.getFormat() );\n      field.setPrecision( f.getPrecision() );\n      field.setTrimType( f.getTrimType() );\n      field.setDecimalSymbol( f.getDecimalSymbol() );\n      field.setGroupSymbol( f.getGroupSymbol() );\n      field.setCurrencySymbol( f.getCurrencySymbol() );\n      field.setRepeated( f.isRepeated() );\n      field.setNullString( f.getNullString() );\n\n      result.add( field );\n\n      prevEnd = field.getPosition() + field.getLength();\n    }\n\n    if ( info.inputFields.length == 0 ) {\n      BaseFileField field = new BaseFileField( \"Field1\", 0, maxsize );\n      result.add( field );\n    } else {\n      // Take the last field and see if it reached until the maximum...\n      BaseFileField f = info.inputFields[ info.inputFields.length - 1 ];\n\n      int pos = f.getPosition();\n      int len = f.getLength();\n      if ( pos + len < maxsize ) {\n        // If not, add an extra trailing field!\n        BaseFileField field = new BaseFileField( \"Dummy\" + dummynr, pos + len, maxsize - pos - len );\n        field.setIgnored( true ); // don't include in result by default.\n        result.add( field );\n        dummynr++;\n      }\n    }\n\n    Collections.sort( result );\n\n    return result;\n  }\n\n  @Override\n  public String toString() {\n    return this.getClass().getName();\n  }\n\n  private SelectionAdapter getFileDirectoryListener() {\n\n    return new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        try {\n          // Setup file type filtering\n          String[] fileFilters = null;\n          String[] fileFilterNames = null;\n          if ( !wCompression.getText().equals( \"None\" ) ) {\n            fileFilters = new String[] { \"*.zip;*.gz\", \"*.txt;*.csv\", \"*.csv\", \"*.txt\", \"*\" };\n            fileFilterNames =\n              new String[] { BaseMessages.getString( BASE_PKG, \"System.FileType.ZIPFiles\" ),\n 
               BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.FileType.TextAndCSVFiles\" ),\n                BaseMessages.getString( BASE_PKG, \"System.FileType.CSVFiles\" ),\n                BaseMessages.getString( BASE_PKG, \"System.FileType.TextFiles\" ),\n                BaseMessages.getString( BASE_PKG, \"System.FileType.AllFiles\" ) };\n          } else {\n            fileFilters = new String[] { \"*\", \"*.txt;*.csv\", \"*.csv\", \"*.txt\" };\n            fileFilterNames =\n              new String[] { BaseMessages.getString( BASE_PKG, \"System.FileType.AllFiles\" ),\n                BaseMessages.getString( BASE_PKG, \"TextFileInputDialog.FileType.TextAndCSVFiles\" ),\n                BaseMessages.getString( BASE_PKG, \"System.FileType.CSVFiles\" ),\n                BaseMessages.getString( BASE_PKG, \"System.FileType.TextFiles\" ) };\n          }\n\n          String clusterName = wFilenameList.getActiveTableItem().getText( wFilenameList.getActiveTableColumn() - 1 );\n          String path = wFilenameList.getActiveTableItem().getText( wFilenameList.getActiveTableColumn() );\n\n          if ( clusterName.equals( S3_ENVIRONMENT ) && !path.startsWith( Schemes.S3_SCHEME + \"://\" ) ) {\n            path = Schemes.S3_SCHEME + \"://\";\n          }\n\n          // Get current file\n          FileObject rootFile = null;\n          FileObject initialFile = null;\n          FileObject defaultInitialFile = null;\n\n          boolean isCluster = false;\n          if ( !clusterName.equals( LOCAL_ENVIRONMENT ) && !clusterName.equals( S3_ENVIRONMENT ) ) {\n            if ( Utils.isEmpty( path ) ) {\n              path = \"/\";\n            }\n            NamedCluster namedCluster = namedClusterService.getNamedClusterByName( clusterName, getMetaStore() );\n            if ( namedCluster == null ) {\n              return;\n            }\n            isCluster = true;\n            path = namedCluster.processURLsubstitution( path, getMetaStore(), transMeta );\n         
 }\n\n          boolean resolvedInitialFile = false;\n\n          if ( path != null ) {\n            String fileName = transMeta.environmentSubstitute( path );\n            if ( fileName != null && !fileName.equals( \"\" ) ) {\n              try {\n                initialFile = KettleVFS.getInstance( transMeta.getBowl() ).getFileObject( fileName );\n                resolvedInitialFile = true;\n              } catch ( Exception ex ) {\n                showMessageAndLog( BaseMessages.getString( PKG, \"HadoopFileInputDialog.Connection.Error.title\" ),\n                  BaseMessages.getString( PKG, \"HadoopFileInputDialog.Connection.error\" ), ex.getMessage() );\n                return;\n              }\n              File startFile = new File( System.getProperty( \"user.home\" ) );\n              defaultInitialFile = KettleVFS.getInstance( transMeta.getBowl() )\n                .getFileObject( startFile.getAbsolutePath() );\n              rootFile = initialFile.getFileSystem().getRoot();\n            } else {\n              defaultInitialFile = KettleVFS.getInstance( transMeta.getBowl() )\n                .getFileObject( Spoon.getInstance().getLastFileOpened() );\n            }\n          }\n\n          if ( rootFile == null ) {\n            if ( defaultInitialFile == null ) {\n              return;\n            }\n            rootFile = defaultInitialFile.getFileSystem().getRoot();\n            initialFile = defaultInitialFile;\n          }\n\n          VfsFileChooserDialog fileChooserDialog = Spoon.getInstance().getVfsFileChooserDialog( rootFile, initialFile );\n          fileChooserDialog.defaultInitialFile = defaultInitialFile;\n\n          NamedClusterWidgetImpl namedClusterWidget = null;\n\n          FileObject selectedFile = null;\n\n          if ( clusterName.equals( LOCAL_ENVIRONMENT ) ) {\n            selectedFile =\n              fileChooserDialog.open( shell, new String[] { \"file\" }, \"file\", true, path, fileFilters,\n                fileFilterNames, 
false, VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE_OR_DIRECTORY, false, false );\n          } else if ( clusterName.equals( S3_ENVIRONMENT ) ) {\n            selectedFile =\n              fileChooserDialog.open( shell, new String[] { Schemes.S3_SCHEME }, Schemes.S3_SCHEME, true,\n                path, fileFilters, fileFilterNames, false, VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE_OR_DIRECTORY,\n                false, true );\n          } else {\n            NamedCluster namedCluster = namedClusterService.getNamedClusterByName( clusterName, getMetaStore() );\n            if ( namedCluster != null ) {\n              if ( namedCluster.isMapr() ) {\n                selectedFile =\n                  fileChooserDialog.open( shell, new String[] { Schemes.MAPRFS_SCHEME },\n                    Schemes.MAPRFS_SCHEME, true, path, fileFilters, fileFilterNames, true,\n                    VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE_OR_DIRECTORY, false, false );\n              } else {\n                List<CustomVfsUiPanel> customPanels = fileChooserDialog.getCustomVfsUiPanels();\n                for ( CustomVfsUiPanel panel : customPanels ) {\n                  if ( panel instanceof HadoopVfsFileChooserDialog ) {\n                    HadoopVfsFileChooserDialog hadoopDialog = ( (HadoopVfsFileChooserDialog) panel );\n                    namedClusterWidget = hadoopDialog.getNamedClusterWidget();\n                    namedClusterWidget.initiate();\n                    hadoopDialog.setNamedCluster( clusterName );\n                    hadoopDialog.initializeConnectionPanel( initialFile );\n                  }\n                }\n                if ( resolvedInitialFile ) {\n                  fileChooserDialog.initialFile = initialFile;\n                }\n                selectedFile =\n                  fileChooserDialog.open( shell, new String[] { Schemes.HDFS_SCHEME },\n                    Schemes.HDFS_SCHEME, false, path, fileFilters, fileFilterNames, true,\n                    
VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE_OR_DIRECTORY, false, false );\n              }\n\n            }\n          }\n\n          CustomVfsUiPanel currentPanel = fileChooserDialog.getCurrentPanel();\n          if ( currentPanel instanceof HadoopVfsFileChooserDialog ) {\n            namedClusterWidget = ( (HadoopVfsFileChooserDialog) currentPanel ).getNamedClusterWidget();\n          }\n\n          if ( selectedFile != null ) {\n            String url = selectedFile.getURL().toString();\n            if ( currentPanel != null ) {\n              if ( currentPanel.getVfsSchemeDisplayText().equals( LOCAL_ENVIRONMENT ) ) {\n                wFilenameList.getActiveTableItem()\n                  .setText( wFilenameList.getActiveTableColumn() - 1, LOCAL_ENVIRONMENT );\n              } else if ( currentPanel.getVfsSchemeDisplayText().equals( S3_ENVIRONMENT ) ) {\n                wFilenameList.getActiveTableItem().setText( wFilenameList.getActiveTableColumn() - 1, S3_ENVIRONMENT );\n              } else if ( isCluster ) {\n                url = input.getUrlPath( url );\n                wFilenameList.getActiveTableItem().setText( wFilenameList.getActiveTableColumn() - 1,\n                  clusterName );\n              }\n            }\n\n            wFilenameList.getActiveTableItem().setText( wFilenameList.getActiveTableColumn(), url );\n          }\n        } catch ( KettleFileException ex ) {\n          log.logError( BaseMessages.getString( PKG, \"HadoopFileInputDialog.FileBrowser.KettleFileException\" ) );\n        } catch ( FileSystemException ex ) {\n          log.logError( BaseMessages.getString( PKG, \"HadoopFileInputDialog.FileBrowser.FileSystemException\" ) );\n        }\n      }\n    };\n  }\n\n  protected void setComboValues( ColumnInfo colInfo ) {\n    try {\n      String[] comboValues = { LOCAL_ENVIRONMENT, STATIC_ENVIRONMENT, S3_ENVIRONMENT };\n      String[] namedClusters = namedClusterService.listNames( getMetaStore() ).toArray( new String[ 0 ] );\n      
String[] values = (String[]) ArrayUtils.addAll( comboValues, namedClusters );\n      colInfo.setComboValues( values );\n    } catch ( MetaStoreException e ) {\n      log.logError( e.getMessage() );\n    }\n  }\n\n  private InputStream getInputStream( HadoopFileInputMeta meta, FileInputList textFileList ) throws IOException {\n\n    FileObject fileObject = textFileList.getFile( 0 );\n    InputStream fileInputStream = KettleVFS.getInputStream( fileObject );\n    CompressionProvider provider =\n      CompressionProviderFactory.getInstance().createCompressionProviderInstance( meta.content.fileCompression );\n\n    return provider.createInputStream( fileInputStream );\n  }\n\n  private InputStreamReader getInputStreamReader( HadoopFileInputMeta meta, InputStream inputStream )\n    throws IOException {\n\n    if ( meta.getEncoding() != null && meta.getEncoding().length() > 0 ) {\n      return new InputStreamReader( inputStream, meta.getEncoding() );\n    }\n\n    return new InputStreamReader( inputStream );\n  }\n\n  private BufferedInputStreamReader getBufferedInputStreamReader( HadoopFileInputMeta meta, InputStream inputStream )\n    throws IOException {\n    return new BufferedInputStreamReader( getInputStreamReader( meta, inputStream ) );\n  }\n\n  private void showMessageAndLog( String title, String message, String messageToLog ) {\n    MessageBox box = new MessageBox( shell );\n    box.setText( title ); //$NON-NLS-1$\n    box.setMessage( message );\n    log.logError( messageToLog );\n    box.open();\n  }\n\n  private String getLine( HadoopFileInputMeta meta, FileInputList textFileList )\n    throws IOException, KettleFileException {\n\n    InputStream inputStream = null;\n    BufferedInputStreamReader reader = null;\n\n    inputStream = getInputStream( meta, textFileList );\n    reader = getBufferedInputStreamReader( meta, inputStream );\n\n    EncodingType encodingType = EncodingType.guessEncodingType( reader.getEncoding() );\n    StringBuilder lineStringBuilder = 
new StringBuilder( 256 );\n    String enclosure = StringUtil.substituteHex( meta.content.enclosure );\n    String sLine =\n      TextFileInputUtils.getLine( log, reader, encodingType, meta.getFileFormatTypeNr(), lineStringBuilder, enclosure );\n    inputStream.close();\n\n    return sLine;\n  }\n\n  private class DirectoryBrowserAdapter extends SelectionAdapter {\n    private Text widget;\n\n    /**\n     * Create a new Directory Browser Adapter that reads/sets the text of {@code widget} to the directory chosen.\n     *\n     * @param widget Text widget linked to the VFS browser\n     */\n    public DirectoryBrowserAdapter( Text widget ) {\n      this.widget = widget;\n    }\n\n    @Override\n    public void widgetSelected( SelectionEvent e ) {\n      try {\n        // Get current file\n        FileObject rootFile = null;\n        FileObject initialFile = null;\n        FileObject defaultInitialFile = null;\n\n        if ( widget.getText() != null ) {\n          String fileName = transMeta.environmentSubstitute( widget.getText() );\n\n          if ( fileName != null && !fileName.equals( \"\" ) ) {\n            initialFile = KettleVFS.getInstance( transMeta.getBowl() ).getFileObject( fileName );\n            rootFile = initialFile.getFileSystem().getRoot();\n          } else {\n            defaultInitialFile = KettleVFS.getInstance( transMeta.getBowl() )\n              .getFileObject( Spoon.getInstance().getLastFileOpened() );\n          }\n        }\n\n        defaultInitialFile = KettleVFS.getInstance( transMeta.getBowl() ).getFileObject( \"file:///c:/\" );\n        if ( rootFile == null ) {\n          rootFile = defaultInitialFile.getFileSystem().getRoot();\n          initialFile = defaultInitialFile;\n        }\n\n        VfsFileChooserDialog fileChooserDialog = Spoon.getInstance().getVfsFileChooserDialog( rootFile, initialFile );\n        fileChooserDialog.defaultInitialFile = defaultInitialFile;\n        FileObject selectedFile =\n          
fileChooserDialog.open( shell, null, Schemes.HDFS_SCHEME, false, null, new String[] { \"*.*\" },\n            ALL_FILES_TYPE, VfsFileChooserDialog.VFS_DIALOG_OPEN_DIRECTORY );\n        if ( selectedFile != null ) {\n          if ( !selectedFile.getType().equals( FileType.FOLDER ) ) {\n            selectedFile = selectedFile.getParent();\n          }\n          widget.setText( selectedFile.getURL().toString() );\n        }\n      } catch ( KettleFileException ex ) {\n        log.logError( BaseMessages.getString( PKG, \"HadoopFileInputDialog.FileBrowser.KettleFileException\" ) );\n      } catch ( FileSystemException ex ) {\n        log.logError( BaseMessages.getString( PKG, \"HadoopFileInputDialog.FileBrowser.FileSystemException\" ) );\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/HadoopFileInputMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans;\n\nimport org.apache.commons.lang.Validate;\nimport org.apache.commons.vfs2.FileName;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.annotations.Step;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.fileinput.FileInputList;\nimport org.pentaho.di.core.fileinput.NonAccessibleFileObject;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.injection.InjectionSupported;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.steps.fileinput.text.TextFileInputMeta;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.w3c.dom.Node;\n\nimport java.net.URI;\nimport java.net.URISyntaxException;\nimport java.security.InvalidParameterException;\nimport java.util.HashMap;\nimport java.util.Map;\n\nimport static 
org.pentaho.big.data.kettle.plugins.hdfs.trans.HadoopFileInputDialog.LOCAL_ENVIRONMENT;\nimport static org.pentaho.big.data.kettle.plugins.hdfs.trans.HadoopFileInputDialog.S3_ENVIRONMENT;\nimport static org.pentaho.big.data.kettle.plugins.hdfs.trans.HadoopFileInputDialog.STATIC_ENVIRONMENT;\nimport static org.pentaho.big.data.kettle.plugins.hdfs.vfs.Schemes.NAMED_CLUSTER_SCHEME;\n\n@Step( id = \"HadoopFileInputPlugin\", image = \"HDI.svg\", name = \"HadoopFileInputPlugin.Name\",\n  description = \"HadoopFileInputPlugin.Description\",\n  categoryDescription = \"i18n:org.pentaho.di.trans.step:BaseStep.Category.BigData\",\n  i18nPackageName = \"org.pentaho.di.trans.steps.hadoopfileinput\" )\n@InjectionSupported( localizationPrefix = \"HadoopFileInput.Injection.\", groups = { \"FILENAME_LINES\", \"FIELDS\",\n  \"FILTERS\" } )\npublic class HadoopFileInputMeta extends TextFileInputMeta implements HadoopFileMeta {\n\n  // Written via setVariableSpace() but never read; candidate for removal.\n\n  @SuppressWarnings( \"squid:S1068\" )\n  private VariableSpace variableSpace;\n\n  private Map<String, String> namedClusterURLMapping = null;\n\n  public static final String SOURCE_CONFIGURATION_NAME = \"source_configuration_name\";\n  public static final String LOCAL_SOURCE_FILE = \"LOCAL-SOURCE-FILE-\";\n  public static final String STATIC_SOURCE_FILE = \"STATIC-SOURCE-FILE-\";\n  public static final String S3_SOURCE_FILE = \"S3-SOURCE-FILE-\";\n  public static final String S3_DEST_FILE = \"S3-DEST-FILE-\";\n  private final NamedClusterService namedClusterService;\n  private final HadoopFileSystemLocator hadoopFileSystemLocator;\n  private final boolean fatalErrorOnHdfsNotFound = \"Y\".equalsIgnoreCase(\n    System.getProperty( Const.KETTLE_FATAL_ERROR_ON_HDFS_NOT_FOUND, Const.KETTLE_FATAL_ERROR_ON_HDFS_NOT_FOUND_DEFAULT ) );\n\n  enum EncryptDirection { ENCRYPT, DECRYPT }\n\n  /**\n   * The environment of the selected file/folder\n   */\n  @Injection( name = \"ENVIRONMENT\", group = \"FILENAME_LINES\" )\n  public 
String[] environment = {};\n\n  public HadoopFileInputMeta() {\n    this( NamedClusterManager.getInstance(), null );\n  }\n\n  public HadoopFileInputMeta( NamedClusterService namedClusterService, HadoopFileSystemLocator hadoopFileSystemLocator ) {\n    this.namedClusterService = namedClusterService;\n    this.hadoopFileSystemLocator = hadoopFileSystemLocator;\n    namedClusterURLMapping = new HashMap<>();\n  }\n\n  @Override\n  protected String loadSource( Node filenode, Node filenamenode, int i, IMetaStore metaStore ) {\n    String source_filefolder = XMLHandler.getNodeValue( filenamenode );\n    Node sourceNode = XMLHandler.getSubNodeByNr( filenode, SOURCE_CONFIGURATION_NAME, i );\n    String source = XMLHandler.getNodeValue( sourceNode );\n    try {\n      return source_filefolder == null ? null\n        : loadUrl( encryptDecryptPassword( source_filefolder, EncryptDirection.DECRYPT ), source, metaStore,\n          namedClusterURLMapping );\n    } catch ( Exception ex ) {\n      // Ignore and fall through; a source that cannot be loaded yields null\n    }\n    return null;\n  }\n\n  @Override\n  protected void saveSource( StringBuilder retVal, String source ) {\n    String namedCluster = namedClusterURLMapping.get( source );\n    retVal.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"name\", encryptDecryptPassword( source, EncryptDirection.ENCRYPT ) ) );\n    retVal.append( \"          \" ).append( XMLHandler.addTagValue( SOURCE_CONFIGURATION_NAME, namedCluster ) );\n  }\n\n  // Receiving metaStore because RepositoryProxy.getMetaStore() returns a hard-coded null\n  @Override\n  protected String loadSourceRep( Repository rep, ObjectId id_step, int i, IMetaStore metaStore )\n    throws KettleException {\n    String source_filefolder = rep.getStepAttributeString( id_step, i, \"file_name\" );\n    // Read back as a step attribute to match the saveStepAttribute call in saveSourceRep\n    String ncName = rep.getStepAttributeString( id_step, i, SOURCE_CONFIGURATION_NAME );\n    return loadUrl( encryptDecryptPassword( source_filefolder, EncryptDirection.DECRYPT ), ncName, metaStore,\n      
namedClusterURLMapping );\n  }\n\n  @Override\n  protected void saveSourceRep( Repository rep, ObjectId id_transformation, ObjectId id_step, int i, String fileName )\n    throws KettleException {\n    String namedCluster = namedClusterURLMapping.get( fileName );\n    rep.saveStepAttribute( id_transformation, id_step, i, \"file_name\",\n      encryptDecryptPassword( fileName, EncryptDirection.ENCRYPT ) );\n    rep.saveStepAttribute( id_transformation, id_step, i, SOURCE_CONFIGURATION_NAME, namedCluster );\n  }\n\n  public String loadUrl( String url, String ncName, IMetaStore metastore, Map<String, String> mappings ) {\n    NamedCluster c = namedClusterService.getNamedClusterByName( ncName, metastore );\n    if ( c != null ) {\n      url = c.processURLsubstitution( url, metastore, new Variables() );\n    }\n    if ( !Utils.isEmpty( ncName ) && !Utils.isEmpty( url ) && mappings != null ) {\n      mappings.put( url, ncName );\n      // in addition to the url as-is, add the public uri string version of the url (hidden password) to the map,\n      // since that is the value that the data-lineage analyzer will have access to for cluster lookup\n      try {\n        mappings.put( getFriendlyUri( url ).toString(), ncName );\n      } catch ( final Exception e ) {\n        // no-op\n      }\n    }\n    return url;\n  }\n\n  public void setNamedClusterURLMapping( Map<String, String> mappings ) {\n    this.namedClusterURLMapping = mappings;\n  }\n\n  public Map<String, String> getNamedClusterURLMapping() {\n    return this.namedClusterURLMapping;\n  }\n\n  @Override\n  public String getClusterName( final String url ) {\n    String clusterName = null;\n    try {\n      URI friendlyUri = getFriendlyUri( url );\n      clusterName = getClusterNameBy( friendlyUri.toString() );\n    } catch ( final URISyntaxException e ) {\n      // no-op\n    }\n    return clusterName;\n  }\n\n  private URI getFriendlyUri( String url ) throws URISyntaxException {\n    URI origUri = new URI( url );\n 
   return new URI( origUri.getScheme(), null, origUri.getHost(), origUri.getPort(),\n      origUri.getPath(), origUri.getQuery(), origUri.getFragment() );\n  }\n\n  public String getClusterNameBy( String url ) {\n    return this.namedClusterURLMapping.get( url );\n  }\n\n  public String getUrlPath( String incomingURL ) {\n    String path = null;\n    FileName fileName = getUrlFileName( incomingURL );\n    if ( fileName != null ) {\n      String root = fileName.getRootURI();\n      path = incomingURL.substring( root.length() - 1 );\n    }\n    return path;\n  }\n\n  public void setVariableSpace( VariableSpace variableSpace ) {\n    this.variableSpace = variableSpace;\n  }\n\n  public NamedClusterService getNamedClusterService() {\n    return namedClusterService;\n  }\n\n  @Override\n  public FileInputList getFileInputList( Bowl bowl, VariableSpace space ) {\n    inputFiles.normalizeAllocation( inputFiles.fileName.length );\n    for ( int i = 0; i < environment.length; i++ ) {\n      if ( inputFiles.fileName[ i ].contains( \"://\" ) ) {\n        continue;\n      }\n      String sourceNc = environment[ i ];\n      sourceNc = sourceNc.equals( LOCAL_ENVIRONMENT ) ? HadoopFileInputMeta.LOCAL_SOURCE_FILE + i : sourceNc;\n      sourceNc = sourceNc.equals( STATIC_ENVIRONMENT ) ? HadoopFileInputMeta.STATIC_SOURCE_FILE + i : sourceNc;\n      sourceNc = sourceNc.equals( S3_ENVIRONMENT ) ? 
HadoopFileInputMeta.S3_SOURCE_FILE + i : sourceNc;\n      String source = inputFiles.fileName[ i ];\n      if ( !Utils.isEmpty( source ) ) {\n        inputFiles.fileName[ i ] =\n          loadUrl( source, sourceNc, getParentStepMeta().getParentTransMeta().getMetaStore(), null );\n      } else {\n        inputFiles.fileName[ i ] = \"\";\n      }\n    }\n    FileInputList returnList = createFileList( bowl, space );\n    for ( int i = 0; i < inputFiles.fileName.length; i++ ) {\n      if ( !canAccessHdfs( inputFiles.fileName[ i ], fatalErrorOnHdfsNotFound ) ) {\n        returnList.addNonAccessibleFile( new NonAccessibleFileObject( inputFiles.fileName[ i ] ) );\n      }\n    }\n    return returnList;\n  }\n\n  /**\n   * If the KETTLE_FATAL_ERROR_ON_HDFS_NOT_FOUND property is set to Y, return false if we can find a named cluster that should\n   * be used to access the file AND there is no corresponding HDFS file system for that named cluster.\n   *\n   * @param fileName\n   * @return false if the filename should be accessed via a named cluster and HDFS and it cannot and the KETTLE_FATAL_ERROR_ON_HDFS_NOT_FOUND\n   * property is Y\n   */\n  protected boolean canAccessHdfs( String fileName, boolean checkHdfs ) {\n    if ( checkHdfs ) {\n      try {\n        URI fileUri = new URI( fileName );\n        NamedCluster c = namedClusterService.getNamedClusterByHost( fileUri.getHost(), getParentStepMeta().getParentTransMeta().getMetaStore() );\n        if ( null == c && NAMED_CLUSTER_SCHEME.equalsIgnoreCase( fileUri.getScheme() ) ) {\n          c = namedClusterService.getNamedClusterByName( fileUri.getHost(), getParentStepMeta().getParentTransMeta().getMetaStore() );\n        }\n        if ( null != c && null == hadoopFileSystemLocator.getHadoopFilesystem( c, fileUri ) ) {\n          return false;\n        }\n      } catch ( URISyntaxException | ClusterInitializationException e ) {\n        return false;\n      }\n    }\n    return true;\n  }\n\n  FileInputList createFileList( 
VariableSpace space ) {\n    return createFileList( null, space );\n  }\n\n  /**\n   * Created for test purposes\n   */\n  FileInputList createFileList( Bowl bowl, VariableSpace space ) {\n    return FileInputList.createFileList( bowl, space, inputFiles.fileName, inputFiles.fileMask, inputFiles.excludeFileMask,\n      inputFiles.fileRequired, inputFiles.includeSubFolderBoolean() );\n  }\n\n  protected String encryptDecryptPassword( String source, EncryptDirection direction ) {\n    Validate.notNull( direction, \"'direction' must not be null\" );\n    try {\n      URI uri = new URI( source );\n      String userInfo = uri.getUserInfo();\n      if ( userInfo != null ) {\n        String[] userInfoArray = userInfo.split( \":\", 2 );\n        if ( userInfoArray.length < 2 ) {\n          return source; // no password present\n        }\n        String password = userInfoArray[ 1 ];\n        String processedPassword;\n        switch ( direction ) {\n          case ENCRYPT:\n            processedPassword = Encr.encryptPasswordIfNotUsingVariables( password );\n            break;\n          case DECRYPT:\n            processedPassword = Encr.decryptPasswordOptionallyEncrypted( password );\n            break;\n          default:\n            throw new InvalidParameterException( \"direction must be 'ENCRYPT' or 'DECRYPT'\" );\n        }\n        URI encryptedUri =\n          new URI( uri.getScheme(), userInfoArray[ 0 ] + \":\" + processedPassword, uri.getHost(), uri.getPort(),\n            uri.getPath(), uri.getQuery(), uri.getFragment() );\n        return encryptedUri.toString();\n      }\n    } catch ( URISyntaxException e ) {\n      return source; // not parseable as a URI; return the source unchanged\n    }\n    return source; // no user info in the URI; return the source unchanged\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/HadoopFileMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans;\n\nimport org.apache.commons.vfs2.FileName;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.provider.URLFileName;\nimport org.pentaho.di.core.vfs.KettleVFS;\n\n/**\n * Common functionality for a hadoop based {@link org.pentaho.di.trans.steps.file.BaseFileMeta}.\n */\npublic interface HadoopFileMeta {\n\n  default String getUrlHostName( final String incomingURL ) {\n    String hostName = null;\n    final FileName fileName = getUrlFileName( incomingURL );\n    if ( fileName instanceof URLFileName ) {\n      hostName = ( (URLFileName) fileName ).getHostName();\n    }\n    return hostName;\n  }\n\n  default FileName getUrlFileName( final String incomingURL ) {\n    FileName fileName = null;\n    try {\n      final String noVariablesURL = incomingURL.replaceAll( \"[${}]\", \"/\" );\n      fileName = KettleVFS.getInstance().getFileSystemManager().resolveURI( noVariablesURL );\n    } catch ( FileSystemException e ) {\n      // no-op\n    }\n    return fileName;\n  }\n\n  String getUrlPath( final String incomingURL );\n\n  String getClusterName( final String incomingURL );\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/HadoopFileOutputDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans;\n\nimport org.apache.commons.vfs2.FileName;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.CCombo;\nimport org.eclipse.swt.custom.CTabFolder;\nimport org.eclipse.swt.custom.CTabItem;\nimport org.eclipse.swt.events.FocusListener;\nimport org.eclipse.swt.events.ModifyEvent;\nimport org.eclipse.swt.events.ModifyListener;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.events.ShellAdapter;\nimport org.eclipse.swt.events.ShellEvent;\nimport org.eclipse.swt.graphics.Cursor;\nimport org.eclipse.swt.graphics.Point;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Combo;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Event;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.TableItem;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport 
org.pentaho.big.data.kettle.plugins.hdfs.vfs.HadoopVfsFileChooserDialog;\nimport org.pentaho.big.data.kettle.plugins.hdfs.vfs.Schemes;\nimport org.pentaho.big.data.plugins.common.ui.NamedClusterWidgetImpl;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.Props;\nimport org.pentaho.di.core.compress.CompressionProviderFactory;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaBase;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDialogInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.steps.textfileoutput.TextFileField;\nimport org.pentaho.di.trans.steps.textfileoutput.TextFileOutputMeta;\nimport org.pentaho.di.ui.core.dialog.EnterSelectionDialog;\nimport org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.di.ui.core.widget.ColumnInfo;\nimport org.pentaho.di.ui.core.widget.ComboVar;\nimport org.pentaho.di.ui.core.widget.TableView;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.trans.step.BaseStepDialog;\nimport org.pentaho.di.ui.trans.step.TableItemInsertListener;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.vfs.ui.CustomVfsUiPanel;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\nimport java.io.File;\nimport java.nio.charset.Charset;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Set;\n\n@PluginDialog( id = \"HadoopFileOutputPlugin\", image = \"HDO.svg\", pluginType = 
PluginDialog.PluginType.STEP,\n        documentationUrl = \"pdi-transformation-steps-reference-overview/hadoop-file-output-cp-main-page\" )\npublic class HadoopFileOutputDialog extends BaseStepDialog implements StepDialogInterface {\n  private static Class<?> BASE_PKG = TextFileOutputMeta.class; // for i18n purposes, needed by Translator2!! $NON-NLS-1$\n  private static Class<?> PKG = HadoopFileOutputMeta.class;\n\n  private CTabFolder wTabFolder;\n  private FormData fdTabFolder;\n\n  private CTabItem wFileTab, wContentTab, wFieldsTab;\n\n  private FormData fdFileComp, fdContentComp, fdFieldsComp;\n\n  private Label wlFilename;\n  private Button wbFilename;\n  private TextVar wFilename;\n  private FormData fdlFilename, fdbFilename, fdFilename;\n\n  private Label wlExtension;\n  private TextVar wExtension;\n  private FormData fdlExtension, fdExtension;\n\n  private Label wlAddStepnr;\n  private Button wAddStepnr;\n  private FormData fdlAddStepnr, fdAddStepnr;\n\n  private Label wlAddPartnr;\n  private Button wAddPartnr;\n  private FormData fdlAddPartnr, fdAddPartnr;\n\n  private Label wlAddDate;\n  private Button wAddDate;\n  private FormData fdlAddDate, fdAddDate;\n\n  private Label wlAddTime;\n  private Button wAddTime;\n  private FormData fdlAddTime, fdAddTime;\n\n  private Button wbShowFiles;\n  private FormData fdbShowFiles;\n\n  /* Additional fields */\n  private Label wlFileNameInField;\n  private Button wFileNameInField;\n  private FormData fdlFileNameInField, fdFileNameInField;\n\n  private Label wlFileNameField;\n  private ComboVar wFileNameField;\n  private FormData fdlFileNameField, fdFileNameField;\n  /* END */\n\n  private Label wlAppend;\n  private Button wAppend;\n  private FormData fdlAppend, fdAppend;\n\n  private Label wlSeparator;\n  private Button wbSeparator;\n  private TextVar wSeparator;\n  private FormData fdlSeparator, fdbSeparator, fdSeparator;\n\n  private Label wlEnclosure;\n  private TextVar wEnclosure;\n  private FormData fdlEnclosure, 
fdEnclosure;\n\n  private Label wlEndedLine;\n  private Text wEndedLine;\n  private FormData fdlEndedLine, fdEndedLine;\n\n  private Label wlEnclForced;\n  private Button wEnclForced;\n  private FormData fdlEnclForced, fdEnclForced;\n\n  private Label wlHeader;\n  private Button wHeader;\n  private FormData fdlHeader, fdHeader;\n\n  private Label wlFooter;\n  private Button wFooter;\n  private FormData fdlFooter, fdFooter;\n\n  private Label wlFormat;\n  private CCombo wFormat;\n  private FormData fdlFormat, fdFormat;\n\n  private Label wlCompression;\n  private CCombo wCompression;\n  private FormData fdlCompression, fdCompression;\n\n  private Label wlEncoding;\n  private CCombo wEncoding;\n  private FormData fdlEncoding, fdEncoding;\n\n  private Label wlPad;\n  private Button wPad;\n  private FormData fdlPad, fdPad;\n\n  private Label wlFastDump;\n  private Button wFastDump;\n  private FormData fdlFastDump, fdFastDump;\n\n  private Label wlSplitEvery;\n  private Text wSplitEvery;\n  private FormData fdlSplitEvery, fdSplitEvery;\n\n  private TableView wFields;\n  private FormData fdFields;\n\n  private HadoopFileOutputMeta input;\n\n  private Button wMinWidth;\n  private Listener lsMinWidth;\n  private boolean gotEncodings = false;\n\n  private Label wlAddToResult;\n  private Button wAddToResult;\n  private FormData fdlAddToResult, fdAddToResult;\n\n  private Label wlDoNotOpenNewFileInit;\n  private Button wDoNotOpenNewFileInit;\n  private FormData fdlDoNotOpenNewFileInit, fdDoNotOpenNewFileInit;\n\n  private Label wlDateTimeFormat;\n  private CCombo wDateTimeFormat;\n  private FormData fdlDateTimeFormat, fdDateTimeFormat;\n\n  private Label wlSpecifyFormat;\n  private Button wSpecifyFormat;\n  private FormData fdlSpecifyFormat, fdSpecifyFormat;\n\n  private Label wlCreateParentFolder;\n  private Button wCreateParentFolder;\n  private FormData fdlCreateParentFolder, fdCreateParentFolder;\n\n  private ColumnInfo[] colinf;\n  private NamedClusterWidgetImpl 
namedClusterWidget;\n\n  private Map<String, Integer> inputFields;\n\n  private boolean gotPreviousFields = false;\n\n  private final NamedClusterService namedClusterService;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n\n  public HadoopFileOutputDialog( Shell parent, Object in, TransMeta transMeta, String sname ) {\n    super( parent, (BaseStepMeta) in, transMeta, sname );\n    input = (HadoopFileOutputMeta) in;\n    namedClusterService = input.getNamedClusterService();\n    runtimeTestActionService = input.getRuntimeTestActionService();\n    runtimeTester = input.getRuntimeTester();\n    inputFields = new HashMap<String, Integer>();\n  }\n\n  public String open() {\n    Shell parent = getParent();\n    Display display = parent.getDisplay();\n\n    shell = new Shell( parent, SWT.DIALOG_TRIM | SWT.RESIZE | SWT.MAX | SWT.MIN );\n    props.setLook( shell );\n    setShellImage( shell, input );\n\n    ModifyListener lsMod = new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        input.setChanged();\n      }\n    };\n    changed = input.hasChanged();\n\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = Const.FORM_MARGIN;\n    formLayout.marginHeight = Const.FORM_MARGIN;\n\n    shell.setLayout( formLayout );\n    shell.setText( BaseMessages.getString( PKG, \"HadoopFileOutputDialog.DialogTitle\" ) );\n\n    int middle = props.getMiddlePct();\n    int margin = Const.MARGIN;\n\n    // Stepname line\n    wlStepname = new Label( shell, SWT.RIGHT );\n    wlStepname.setText( BaseMessages.getString( BASE_PKG, \"System.Label.StepName\" ) );\n    props.setLook( wlStepname );\n    fdlStepname = new FormData();\n    fdlStepname.left = new FormAttachment( 0, 0 );\n    fdlStepname.top = new FormAttachment( 0, margin );\n    fdlStepname.right = new FormAttachment( middle, -margin );\n    wlStepname.setLayoutData( fdlStepname );\n    wStepname = new Text( shell, 
SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wStepname.setText( stepname );\n    props.setLook( wStepname );\n    wStepname.addModifyListener( lsMod );\n    fdStepname = new FormData();\n    fdStepname.left = new FormAttachment( middle, 0 );\n    fdStepname.top = new FormAttachment( 0, margin );\n    fdStepname.right = new FormAttachment( 100, 0 );\n    wStepname.setLayoutData( fdStepname );\n\n    wTabFolder = new CTabFolder( shell, SWT.BORDER );\n    props.setLook( wTabFolder, Props.WIDGET_STYLE_TAB );\n    wTabFolder.setSimple( false );\n\n    // ////////////////////////\n    // START OF FILE TAB///\n    // /\n    wFileTab = new CTabItem( wTabFolder, SWT.NONE );\n    wFileTab.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.FileTab.TabTitle\" ) );\n\n    Composite wFileComp = new Composite( wTabFolder, SWT.NONE );\n    props.setLook( wFileComp );\n\n    FormLayout fileLayout = new FormLayout();\n    fileLayout.marginWidth = 3;\n    fileLayout.marginHeight = 3;\n    wFileComp.setLayout( fileLayout );\n\n    namedClusterWidget = new NamedClusterWidgetImpl( wFileComp, true, namedClusterService, runtimeTestActionService, runtimeTester, false );\n    namedClusterWidget.initiate();\n    props.setLook( namedClusterWidget );\n    FormData fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    fd.left = new FormAttachment( 0, 235 );\n    namedClusterWidget.setLayoutData( fd );\n\n    namedClusterWidget.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent evt ) {\n        String ncName = ( (Combo) evt.getSource() ).getText();\n        NamedCluster nc = namedClusterService.getNamedClusterByName( ncName, getMetaStore() );\n        if ( nc != null ) {\n          HadoopFileOutputMeta meta = (HadoopFileOutputMeta) input;\n          meta.setSourceConfigurationName( nc.getName() );\n        }\n      }\n    } );\n\n    // Filename line\n    wlFilename = new Label( 
wFileComp, SWT.RIGHT );\n    wlFilename.setText( BaseMessages.getString( PKG, \"HadoopFileOutputDialog.Filename.Label\" ) );\n    props.setLook( wlFilename );\n    fdlFilename = new FormData();\n    fdlFilename.left = new FormAttachment( 0, 0 );\n    fdlFilename.top = new FormAttachment( namedClusterWidget, margin );\n    fdlFilename.right = new FormAttachment( middle, -margin );\n    wlFilename.setLayoutData( fdlFilename );\n\n    wbFilename = new Button( wFileComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( wbFilename );\n    wbFilename.setText( BaseMessages.getString( BASE_PKG, \"System.Button.Browse\" ) );\n    fdbFilename = new FormData();\n    fdbFilename.right = new FormAttachment( 100, 0 );\n    fdbFilename.top = new FormAttachment( namedClusterWidget, 0 );\n    wbFilename.setLayoutData( fdbFilename );\n\n    wFilename = new TextVar( transMeta, wFileComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wFilename );\n    wFilename.addModifyListener( lsMod );\n    fdFilename = new FormData();\n    fdFilename.left = new FormAttachment( middle, 0 );\n    fdFilename.top = new FormAttachment( namedClusterWidget, margin );\n    fdFilename.right = new FormAttachment( wbFilename, -margin );\n    wFilename.setLayoutData( fdFilename );\n\n    // Create Parent Folder\n    wlCreateParentFolder = new Label( wFileComp, SWT.RIGHT );\n    wlCreateParentFolder.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.CreateParentFolder.Label\" ) );\n    props.setLook( wlCreateParentFolder );\n    fdlCreateParentFolder = new FormData();\n    fdlCreateParentFolder.left = new FormAttachment( 0, 0 );\n    fdlCreateParentFolder.top = new FormAttachment( wFilename, margin );\n    fdlCreateParentFolder.right = new FormAttachment( middle, -margin );\n    wlCreateParentFolder.setLayoutData( fdlCreateParentFolder );\n    wCreateParentFolder = new Button( wFileComp, SWT.CHECK );\n    wCreateParentFolder.setToolTipText( BaseMessages.getString( BASE_PKG,\n        
\"TextFileOutputDialog.CreateParentFolder.Tooltip\" ) );\n    props.setLook( wCreateParentFolder );\n    fdCreateParentFolder = new FormData();\n    fdCreateParentFolder.left = new FormAttachment( middle, 0 );\n    fdCreateParentFolder.top = new FormAttachment( wFilename, margin );\n    fdCreateParentFolder.right = new FormAttachment( 100, 0 );\n    wCreateParentFolder.setLayoutData( fdCreateParentFolder );\n    wCreateParentFolder.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        input.setChanged();\n      }\n    } );\n\n    // Open new File at Init\n    wlDoNotOpenNewFileInit = new Label( wFileComp, SWT.RIGHT );\n    wlDoNotOpenNewFileInit.setText( BaseMessages\n        .getString( BASE_PKG, \"TextFileOutputDialog.DoNotOpenNewFileInit.Label\" ) );\n    props.setLook( wlDoNotOpenNewFileInit );\n    fdlDoNotOpenNewFileInit = new FormData();\n    fdlDoNotOpenNewFileInit.left = new FormAttachment( 0, 0 );\n    fdlDoNotOpenNewFileInit.top = new FormAttachment( wCreateParentFolder, margin );\n    fdlDoNotOpenNewFileInit.right = new FormAttachment( middle, -margin );\n    wlDoNotOpenNewFileInit.setLayoutData( fdlDoNotOpenNewFileInit );\n    wDoNotOpenNewFileInit = new Button( wFileComp, SWT.CHECK );\n    wDoNotOpenNewFileInit.setToolTipText( BaseMessages.getString( BASE_PKG,\n        \"TextFileOutputDialog.DoNotOpenNewFileInit.Tooltip\" ) );\n    props.setLook( wDoNotOpenNewFileInit );\n    fdDoNotOpenNewFileInit = new FormData();\n    fdDoNotOpenNewFileInit.left = new FormAttachment( middle, 0 );\n    fdDoNotOpenNewFileInit.top = new FormAttachment( wCreateParentFolder, margin );\n    fdDoNotOpenNewFileInit.right = new FormAttachment( 100, 0 );\n    wDoNotOpenNewFileInit.setLayoutData( fdDoNotOpenNewFileInit );\n    wDoNotOpenNewFileInit.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        input.setChanged();\n      }\n    } );\n\n    /* next Lines 
*/\n    // FileNameInField line\n    wlFileNameInField = new Label( wFileComp, SWT.RIGHT );\n    wlFileNameInField.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.FileNameInField.Label\" ) );\n    props.setLook( wlFileNameInField );\n    fdlFileNameInField = new FormData();\n    fdlFileNameInField.left = new FormAttachment( 0, 0 );\n    fdlFileNameInField.top = new FormAttachment( wDoNotOpenNewFileInit, margin );\n    fdlFileNameInField.right = new FormAttachment( middle, -margin );\n    wlFileNameInField.setLayoutData( fdlFileNameInField );\n    wFileNameInField = new Button( wFileComp, SWT.CHECK );\n    props.setLook( wFileNameInField );\n    fdFileNameInField = new FormData();\n    fdFileNameInField.left = new FormAttachment( middle, 0 );\n    fdFileNameInField.top = new FormAttachment( wDoNotOpenNewFileInit, margin );\n    fdFileNameInField.right = new FormAttachment( 100, 0 );\n    wFileNameInField.setLayoutData( fdFileNameInField );\n    wFileNameInField.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        input.setChanged();\n        activeFileNameField();\n      }\n    } );\n\n    // FileNameField Line\n    wlFileNameField = new Label( wFileComp, SWT.RIGHT );\n    wlFileNameField.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.FileNameField.Label\" ) ); //$NON-NLS-1$\n    props.setLook( wlFileNameField );\n    fdlFileNameField = new FormData();\n    fdlFileNameField.left = new FormAttachment( 0, 0 );\n    fdlFileNameField.right = new FormAttachment( middle, -margin );\n    fdlFileNameField.top = new FormAttachment( wFileNameInField, margin );\n    wlFileNameField.setLayoutData( fdlFileNameField );\n\n    wFileNameField = new ComboVar( transMeta, wFileComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wFileNameField );\n    wFileNameField.addModifyListener( lsMod );\n    fdFileNameField = new FormData();\n    fdFileNameField.left = new FormAttachment( 
middle, 0 );\n    fdFileNameField.top = new FormAttachment( wFileNameInField, margin );\n    fdFileNameField.right = new FormAttachment( 100, 0 );\n    wFileNameField.setLayoutData( fdFileNameField );\n    wFileNameField.setEnabled( false );\n    wFileNameField.addFocusListener( new FocusListener() {\n      public void focusLost( org.eclipse.swt.events.FocusEvent e ) {\n      }\n\n      public void focusGained( org.eclipse.swt.events.FocusEvent e ) {\n        Cursor busy = new Cursor( shell.getDisplay(), SWT.CURSOR_WAIT );\n        shell.setCursor( busy );\n        getFields();\n        shell.setCursor( null );\n        busy.dispose();\n      }\n    } );\n    /* End */\n\n    // Extension line\n    wlExtension = new Label( wFileComp, SWT.RIGHT );\n    wlExtension.setText( BaseMessages.getString( BASE_PKG, \"System.Label.Extension\" ) );\n    props.setLook( wlExtension );\n    fdlExtension = new FormData();\n    fdlExtension.left = new FormAttachment( 0, 0 );\n    fdlExtension.top = new FormAttachment( wFileNameField, margin );\n    fdlExtension.right = new FormAttachment( middle, -margin );\n    wlExtension.setLayoutData( fdlExtension );\n    wExtension = new TextVar( transMeta, wFileComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wExtension.setText( \"\" );\n    props.setLook( wExtension );\n    wExtension.addModifyListener( lsMod );\n    fdExtension = new FormData();\n    fdExtension.left = new FormAttachment( middle, 0 );\n    fdExtension.top = new FormAttachment( wFileNameField, margin );\n    fdExtension.right = new FormAttachment( 100, 0 );\n    wExtension.setLayoutData( fdExtension );\n\n    // Create multi-part file?\n    wlAddStepnr = new Label( wFileComp, SWT.RIGHT );\n    wlAddStepnr.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.AddStepnr.Label\" ) );\n    props.setLook( wlAddStepnr );\n    fdlAddStepnr = new FormData();\n    fdlAddStepnr.left = new FormAttachment( 0, 0 );\n    fdlAddStepnr.top = new FormAttachment( wExtension, 
margin );\n    fdlAddStepnr.right = new FormAttachment( middle, -margin );\n    wlAddStepnr.setLayoutData( fdlAddStepnr );\n    wAddStepnr = new Button( wFileComp, SWT.CHECK );\n    props.setLook( wAddStepnr );\n    fdAddStepnr = new FormData();\n    fdAddStepnr.left = new FormAttachment( middle, 0 );\n    fdAddStepnr.top = new FormAttachment( wExtension, margin );\n    fdAddStepnr.right = new FormAttachment( 100, 0 );\n    wAddStepnr.setLayoutData( fdAddStepnr );\n    wAddStepnr.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        input.setChanged();\n      }\n    } );\n\n    // Include partition number in filename?\n    wlAddPartnr = new Label( wFileComp, SWT.RIGHT );\n    wlAddPartnr.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.AddPartnr.Label\" ) );\n    props.setLook( wlAddPartnr );\n    fdlAddPartnr = new FormData();\n    fdlAddPartnr.left = new FormAttachment( 0, 0 );\n    fdlAddPartnr.top = new FormAttachment( wAddStepnr, margin );\n    fdlAddPartnr.right = new FormAttachment( middle, -margin );\n    wlAddPartnr.setLayoutData( fdlAddPartnr );\n    wAddPartnr = new Button( wFileComp, SWT.CHECK );\n    props.setLook( wAddPartnr );\n    fdAddPartnr = new FormData();\n    fdAddPartnr.left = new FormAttachment( middle, 0 );\n    fdAddPartnr.top = new FormAttachment( wAddStepnr, margin );\n    fdAddPartnr.right = new FormAttachment( 100, 0 );\n    wAddPartnr.setLayoutData( fdAddPartnr );\n    wAddPartnr.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        input.setChanged();\n      }\n    } );\n\n    // Include date in filename?\n    wlAddDate = new Label( wFileComp, SWT.RIGHT );\n    wlAddDate.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.AddDate.Label\" ) );\n    props.setLook( wlAddDate );\n    fdlAddDate = new FormData();\n    fdlAddDate.left = new FormAttachment( 0, 0 );\n    fdlAddDate.top = new FormAttachment( 
wAddPartnr, margin );\n    fdlAddDate.right = new FormAttachment( middle, -margin );\n    wlAddDate.setLayoutData( fdlAddDate );\n    wAddDate = new Button( wFileComp, SWT.CHECK );\n    props.setLook( wAddDate );\n    fdAddDate = new FormData();\n    fdAddDate.left = new FormAttachment( middle, 0 );\n    fdAddDate.top = new FormAttachment( wAddPartnr, margin );\n    fdAddDate.right = new FormAttachment( 100, 0 );\n    wAddDate.setLayoutData( fdAddDate );\n    wAddDate.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        input.setChanged();\n      }\n    } );\n\n    // Include time in filename?\n    wlAddTime = new Label( wFileComp, SWT.RIGHT );\n    wlAddTime.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.AddTime.Label\" ) );\n    props.setLook( wlAddTime );\n    fdlAddTime = new FormData();\n    fdlAddTime.left = new FormAttachment( 0, 0 );\n    fdlAddTime.top = new FormAttachment( wAddDate, margin );\n    fdlAddTime.right = new FormAttachment( middle, -margin );\n    wlAddTime.setLayoutData( fdlAddTime );\n    wAddTime = new Button( wFileComp, SWT.CHECK );\n    props.setLook( wAddTime );\n    fdAddTime = new FormData();\n    fdAddTime.left = new FormAttachment( middle, 0 );\n    fdAddTime.top = new FormAttachment( wAddDate, margin );\n    fdAddTime.right = new FormAttachment( 100, 0 );\n    wAddTime.setLayoutData( fdAddTime );\n    wAddTime.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        input.setChanged();\n      }\n    } );\n\n    // Specify date time format?\n    wlSpecifyFormat = new Label( wFileComp, SWT.RIGHT );\n    wlSpecifyFormat.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.SpecifyFormat.Label\" ) );\n    props.setLook( wlSpecifyFormat );\n    fdlSpecifyFormat = new FormData();\n    fdlSpecifyFormat.left = new 
FormAttachment( 0, 0 );\n    fdlSpecifyFormat.top = new FormAttachment( wAddTime, margin );\n    fdlSpecifyFormat.right = new FormAttachment( middle, -margin );\n    wlSpecifyFormat.setLayoutData( fdlSpecifyFormat );\n    wSpecifyFormat = new Button( wFileComp, SWT.CHECK );\n    props.setLook( wSpecifyFormat );\n    wSpecifyFormat.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.SpecifyFormat.Tooltip\" ) );\n    fdSpecifyFormat = new FormData();\n    fdSpecifyFormat.left = new FormAttachment( middle, 0 );\n    fdSpecifyFormat.top = new FormAttachment( wAddTime, margin );\n    fdSpecifyFormat.right = new FormAttachment( 100, 0 );\n    wSpecifyFormat.setLayoutData( fdSpecifyFormat );\n    wSpecifyFormat.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        input.setChanged();\n        setDateTimeFormat();\n      }\n    } );\n\n    // DateTimeFormat\n    wlDateTimeFormat = new Label( wFileComp, SWT.RIGHT );\n    wlDateTimeFormat.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.DateTimeFormat.Label\" ) );\n    props.setLook( wlDateTimeFormat );\n    fdlDateTimeFormat = new FormData();\n    fdlDateTimeFormat.left = new FormAttachment( 0, 0 );\n    fdlDateTimeFormat.top = new FormAttachment( wSpecifyFormat, margin );\n    fdlDateTimeFormat.right = new FormAttachment( middle, -margin );\n    wlDateTimeFormat.setLayoutData( fdlDateTimeFormat );\n    wDateTimeFormat = new CCombo( wFileComp, SWT.BORDER | SWT.READ_ONLY );\n    wDateTimeFormat.setEditable( true );\n    props.setLook( wDateTimeFormat );\n    wDateTimeFormat.addModifyListener( lsMod );\n    fdDateTimeFormat = new FormData();\n    fdDateTimeFormat.left = new FormAttachment( middle, 0 );\n    fdDateTimeFormat.top = new FormAttachment( wSpecifyFormat, margin );\n    fdDateTimeFormat.right = new FormAttachment( 100, 0 );\n    wDateTimeFormat.setLayoutData( fdDateTimeFormat );\n    String[] dates = 
Const.getDateFormats();\n    fillWithSupportedDateFormats( wDateTimeFormat, dates );\n    wbShowFiles = new Button( wFileComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( wbShowFiles );\n    wbShowFiles.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.ShowFiles.Button\" ) );\n    fdbShowFiles = new FormData();\n    fdbShowFiles.left = new FormAttachment( middle, 0 );\n    fdbShowFiles.top = new FormAttachment( wDateTimeFormat, margin * 2 );\n    wbShowFiles.setLayoutData( fdbShowFiles );\n    wbShowFiles.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        HadoopFileOutputMeta tfoi = new HadoopFileOutputMeta( namedClusterService, runtimeTestActionService,\n          runtimeTester );\n        getInfo( tfoi );\n        String[] files = tfoi.getFiles( transMeta );\n        if ( files != null && files.length > 0 ) {\n          EnterSelectionDialog esd =\n            new EnterSelectionDialog( shell, files, BaseMessages.getString( BASE_PKG,\n              \"TextFileOutputDialog.SelectOutputFiles.DialogTitle\" ), BaseMessages.getString( BASE_PKG,\n                \"TextFileOutputDialog.SelectOutputFiles.DialogMessage\" ) );\n          esd.setViewOnly();\n          esd.open();\n        } else {\n          MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n          mb.setMessage( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.NoFilesFound.DialogMessage\" ) );\n          mb.setText( BaseMessages.getString( BASE_PKG, \"System.Dialog.Error.Title\" ) );\n          mb.open();\n        }\n      }\n    } );\n\n    // Add File to the result files name\n    wlAddToResult = new Label( wFileComp, SWT.RIGHT );\n    wlAddToResult.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.AddFileToResult.Label\" ) );\n    props.setLook( wlAddToResult );\n    fdlAddToResult = new FormData();\n    fdlAddToResult.left = new FormAttachment( 0, 0 );\n    fdlAddToResult.top = new 
FormAttachment( wbShowFiles, 2 * margin );\n    fdlAddToResult.right = new FormAttachment( middle, -margin );\n    wlAddToResult.setLayoutData( fdlAddToResult );\n    wAddToResult = new Button( wFileComp, SWT.CHECK );\n    wAddToResult.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.AddFileToResult.Tooltip\" ) );\n    props.setLook( wAddToResult );\n    fdAddToResult = new FormData();\n    fdAddToResult.left = new FormAttachment( middle, 0 );\n    fdAddToResult.top = new FormAttachment( wbShowFiles, 2 * margin );\n    fdAddToResult.right = new FormAttachment( 100, 0 );\n    wAddToResult.setLayoutData( fdAddToResult );\n    SelectionAdapter lsSelR = new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent arg0 ) {\n        input.setChanged();\n      }\n    };\n    wAddToResult.addSelectionListener( lsSelR );\n\n    fdFileComp = new FormData();\n    fdFileComp.left = new FormAttachment( 0, 0 );\n    fdFileComp.top = new FormAttachment( 0, 0 );\n    fdFileComp.right = new FormAttachment( 100, 0 );\n    fdFileComp.bottom = new FormAttachment( 100, 0 );\n    wFileComp.setLayoutData( fdFileComp );\n\n    wFileComp.layout();\n    wFileTab.setControl( wFileComp );\n\n    // ///////////////////////////////////////////////////////////\n    // / END OF FILE TAB\n    // ///////////////////////////////////////////////////////////\n\n    // ////////////////////////\n    // START OF CONTENT TAB///\n    // /\n    wContentTab = new CTabItem( wTabFolder, SWT.NONE );\n    wContentTab.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.ContentTab.TabTitle\" ) );\n\n    FormLayout contentLayout = new FormLayout();\n    contentLayout.marginWidth = 3;\n    contentLayout.marginHeight = 3;\n\n    Composite wContentComp = new Composite( wTabFolder, SWT.NONE );\n    props.setLook( wContentComp );\n    wContentComp.setLayout( contentLayout );\n\n    // Append to end of file?\n    wlAppend = new Label( wContentComp, SWT.RIGHT );\n    
wlAppend.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.Append.Label\" ) );\n    props.setLook( wlAppend );\n    fdlAppend = new FormData();\n    fdlAppend.left = new FormAttachment( 0, 0 );\n    fdlAppend.top = new FormAttachment( 0, 0 );\n    fdlAppend.right = new FormAttachment( middle, -margin );\n    wlAppend.setLayoutData( fdlAppend );\n    wAppend = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wAppend );\n    fdAppend = new FormData();\n    fdAppend.left = new FormAttachment( middle, 0 );\n    fdAppend.top = new FormAttachment( 0, 0 );\n    fdAppend.right = new FormAttachment( 100, 0 );\n    wAppend.setLayoutData( fdAppend );\n    wAppend.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        input.setChanged();\n      }\n    } );\n\n    wlSeparator = new Label( wContentComp, SWT.RIGHT );\n    wlSeparator.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.Separator.Label\" ) );\n    props.setLook( wlSeparator );\n    fdlSeparator = new FormData();\n    fdlSeparator.left = new FormAttachment( 0, 0 );\n    fdlSeparator.top = new FormAttachment( wAppend, margin );\n    fdlSeparator.right = new FormAttachment( middle, -margin );\n    wlSeparator.setLayoutData( fdlSeparator );\n\n    wbSeparator = new Button( wContentComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( wbSeparator );\n    wbSeparator.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.Separator.Button\" ) );\n    fdbSeparator = new FormData();\n    fdbSeparator.right = new FormAttachment( 100, 0 );\n    fdbSeparator.top = new FormAttachment( wAppend, 0 );\n    wbSeparator.setLayoutData( fdbSeparator );\n    wbSeparator.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent se ) {\n        // wSeparator.insert(\"\\t\");\n        wSeparator.getTextWidget().insert( \"\\t\" );\n      }\n    } );\n\n    wSeparator = new TextVar( transMeta, 
wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wSeparator );\n    wSeparator.addModifyListener( lsMod );\n    fdSeparator = new FormData();\n    fdSeparator.left = new FormAttachment( middle, 0 );\n    fdSeparator.top = new FormAttachment( wAppend, margin );\n    fdSeparator.right = new FormAttachment( wbSeparator, -margin );\n    wSeparator.setLayoutData( fdSeparator );\n\n    // Enclosure line...\n    wlEnclosure = new Label( wContentComp, SWT.RIGHT );\n    wlEnclosure.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.Enclosure.Label\" ) );\n    props.setLook( wlEnclosure );\n    fdlEnclosure = new FormData();\n    fdlEnclosure.left = new FormAttachment( 0, 0 );\n    fdlEnclosure.top = new FormAttachment( wSeparator, margin );\n    fdlEnclosure.right = new FormAttachment( middle, -margin );\n    wlEnclosure.setLayoutData( fdlEnclosure );\n    wEnclosure = new TextVar( transMeta, wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wEnclosure );\n    wEnclosure.addModifyListener( lsMod );\n    fdEnclosure = new FormData();\n    fdEnclosure.left = new FormAttachment( middle, 0 );\n    fdEnclosure.top = new FormAttachment( wSeparator, margin );\n    fdEnclosure.right = new FormAttachment( 100, 0 );\n    wEnclosure.setLayoutData( fdEnclosure );\n\n    wlEnclForced = new Label( wContentComp, SWT.RIGHT );\n    wlEnclForced.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.EnclForced.Label\" ) );\n    props.setLook( wlEnclForced );\n    fdlEnclForced = new FormData();\n    fdlEnclForced.left = new FormAttachment( 0, 0 );\n    fdlEnclForced.top = new FormAttachment( wEnclosure, margin );\n    fdlEnclForced.right = new FormAttachment( middle, -margin );\n    wlEnclForced.setLayoutData( fdlEnclForced );\n    wEnclForced = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wEnclForced );\n    fdEnclForced = new FormData();\n    fdEnclForced.left = new FormAttachment( middle, 0 );\n    
fdEnclForced.top = new FormAttachment( wEnclosure, margin );\n    fdEnclForced.right = new FormAttachment( 100, 0 );\n    wEnclForced.setLayoutData( fdEnclForced );\n    wEnclForced.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        input.setChanged();\n      }\n    } );\n\n    wlHeader = new Label( wContentComp, SWT.RIGHT );\n    wlHeader.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.Header.Label\" ) );\n    props.setLook( wlHeader );\n    fdlHeader = new FormData();\n    fdlHeader.left = new FormAttachment( 0, 0 );\n    fdlHeader.top = new FormAttachment( wEnclForced, margin );\n    fdlHeader.right = new FormAttachment( middle, -margin );\n    wlHeader.setLayoutData( fdlHeader );\n    wHeader = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wHeader );\n    fdHeader = new FormData();\n    fdHeader.left = new FormAttachment( middle, 0 );\n    fdHeader.top = new FormAttachment( wEnclForced, margin );\n    fdHeader.right = new FormAttachment( 100, 0 );\n    wHeader.setLayoutData( fdHeader );\n    wHeader.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        input.setChanged();\n      }\n    } );\n\n    wlFooter = new Label( wContentComp, SWT.RIGHT );\n    wlFooter.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.Footer.Label\" ) );\n    props.setLook( wlFooter );\n    fdlFooter = new FormData();\n    fdlFooter.left = new FormAttachment( 0, 0 );\n    fdlFooter.top = new FormAttachment( wHeader, margin );\n    fdlFooter.right = new FormAttachment( middle, -margin );\n    wlFooter.setLayoutData( fdlFooter );\n    wFooter = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wFooter );\n    fdFooter = new FormData();\n    fdFooter.left = new FormAttachment( middle, 0 );\n    fdFooter.top = new FormAttachment( wHeader, margin );\n    fdFooter.right = new FormAttachment( 100, 0 );\n    
wFooter.setLayoutData( fdFooter );\n    wFooter.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        input.setChanged();\n      }\n    } );\n\n    wlFormat = new Label( wContentComp, SWT.RIGHT );\n    wlFormat.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.Format.Label\" ) );\n    props.setLook( wlFormat );\n    fdlFormat = new FormData();\n    fdlFormat.left = new FormAttachment( 0, 0 );\n    fdlFormat.top = new FormAttachment( wFooter, margin );\n    fdlFormat.right = new FormAttachment( middle, -margin );\n    wlFormat.setLayoutData( fdlFormat );\n    wFormat = new CCombo( wContentComp, SWT.BORDER | SWT.READ_ONLY );\n    wFormat.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.Format.Label\" ) );\n    props.setLook( wFormat );\n\n    for ( int i = 0; i < HadoopFileOutputMeta.formatMapperLineTerminator.length; i++ ) {\n      wFormat.add( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.Format.\"\n        + HadoopFileOutputMeta.formatMapperLineTerminator[i] ) );\n    }\n    wFormat.select( 0 );\n    wFormat.addModifyListener( lsMod );\n    fdFormat = new FormData();\n    fdFormat.left = new FormAttachment( middle, 0 );\n    fdFormat.top = new FormAttachment( wFooter, margin );\n    fdFormat.right = new FormAttachment( 100, 0 );\n    wFormat.setLayoutData( fdFormat );\n\n    wlCompression = new Label( wContentComp, SWT.RIGHT );\n    wlCompression.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.Compression.Label\" ) );\n    props.setLook( wlCompression );\n    fdlCompression = new FormData();\n    fdlCompression.left = new FormAttachment( 0, 0 );\n    fdlCompression.top = new FormAttachment( wFormat, margin );\n    fdlCompression.right = new FormAttachment( middle, -margin );\n    wlCompression.setLayoutData( fdlCompression );\n    wCompression = new CCombo( wContentComp, SWT.BORDER | SWT.READ_ONLY );\n    wCompression.setText( 
BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.Compression.Label\" ) );\n    props.setLook( wCompression );\n\n    wCompression.setItems( CompressionProviderFactory.getInstance().getCompressionProviderNames() );\n    wCompression.addModifyListener( lsMod );\n    fdCompression = new FormData();\n    fdCompression.left = new FormAttachment( middle, 0 );\n    fdCompression.top = new FormAttachment( wFormat, margin );\n    fdCompression.right = new FormAttachment( 100, 0 );\n    wCompression.setLayoutData( fdCompression );\n\n    wlEncoding = new Label( wContentComp, SWT.RIGHT );\n    wlEncoding.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.Encoding.Label\" ) );\n    props.setLook( wlEncoding );\n    fdlEncoding = new FormData();\n    fdlEncoding.left = new FormAttachment( 0, 0 );\n    fdlEncoding.top = new FormAttachment( wCompression, margin );\n    fdlEncoding.right = new FormAttachment( middle, -margin );\n    wlEncoding.setLayoutData( fdlEncoding );\n    wEncoding = new CCombo( wContentComp, SWT.BORDER | SWT.READ_ONLY );\n    wEncoding.setEditable( true );\n    props.setLook( wEncoding );\n    wEncoding.addModifyListener( lsMod );\n    fdEncoding = new FormData();\n    fdEncoding.left = new FormAttachment( middle, 0 );\n    fdEncoding.top = new FormAttachment( wCompression, margin );\n    fdEncoding.right = new FormAttachment( 100, 0 );\n    wEncoding.setLayoutData( fdEncoding );\n    wEncoding.addFocusListener( new FocusListener() {\n      public void focusLost( org.eclipse.swt.events.FocusEvent e ) {\n      }\n\n      public void focusGained( org.eclipse.swt.events.FocusEvent e ) {\n        Cursor busy = new Cursor( shell.getDisplay(), SWT.CURSOR_WAIT );\n        shell.setCursor( busy );\n        setEncodings();\n        shell.setCursor( null );\n        busy.dispose();\n      }\n    } );\n\n    wlPad = new Label( wContentComp, SWT.RIGHT );\n    wlPad.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.Pad.Label\" ) 
);\n    props.setLook( wlPad );\n    fdlPad = new FormData();\n    fdlPad.left = new FormAttachment( 0, 0 );\n    fdlPad.top = new FormAttachment( wEncoding, margin );\n    fdlPad.right = new FormAttachment( middle, -margin );\n    wlPad.setLayoutData( fdlPad );\n    wPad = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wPad );\n    fdPad = new FormData();\n    fdPad.left = new FormAttachment( middle, 0 );\n    fdPad.top = new FormAttachment( wEncoding, margin );\n    fdPad.right = new FormAttachment( 100, 0 );\n    wPad.setLayoutData( fdPad );\n    wPad.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        input.setChanged();\n      }\n    } );\n\n    wlFastDump = new Label( wContentComp, SWT.RIGHT );\n    wlFastDump.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.FastDump.Label\" ) );\n    props.setLook( wlFastDump );\n    fdlFastDump = new FormData();\n    fdlFastDump.left = new FormAttachment( 0, 0 );\n    fdlFastDump.top = new FormAttachment( wPad, margin );\n    fdlFastDump.right = new FormAttachment( middle, -margin );\n    wlFastDump.setLayoutData( fdlFastDump );\n    wFastDump = new Button( wContentComp, SWT.CHECK );\n    props.setLook( wFastDump );\n    fdFastDump = new FormData();\n    fdFastDump.left = new FormAttachment( middle, 0 );\n    fdFastDump.top = new FormAttachment( wPad, margin );\n    fdFastDump.right = new FormAttachment( 100, 0 );\n    wFastDump.setLayoutData( fdFastDump );\n    wFastDump.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        input.setChanged();\n      }\n    } );\n\n    wlSplitEvery = new Label( wContentComp, SWT.RIGHT );\n    wlSplitEvery.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.SplitEvery.Label\" ) );\n    props.setLook( wlSplitEvery );\n    fdlSplitEvery = new FormData();\n    fdlSplitEvery.left = new FormAttachment( 0, 0 );\n    fdlSplitEvery.top = 
new FormAttachment( wFastDump, margin );\n    fdlSplitEvery.right = new FormAttachment( middle, -margin );\n    wlSplitEvery.setLayoutData( fdlSplitEvery );\n    wSplitEvery = new Text( wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wSplitEvery );\n    wSplitEvery.addModifyListener( lsMod );\n    fdSplitEvery = new FormData();\n    fdSplitEvery.left = new FormAttachment( middle, 0 );\n    fdSplitEvery.top = new FormAttachment( wFastDump, margin );\n    fdSplitEvery.right = new FormAttachment( 100, 0 );\n    wSplitEvery.setLayoutData( fdSplitEvery );\n\n    // Line to add at the end of the file\n    wlEndedLine = new Label( wContentComp, SWT.RIGHT );\n    wlEndedLine.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.EndedLine.Label\" ) );\n    props.setLook( wlEndedLine );\n    fdlEndedLine = new FormData();\n    fdlEndedLine.left = new FormAttachment( 0, 0 );\n    fdlEndedLine.top = new FormAttachment( wSplitEvery, margin );\n    fdlEndedLine.right = new FormAttachment( middle, -margin );\n    wlEndedLine.setLayoutData( fdlEndedLine );\n    wEndedLine = new Text( wContentComp, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wEndedLine );\n    wEndedLine.addModifyListener( lsMod );\n    fdEndedLine = new FormData();\n    fdEndedLine.left = new FormAttachment( middle, 0 );\n    fdEndedLine.top = new FormAttachment( wSplitEvery, margin );\n    fdEndedLine.right = new FormAttachment( 100, 0 );\n    wEndedLine.setLayoutData( fdEndedLine );\n\n    fdContentComp = new FormData();\n    fdContentComp.left = new FormAttachment( 0, 0 );\n    fdContentComp.top = new FormAttachment( 0, 0 );\n    fdContentComp.right = new FormAttachment( 100, 0 );\n    fdContentComp.bottom = new FormAttachment( 100, 0 );\n    wContentComp.setLayoutData( fdContentComp );\n\n    wContentComp.layout();\n    wContentTab.setControl( wContentComp );\n\n    // ///////////////////////////////////////////////////////////\n    // / END OF CONTENT TAB\n    // 
///////////////////////////////////////////////////////////\n\n    // Fields tab...\n    //\n    wFieldsTab = new CTabItem( wTabFolder, SWT.NONE );\n    wFieldsTab.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.FieldsTab.TabTitle\" ) );\n\n    FormLayout fieldsLayout = new FormLayout();\n    fieldsLayout.marginWidth = Const.FORM_MARGIN;\n    fieldsLayout.marginHeight = Const.FORM_MARGIN;\n\n    Composite wFieldsComp = new Composite( wTabFolder, SWT.NONE );\n    wFieldsComp.setLayout( fieldsLayout );\n    props.setLook( wFieldsComp );\n\n    wGet = new Button( wFieldsComp, SWT.PUSH );\n    wGet.setText( BaseMessages.getString( BASE_PKG, \"System.Button.GetFields\" ) );\n    wGet.setToolTipText( BaseMessages.getString( BASE_PKG, \"System.Tooltip.GetFields\" ) );\n\n    wMinWidth = new Button( wFieldsComp, SWT.PUSH );\n    wMinWidth.setText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.MinWidth.Button\" ) );\n    wMinWidth.setToolTipText( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.MinWidth.Tooltip\" ) );\n\n    setButtonPositions( new Button[] { wGet, wMinWidth }, margin, null );\n\n    final int FieldsCols = 10;\n    final int FieldsRows = input.getOutputFields().length;\n\n    // Prepare a list of possible formats...\n    String[] nums = Const.getNumberFormats();\n    int totsize = dates.length + nums.length;\n    String[] formats = new String[totsize];\n    for ( int x = 0; x < dates.length; x++ ) {\n      formats[x] = dates[x];\n    }\n    for ( int x = 0; x < nums.length; x++ ) {\n      formats[dates.length + x] = nums[x];\n    }\n    colinf = new ColumnInfo[FieldsCols];\n    colinf[0] =\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.NameColumn.Column\" ),\n            ColumnInfo.COLUMN_TYPE_CCOMBO, new String[] { \"\" }, false );\n    colinf[1] =\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.TypeColumn.Column\" ),\n            
ColumnInfo.COLUMN_TYPE_CCOMBO, ValueMetaBase.getTypes() );\n    colinf[2] =\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.FormatColumn.Column\" ),\n            ColumnInfo.COLUMN_TYPE_CCOMBO, formats );\n    colinf[3] =\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.LengthColumn.Column\" ),\n            ColumnInfo.COLUMN_TYPE_TEXT, false );\n    colinf[4] =\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.PrecisionColumn.Column\" ),\n            ColumnInfo.COLUMN_TYPE_TEXT, false );\n    colinf[5] =\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.CurrencyColumn.Column\" ),\n            ColumnInfo.COLUMN_TYPE_TEXT, false );\n    colinf[6] =\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.DecimalColumn.Column\" ),\n            ColumnInfo.COLUMN_TYPE_TEXT, false );\n    colinf[7] =\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.GroupColumn.Column\" ),\n            ColumnInfo.COLUMN_TYPE_TEXT, false );\n    colinf[8] =\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.TrimTypeColumn.Column\" ),\n            ColumnInfo.COLUMN_TYPE_CCOMBO, ValueMetaBase.trimTypeDesc, true );\n    colinf[9] =\n        new ColumnInfo( BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.NullColumn.Column\" ),\n            ColumnInfo.COLUMN_TYPE_TEXT, false );\n\n    wFields =\n        new TableView( transMeta, wFieldsComp, SWT.BORDER | SWT.FULL_SELECTION | SWT.MULTI, colinf, FieldsRows, lsMod,\n            props );\n\n    fdFields = new FormData();\n    fdFields.left = new FormAttachment( 0, 0 );\n    fdFields.top = new FormAttachment( 0, 0 );\n    fdFields.right = new FormAttachment( 100, 0 );\n    fdFields.bottom = new FormAttachment( wGet, -margin );\n    wFields.setLayoutData( fdFields );\n\n    //\n    // Search the fields in the background\n\n    
final Runnable runnable = new Runnable() {\n      public void run() {\n        StepMeta stepMeta = transMeta.findStep( stepname );\n        if ( stepMeta != null ) {\n          try {\n            RowMetaInterface row = transMeta.getPrevStepFields( stepMeta );\n\n            // Remember these fields...\n            for ( int i = 0; i < row.size(); i++ ) {\n              inputFields.put( row.getValueMeta( i ).getName(), Integer.valueOf( i ) );\n            }\n            setComboBoxes();\n          } catch ( KettleException e ) {\n            logError( BaseMessages.getString( BASE_PKG, \"System.Dialog.GetFieldsFailed.Message\" ) );\n          }\n        }\n      }\n    };\n    new Thread( runnable ).start();\n\n    fdFieldsComp = new FormData();\n    fdFieldsComp.left = new FormAttachment( 0, 0 );\n    fdFieldsComp.top = new FormAttachment( 0, 0 );\n    fdFieldsComp.right = new FormAttachment( 100, 0 );\n    fdFieldsComp.bottom = new FormAttachment( 100, 0 );\n    wFieldsComp.setLayoutData( fdFieldsComp );\n\n    wFieldsComp.layout();\n    wFieldsTab.setControl( wFieldsComp );\n\n    fdTabFolder = new FormData();\n    fdTabFolder.left = new FormAttachment( 0, 0 );\n    fdTabFolder.top = new FormAttachment( wStepname, margin );\n    fdTabFolder.right = new FormAttachment( 100, 0 );\n    fdTabFolder.bottom = new FormAttachment( 100, -50 );\n    wTabFolder.setLayoutData( fdTabFolder );\n\n    wOK = new Button( shell, SWT.PUSH );\n    wOK.setText( BaseMessages.getString( BASE_PKG, \"System.Button.OK\" ) );\n\n    wCancel = new Button( shell, SWT.PUSH );\n    wCancel.setText( BaseMessages.getString( BASE_PKG, \"System.Button.Cancel\" ) );\n\n    positionBottomRightButtons( shell, new Button[] { wOK, wCancel }, margin, wTabFolder );\n\n    // Add listeners\n    lsOK = new Listener() {\n      public void handleEvent( Event e ) {\n        ok();\n      }\n    };\n    lsGet = new Listener() {\n      public void handleEvent( Event e ) {\n        get();\n      }\n    };\n    
lsMinWidth = new Listener() {\n      public void handleEvent( Event e ) {\n        setMinimalWidth();\n      }\n    };\n    lsCancel = new Listener() {\n      public void handleEvent( Event e ) {\n        cancel();\n      }\n    };\n\n    wOK.addListener( SWT.Selection, lsOK );\n    wGet.addListener( SWT.Selection, lsGet );\n    wMinWidth.addListener( SWT.Selection, lsMinWidth );\n    wCancel.addListener( SWT.Selection, lsCancel );\n\n    lsDef = new SelectionAdapter() {\n      public void widgetDefaultSelected( SelectionEvent e ) {\n        ok();\n      }\n    };\n\n    wStepname.addSelectionListener( lsDef );\n    wFilename.addSelectionListener( lsDef );\n    wSeparator.addSelectionListener( lsDef );\n\n    // Whenever something changes, set the tooltip to the expanded version:\n    wFilename.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        wFilename.setToolTipText( transMeta.environmentSubstitute( wFilename.getText() ) );\n      }\n    } );\n\n    // Listen to the Browse... 
button\n    wbFilename.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        try {\n          // Setup file type filtering\n          String[] fileFilters = new String[] { \"*.txt\", \"*.csv\", \"*\" };\n          String[] fileFilterNames =\n              new String[] { BaseMessages.getString( BASE_PKG, \"System.FileType.TextFiles\" ),\n                  BaseMessages.getString( BASE_PKG, \"System.FileType.CSVFiles\" ),\n                  BaseMessages.getString( BASE_PKG, \"System.FileType.AllFiles\" ) };\n\n          NamedCluster namedCluster = namedClusterWidget.getSelectedNamedCluster();\n          if ( namedCluster == null ) {\n            return;\n          }\n\n          String path = wFilename.getText();\n\n          // Get current file\n          FileObject rootFile = null;\n          FileObject initialFile = null;\n          FileObject defaultInitialFile = null;\n\n          if ( Utils.isEmpty( path ) ) {\n            path = \"/\";\n          }\n          path = namedCluster.processURLsubstitution( path, getMetaStore(), transMeta );\n\n          boolean resolvedInitialFile = false;\n\n          if ( path != null ) {\n\n            String fileName = transMeta.environmentSubstitute( path );\n\n            if ( fileName != null && !fileName.equals( \"\" ) ) {\n              try {\n                initialFile = KettleVFS.getInstance( transMeta.getBowl() ).getFileObject( fileName );\n                resolvedInitialFile = true;\n              } catch ( Exception ex ) {\n                showMessageAndLog( BaseMessages.getString( PKG, \"HadoopFileOutputDialog.Connection.Error.title\" ),\n                    BaseMessages.getString( PKG, \"HadoopFileOutputDialog.Connection.error\" ), ex.getMessage() );\n                return;\n              }\n              File startFile = new File( System.getProperty( \"user.home\" ) );\n              defaultInitialFile = KettleVFS.getInstance( transMeta.getBowl() )\n       
         .getFileObject( startFile.getAbsolutePath() );\n              rootFile = initialFile.getFileSystem().getRoot();\n            } else {\n              defaultInitialFile = KettleVFS.getInstance( transMeta.getBowl() )\n                .getFileObject( Spoon.getInstance().getLastFileOpened() );\n            }\n          }\n\n          if ( rootFile == null ) {\n            if ( defaultInitialFile == null ) {\n              return;\n            }\n            rootFile = defaultInitialFile.getFileSystem().getRoot();\n            initialFile = defaultInitialFile;\n          }\n\n          VfsFileChooserDialog fileChooserDialog = Spoon.getInstance().getVfsFileChooserDialog( rootFile, initialFile );\n          fileChooserDialog.defaultInitialFile = defaultInitialFile;\n          FileObject selectedFile = null;\n\n          if ( namedCluster != null ) {\n            if ( namedCluster.isMapr() ) {\n              selectedFile =\n                  fileChooserDialog.open( shell, new String[] { Schemes.MAPRFS_SCHEME },\n                    Schemes.MAPRFS_SCHEME, true, path, fileFilters, fileFilterNames, true,\n                      VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE_OR_DIRECTORY, false, false );\n            } else {\n              List<CustomVfsUiPanel> customPanels = fileChooserDialog.getCustomVfsUiPanels();\n              String ncName = null;\n              HadoopVfsFileChooserDialog hadoopDialog = null;\n              for ( CustomVfsUiPanel panel : customPanels ) {\n                if ( panel instanceof HadoopVfsFileChooserDialog ) {\n                  hadoopDialog = ( (HadoopVfsFileChooserDialog) panel );\n                  NamedClusterWidgetImpl ncWidget = hadoopDialog.getNamedClusterWidget();\n                  ncWidget.initiate();\n                  ncName = null;\n                  if ( initialFile != null ) {\n                    HadoopFileOutputMeta meta = (HadoopFileOutputMeta) input;\n                    ncName = meta.getSourceConfigurationName();\n   
               }\n                  hadoopDialog.setNamedCluster( ncName );\n                  hadoopDialog.initializeConnectionPanel( initialFile );\n                }\n              }\n              if ( resolvedInitialFile ) {\n                fileChooserDialog.initialFile = initialFile;\n              }\n              selectedFile =\n                  fileChooserDialog.open( shell, new String[] { Schemes.HDFS_SCHEME },\n                    Schemes.HDFS_SCHEME, true, path, fileFilters, fileFilterNames, true,\n                      VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE_OR_DIRECTORY, false, false );\n            }\n          }\n\n          if ( selectedFile != null ) {\n            String filename = selectedFile.getURL().toString();\n\n            String extension = wExtension.getText();\n            if ( extension != null && filename.endsWith( \".\" + extension ) ) {\n              // The extension is filled in and matches the end\n              // of the selected file => Strip off the extension.\n              wFilename.setText( getUrlPath( filename.substring( 0, filename.length() - ( extension.length() + 1 ) ) ) );\n            } else {\n              wFilename.setText( getUrlPath( filename ) );\n            }\n          }\n        } catch ( KettleFileException ex ) {\n          log.logError( BaseMessages.getString( PKG, \"HadoopFileInputDialog.FileBrowser.KettleFileException\" ) );\n        } catch ( FileSystemException ex ) {\n          log.logError( BaseMessages.getString( PKG, \"HadoopFileInputDialog.FileBrowser.FileSystemException\" ) );\n        }\n      }\n    } );\n\n    // Detect X or ALT-F4 or something that kills this window...\n    shell.addShellListener( new ShellAdapter() {\n      public void shellClosed( ShellEvent e ) {\n        cancel();\n      }\n    } );\n\n    lsResize = new Listener() {\n      public void handleEvent( Event event ) {\n        Point size = shell.getSize();\n        wFields.setSize( size.x - 10, size.y - 50 );\n        
wFields.table.setSize( size.x - 10, size.y - 50 );\n        wFields.redraw();\n      }\n    };\n    shell.addListener( SWT.Resize, lsResize );\n\n    wTabFolder.setSelection( 0 );\n\n    // Set the shell size, based upon previous time...\n    setSize();\n\n    getData();\n    activeFileNameField();\n    enableParentFolder();\n\n    shell.open();\n    while ( !shell.isDisposed() ) {\n      if ( !display.readAndDispatch() ) {\n        display.sleep();\n      }\n    }\n    return stepname;\n  }\n\n  protected void fillWithSupportedDateFormats( CCombo combo, String[] dates ) {\n    for ( String s : dates ) {\n      // ':' is not supported in filenames by hadoop file system, add other characters if needed to the regex below\n      if ( s.matches( \"[^:]+\" ) ) {\n        combo.add( s );\n      }\n    }\n  }\n\n  private void activeFileNameField() {\n    wlFileNameField.setEnabled( wFileNameInField.getSelection() );\n    wFileNameField.setEnabled( wFileNameInField.getSelection() );\n    wlFilename.setEnabled( !wFileNameInField.getSelection() );\n    wFilename.setEnabled( !wFileNameInField.getSelection() );\n\n    if ( wFileNameInField.getSelection() ) {\n      if ( !wDoNotOpenNewFileInit.getSelection() ) {\n        wDoNotOpenNewFileInit.setSelection( true );\n      }\n      wAddDate.setSelection( false );\n      wAddTime.setSelection( false );\n      wSpecifyFormat.setSelection( false );\n      wAddStepnr.setSelection( false );\n      wAddPartnr.setSelection( false );\n    }\n\n    wlDoNotOpenNewFileInit.setEnabled( !wFileNameInField.getSelection() );\n    wDoNotOpenNewFileInit.setEnabled( !wFileNameInField.getSelection() );\n    wlSpecifyFormat.setEnabled( !wFileNameInField.getSelection() );\n    wSpecifyFormat.setEnabled( !wFileNameInField.getSelection() );\n\n    wAddStepnr.setEnabled( !wFileNameInField.getSelection() );\n    wlAddStepnr.setEnabled( !wFileNameInField.getSelection() );\n    wAddPartnr.setEnabled( !wFileNameInField.getSelection() );\n    
wlAddPartnr.setEnabled( !wFileNameInField.getSelection() );\n    if ( wFileNameInField.getSelection() ) {\n      wSplitEvery.setText( \"0\" );\n    }\n    wSplitEvery.setEnabled( !wFileNameInField.getSelection() );\n    wlSplitEvery.setEnabled( !wFileNameInField.getSelection() );\n    if ( wFileNameInField.getSelection() ) {\n      wEndedLine.setText( \"\" );\n    }\n    wEndedLine.setEnabled( !wFileNameInField.getSelection() );\n    wbShowFiles.setEnabled( !wFileNameInField.getSelection() );\n    wbFilename.setEnabled( !wFileNameInField.getSelection() );\n\n    setDateTimeFormat();\n  }\n\n  protected void setComboBoxes() {\n    // Something was changed in the row.\n    //\n    final Map<String, Integer> fields = new HashMap<String, Integer>();\n\n    // Add the currentMeta fields...\n    fields.putAll( inputFields );\n\n    Set<String> keySet = fields.keySet();\n    List<String> entries = new ArrayList<String>( keySet );\n\n    String[] fieldNames = (String[]) entries.toArray( new String[entries.size()] );\n\n    Const.sortStrings( fieldNames );\n    colinf[0].setComboValues( fieldNames );\n  }\n\n  private void setDateTimeFormat() {\n    if ( wSpecifyFormat.getSelection() ) {\n      wAddDate.setSelection( false );\n      wAddTime.setSelection( false );\n    }\n\n    wDateTimeFormat.setEnabled( wSpecifyFormat.getSelection() && !wFileNameInField.getSelection() );\n    wlDateTimeFormat.setEnabled( wSpecifyFormat.getSelection() && !wFileNameInField.getSelection() );\n    wAddDate.setEnabled( !( wFileNameInField.getSelection() || wSpecifyFormat.getSelection() ) );\n    wlAddDate.setEnabled( !( wSpecifyFormat.getSelection() || wFileNameInField.getSelection() ) );\n    wAddTime.setEnabled( !( wSpecifyFormat.getSelection() || wFileNameInField.getSelection() ) );\n    wlAddTime.setEnabled( !( wSpecifyFormat.getSelection() || wFileNameInField.getSelection() ) );\n  }\n\n  private void setEncodings() {\n    // Encoding of the text file:\n    if ( !gotEncodings ) {\n      
gotEncodings = true;\n\n      wEncoding.removeAll();\n      List<Charset> values = new ArrayList<Charset>( Charset.availableCharsets().values() );\n      for ( int i = 0; i < values.size(); i++ ) {\n        Charset charSet = (Charset) values.get( i );\n        wEncoding.add( charSet.displayName() );\n      }\n\n      // Now select the default!\n      String defEncoding = Const.getEnvironmentVariable( \"file.encoding\", \"UTF-8\" );\n      int idx = Const.indexOfString( defEncoding, wEncoding.getItems() );\n      if ( idx >= 0 ) {\n        wEncoding.select( idx );\n      }\n    }\n  }\n\n  private void getFields() {\n    if ( !gotPreviousFields ) {\n      try {\n        String field = wFileNameField.getText();\n        RowMetaInterface r = transMeta.getPrevStepFields( stepname );\n        if ( r != null ) {\n          wFileNameField.setItems( r.getFieldNames() );\n        }\n        if ( field != null ) {\n          wFileNameField.setText( field );\n        }\n      } catch ( KettleException ke ) {\n        new ErrorDialog( shell,\n            BaseMessages.getString( BASE_PKG, \"TextFileOutputDialog.FailedToGetFields.DialogTitle\" ), BaseMessages\n                .getString( BASE_PKG, \"TextFileOutputDialog.FailedToGetFields.DialogMessage\" ), ke );\n      }\n      gotPreviousFields = true;\n    }\n  }\n\n  /**\n   * Copy information from the meta-data input to the dialog fields.\n   */\n  public void getData() {\n\n    HadoopFileOutputMeta meta = (HadoopFileOutputMeta) input;\n    String ncName = meta.getSourceConfigurationName();\n    if ( ncName != null ) {\n      namedClusterWidget.setSelectedNamedCluster( ncName );\n    }\n\n    if ( input.getFileName() != null ) {\n      String fileName = input.getFileName();\n      fileName = getUrlPath( fileName );\n      if ( fileName != null ) {\n        wFilename.setText( fileName );\n      }\n    }\n\n    wDoNotOpenNewFileInit.setSelection( input.isDoNotOpenNewFileInit() );\n    wCreateParentFolder.setSelection( 
input.isCreateParentFolder() );\n    if ( input.getExtension() != null ) {\n      wExtension.setText( input.getExtension() );\n    }\n    if ( input.getSeparator() != null ) {\n      wSeparator.setText( input.getSeparator() );\n    }\n    if ( input.getEnclosure() != null ) {\n      wEnclosure.setText( input.getEnclosure() );\n    }\n    if ( input.getFileFormat() != null ) {\n      wFormat.select( 0 ); // default if not found: CR+LF\n      for ( int i = 0; i < HadoopFileOutputMeta.formatMapperLineTerminator.length; i++ ) {\n        if ( input.getFileFormat().equalsIgnoreCase( HadoopFileOutputMeta.formatMapperLineTerminator[i] ) ) {\n          wFormat.select( i );\n        }\n      }\n    }\n    if ( input.getFileCompression() != null ) {\n      wCompression.setText( input.getFileCompression() );\n    }\n    if ( input.getEncoding() != null ) {\n      wEncoding.setText( input.getEncoding() );\n    }\n    if ( input.getEndedLine() != null ) {\n      wEndedLine.setText( input.getEndedLine() );\n    }\n    wFileNameInField.setSelection( input.isFileNameInField() );\n    if ( input.getFileNameField() != null ) {\n      wFileNameField.setText( input.getFileNameField() );\n    }\n    wSplitEvery.setText( \"\" + input.getSplitEvery() );\n\n    wEnclForced.setSelection( input.isEnclosureForced() );\n    wHeader.setSelection( input.isHeaderEnabled() );\n    wFooter.setSelection( input.isFooterEnabled() );\n    wAddDate.setSelection( input.isDateInFilename() );\n    wAddTime.setSelection( input.isTimeInFilename() );\n    if ( input.getDateTimeFormat() != null ) {\n      wDateTimeFormat.setText( input.getDateTimeFormat() );\n    }\n    wSpecifyFormat.setSelection( input.isSpecifyingFormat() );\n\n    wAppend.setSelection( input.isFileAppended() );\n    wAddStepnr.setSelection( input.isStepNrInFilename() );\n    wAddPartnr.setSelection( input.isPartNrInFilename() );\n    wPad.setSelection( input.isPadded() );\n    wFastDump.setSelection( input.isFastDump() );\n    
wAddToResult.setSelection( input.isAddToResultFiles() );\n\n    logDebug( \"getting fields info...\" );\n\n    for ( int i = 0; i < input.getOutputFields().length; i++ ) {\n      TextFileField field = input.getOutputFields()[i];\n\n      TableItem item = wFields.table.getItem( i );\n      if ( field.getName() != null ) {\n        item.setText( 1, field.getName() );\n      }\n      item.setText( 2, field.getTypeDesc() );\n      if ( field.getFormat() != null ) {\n        item.setText( 3, field.getFormat() );\n      }\n      if ( field.getLength() >= 0 ) {\n        item.setText( 4, \"\" + field.getLength() );\n      }\n      if ( field.getPrecision() >= 0 ) {\n        item.setText( 5, \"\" + field.getPrecision() );\n      }\n      if ( field.getCurrencySymbol() != null ) {\n        item.setText( 6, field.getCurrencySymbol() );\n      }\n      if ( field.getDecimalSymbol() != null ) {\n        item.setText( 7, field.getDecimalSymbol() );\n      }\n      if ( field.getGroupingSymbol() != null ) {\n        item.setText( 8, field.getGroupingSymbol() );\n      }\n      String trim = field.getTrimTypeDesc();\n      if ( trim != null ) {\n        item.setText( 9, trim );\n      }\n      if ( field.getNullString() != null ) {\n        item.setText( 10, field.getNullString() );\n      }\n    }\n\n    wFields.optWidth( true );\n    wStepname.selectAll();\n  }\n\n  private void cancel() {\n    stepname = null;\n\n    input.setChanged( backupChanged );\n\n    dispose();\n  }\n\n  private void getInfo( HadoopFileOutputMeta tfoi ) {\n    String ncName = ( (HadoopFileOutputMeta) tfoi ).getSourceConfigurationName();\n    String fileName = wFilename.getText();\n\n    NamedCluster c = getMetaStore() == null ? 
null\n      : namedClusterService.getNamedClusterByName( ncName, getMetaStore() );\n    if ( c != null ) {\n      fileName = c.processURLsubstitution( fileName, getMetaStore(), variables );\n    }\n\n    tfoi.setFileName( fileName );\n    tfoi.setDoNotOpenNewFileInit( wDoNotOpenNewFileInit.getSelection() );\n    tfoi.setCreateParentFolder( wCreateParentFolder.getSelection() );\n    tfoi.setFileFormat( HadoopFileOutputMeta.formatMapperLineTerminator[wFormat.getSelectionIndex()] );\n    tfoi.setFileCompression( wCompression.getText() );\n    tfoi.setEncoding( wEncoding.getText() );\n    tfoi.setSeparator( wSeparator.getText() );\n    tfoi.setEnclosure( wEnclosure.getText() );\n    tfoi.setExtension( wExtension.getText() );\n    tfoi.setSplitEvery( Const.toInt( wSplitEvery.getText(), 0 ) );\n    tfoi.setEndedLine( wEndedLine.getText() );\n\n    tfoi.setFileNameField( wFileNameField.getText() );\n    tfoi.setFileNameInField( wFileNameInField.getSelection() );\n\n    tfoi.setEnclosureForced( wEnclForced.getSelection() );\n    tfoi.setHeaderEnabled( wHeader.getSelection() );\n    tfoi.setFooterEnabled( wFooter.getSelection() );\n    tfoi.setFileAppended( wAppend.getSelection() );\n    tfoi.setStepNrInFilename( wAddStepnr.getSelection() );\n    tfoi.setPartNrInFilename( wAddPartnr.getSelection() );\n    tfoi.setDateInFilename( wAddDate.getSelection() );\n    tfoi.setTimeInFilename( wAddTime.getSelection() );\n    tfoi.setDateTimeFormat( wDateTimeFormat.getText() );\n    tfoi.setSpecifyingFormat( wSpecifyFormat.getSelection() );\n    tfoi.setPadded( wPad.getSelection() );\n    tfoi.setAddToResultFiles( wAddToResult.getSelection() );\n    tfoi.setFastDump( wFastDump.getSelection() );\n\n    int i;\n    // Table table = wFields.table;\n\n    int nrfields = wFields.nrNonEmpty();\n\n    tfoi.allocate( nrfields );\n\n    for ( i = 0; i < nrfields; i++ ) {\n      TextFileField field = new TextFileField();\n\n      TableItem item = wFields.getNonEmpty( i );\n      field.setName( 
item.getText( 1 ) );\n      field.setType( item.getText( 2 ) );\n      field.setFormat( item.getText( 3 ) );\n      field.setLength( Const.toInt( item.getText( 4 ), -1 ) );\n      field.setPrecision( Const.toInt( item.getText( 5 ), -1 ) );\n      field.setCurrencySymbol( item.getText( 6 ) );\n      field.setDecimalSymbol( item.getText( 7 ) );\n      field.setGroupingSymbol( item.getText( 8 ) );\n      field.setTrimType( ValueMetaBase.getTrimTypeByDesc( item.getText( 9 ) ) );\n      field.setNullString( item.getText( 10 ) );\n      ( tfoi.getOutputFields() )[i] = field;\n    }\n  }\n\n  private void ok() {\n    if ( Utils.isEmpty( wStepname.getText() ) ) {\n      return;\n    }\n    stepname = wStepname.getText(); // return value\n\n    getInfo( input );\n\n    dispose();\n  }\n\n  private void get() {\n    try {\n      RowMetaInterface r = transMeta.getPrevStepFields( stepname );\n      if ( r != null ) {\n        TableItemInsertListener listener = new TableItemInsertListener() {\n          public boolean tableItemInserted( TableItem tableItem, ValueMetaInterface v ) {\n            if ( v.isNumber() ) {\n              if ( v.getLength() > 0 ) {\n                int le = v.getLength();\n                int pr = v.getPrecision();\n\n                if ( v.getPrecision() <= 0 ) {\n                  pr = 0;\n                }\n\n                String mask = \"\";\n                for ( int m = 0; m < le - pr; m++ ) {\n                  mask += \"0\";\n                }\n                if ( pr > 0 ) {\n                  mask += \".\";\n                }\n                for ( int m = 0; m < pr; m++ ) {\n                  mask += \"0\";\n                }\n                tableItem.setText( 3, mask );\n              }\n            }\n            return true;\n          }\n        };\n        BaseStepDialog.getFieldsFromPrevious( r, wFields, 1, new int[] { 1 }, new int[] { 2 }, 4, 5, listener );\n      }\n    } catch ( KettleException ke ) {\n      new ErrorDialog( 
shell, BaseMessages.getString( BASE_PKG, \"System.Dialog.GetFieldsFailed.Title\" ), BaseMessages\n          .getString( BASE_PKG, \"System.Dialog.GetFieldsFailed.Message\" ), ke );\n    }\n\n  }\n\n  /**\n   * Sets the output width to minimal width...\n   *\n   */\n  public void setMinimalWidth() {\n    int nrNonEmptyFields = wFields.nrNonEmpty();\n    for ( int i = 0; i < nrNonEmptyFields; i++ ) {\n      TableItem item = wFields.getNonEmpty( i );\n\n      item.setText( 4, \"\" );\n      item.setText( 5, \"\" );\n      item.setText( 9, ValueMetaBase.getTrimTypeDesc( ValueMetaInterface.TRIM_TYPE_BOTH ) );\n\n      int type = ValueMetaBase.getType( item.getText( 2 ) );\n      switch ( type ) {\n        case ValueMetaInterface.TYPE_STRING:\n          item.setText( 3, \"\" );\n          break;\n        case ValueMetaInterface.TYPE_INTEGER:\n          item.setText( 3, \"0\" );\n          break;\n        case ValueMetaInterface.TYPE_NUMBER:\n          item.setText( 3, \"0.#####\" );\n          break;\n        case ValueMetaInterface.TYPE_DATE:\n          break;\n        default:\n          break;\n      }\n    }\n\n    for ( int i = 0; i < input.getOutputFields().length; i++ ) {\n      input.getOutputFields()[i].setTrimType( ValueMetaInterface.TRIM_TYPE_BOTH );\n    }\n    wFields.optWidth( true );\n  }\n\n  public String toString() {\n    return this.getClass().getName();\n  }\n\n  private void enableParentFolder() {\n    wlCreateParentFolder.setEnabled( true );\n    wCreateParentFolder.setEnabled( true );\n  }\n\n  public static String getUrlPath( String incomingURL ) {\n    String path = incomingURL;\n    try {\n      String noVariablesURL = incomingURL.replaceAll( \"[${}]\", \"/\" );\n      FileName fileName = KettleVFS.getInstance().getFileSystemManager().resolveURI( noVariablesURL );\n      String root = fileName.getRootURI().replaceFirst( \"/$\", \"\" );\n      if ( noVariablesURL.startsWith( root ) ) {\n        path = incomingURL.length() > root.length() ? 
incomingURL.substring( root.length() ) : \"/\";\n      }\n    } catch ( FileSystemException e ) {\n      path = incomingURL;\n    }\n    return path;\n  }\n\n  private void showMessageAndLog( String title, String message, String messageToLog ) {\n    MessageBox box = new MessageBox( shell );\n    box.setText( title );\n    box.setMessage( message );\n    log.logError( messageToLog );\n    box.open();\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/HadoopFileOutputMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans;\n\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.di.core.plugins.ParentFirst;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.annotations.Step;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.injection.InjectionSupported;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.metastore.MetaStoreConst;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.resource.ResourceNamingInterface;\nimport org.pentaho.di.trans.steps.textfileoutput.TextFileOutputMeta;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.runtime.test.action.impl.RuntimeTestActionServiceImpl;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\nimport org.w3c.dom.Node;\n\nimport java.util.Map;\n\n@Step( id = \"HadoopFileOutputPlugin\", image = \"HDO.svg\", name = \"HadoopFileOutputPlugin.Name\",\n    description = \"HadoopFileOutputPlugin.Description\",\n    categoryDescription = \"i18n:org.pentaho.di.trans.step:BaseStep.Category.BigData\",\n    i18nPackageName = 
\"org.pentaho.di.trans.steps.hadoopfileoutput\" )\n@InjectionSupported( localizationPrefix = \"HadoopFileOutput.Injection.\", groups = { \"OUTPUT_FIELDS\" } )\n//@ParentFirst( patterns = { \"../../lib\" } )\npublic class HadoopFileOutputMeta extends TextFileOutputMeta implements HadoopFileMeta {\n\n  // for message resolution\n  private static Class<?> PKG = HadoopFileOutputMeta.class;\n\n  private String sourceConfigurationName;\n\n  private static final String SOURCE_CONFIGURATION_NAME = \"source_configuration_name\";\n\n  private final NamedClusterService namedClusterService;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n  private IMetaStore metaStore;\n  private Node embeddedNamedClusterNode;\n\n  public HadoopFileOutputMeta() {\n    this.namedClusterService = NamedClusterManager.getInstance();\n    this.runtimeTestActionService = RuntimeTestActionServiceImpl.getInstance();\n    this.runtimeTester = RuntimeTesterImpl.getInstance();\n  }\n\n  public HadoopFileOutputMeta( NamedClusterService namedClusterService,\n                               RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester ) {\n    this.namedClusterService = namedClusterService;\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.runtimeTester = runtimeTester;\n  }\n\n  @Override\n  public void setDefault() {\n    // call the base classes method\n    super.setDefault();\n\n    // now set the default for the\n    // filename to an empty string\n    setFileName( \"\" );\n  }\n\n  public String getSourceConfigurationName() {\n    return sourceConfigurationName;\n  }\n\n  public void setSourceConfigurationName( String ncName ) {\n    this.sourceConfigurationName = ncName;\n  }\n\n  protected String loadSource( Node stepnode, IMetaStore metastore ) {\n    this.metaStore = metastore;\n    String url = XMLHandler.getTagValue( stepnode, \"file\", \"name\" );\n    
sourceConfigurationName = XMLHandler.getTagValue( stepnode, \"file\", SOURCE_CONFIGURATION_NAME );\n    embeddedNamedClusterNode = XMLHandler.getSubNode( stepnode, \"NamedCluster\" );\n\n    return getProcessedUrl( metastore, url );\n  }\n\n  protected String getProcessedUrl( IMetaStore metastore, String url ) {\n    if ( url == null ) {\n      return null;\n    }\n    if ( metastore == null ) {\n      // Maybe we can get a metastore from spoon\n      try {\n        metaStore = MetaStoreConst.openLocalPentahoMetaStore( false );\n      } catch ( Exception e ) {\n        // If no local metastore we must ignore and proceed\n      }\n    } else {\n      // if we already have a metastore use it\n      metaStore = metastore;\n    }\n    NamedCluster c = getNamedCluster();\n    if ( c != null ) {\n      url = c.processURLsubstitution( url, metaStore, new Variables() );\n    }\n    return url;\n  }\n\n  @Override\n  public String getClusterName( final String url ) {\n    final NamedCluster cluster = getNamedCluster();\n    return cluster == null ? 
null : cluster.getName();\n  }\n\n\n  public NamedCluster getNamedCluster() {\n    NamedCluster cluster = namedClusterService.getNamedClusterByName( sourceConfigurationName, metaStore );\n    if ( cluster == null ) {\n      // Still no metastore, try to make a named cluster from the embedded xml\n      if ( namedClusterService.getClusterTemplate() != null ) {\n        cluster = namedClusterService.getClusterTemplate().fromXmlForEmbed( embeddedNamedClusterNode );\n      }\n    }\n    return cluster;\n  }\n\n  public String getUrlPath( String incomingURL ) {\n    return getProcessedUrl( null, incomingURL );\n  }\n\n  protected void saveSource( StringBuilder retVal, String fileName ) {\n    retVal.append( \"      \" ).append( XMLHandler.addTagValue( \"name\", fileName ) );\n    retVal.append( \"      \" ).append( XMLHandler.addTagValue( SOURCE_CONFIGURATION_NAME, sourceConfigurationName ) );\n  }\n\n  @Override\n  public String getXML() {\n    String xml = super.getXML();\n    NamedCluster c = namedClusterService.getNamedClusterByName( sourceConfigurationName, metaStore );\n    if ( c != null ) {\n      xml = xml + c.toXmlForEmbed( \"NamedCluster\" )  + Const.CR;\n    }\n    return xml;\n  }\n\n  // Receiving metaStore because RepositoryProxy.getMetaStore() returns a hard-coded null\n  protected String loadSourceRep( Repository rep, ObjectId id_step,  IMetaStore metaStore ) throws KettleException {\n    this.metaStore = metaStore;\n    String url = rep.getStepAttributeString( id_step, \"file_name\" );\n    sourceConfigurationName = rep.getStepAttributeString( id_step, SOURCE_CONFIGURATION_NAME );\n\n    return getProcessedUrl( metaStore, url );\n  }\n\n  protected void saveSourceRep( Repository rep, ObjectId id_transformation, ObjectId id_step, String fileName )\n    throws KettleException {\n    rep.saveStepAttribute( id_transformation, id_step, \"file_name\", fileName );\n    rep.saveStepAttribute( id_transformation, id_step, SOURCE_CONFIGURATION_NAME, 
sourceConfigurationName );\n  }\n\n  public NamedClusterService getNamedClusterService() {\n    return namedClusterService;\n  }\n\n  public RuntimeTester getRuntimeTester() {\n    return runtimeTester;\n  }\n\n  public RuntimeTestActionService getRuntimeTestActionService() {\n    return runtimeTestActionService;\n  }\n\n  @Override\n  public String exportResources( Bowl executionBowl, Bowl globalManagementBowl, VariableSpace space,\n      Map<String, org.pentaho.di.resource.ResourceDefinition> definitions,\n      ResourceNamingInterface resourceNamingInterface, Repository repository, IMetaStore metaStore )\n      throws KettleException {\n    return null;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/HadoopInputFileSelectionAdapter.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans;\n\nimport org.pentaho.di.base.AbstractMeta;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.ui.core.events.dialog.SelectionAdapterFileDialog;\nimport org.pentaho.di.ui.core.events.dialog.SelectionAdapterOptions;\nimport org.pentaho.di.ui.core.widget.TableView;\n\npublic class HadoopInputFileSelectionAdapter extends SelectionAdapterFileDialog<TableView> {\n\n  public HadoopInputFileSelectionAdapter( LogChannelInterface log, TableView textUiWidget, AbstractMeta meta,\n                                          SelectionAdapterOptions options ) {\n    super( log, textUiWidget, meta, options );\n  }\n\n  @Override\n  protected String getText() {\n    return this.getTextWidget().getActiveTableItem().getText( this.getTextWidget().getActiveTableColumn() );\n  }\n\n  @Override\n  protected void setText( String text ) {\n    this.getTextWidget().getActiveTableItem().setText( this.getTextWidget().getActiveTableColumn(), text );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/analyzer/HadoopBaseStepAnalyzer.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans.analyzer;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.pentaho.big.data.kettle.plugins.hdfs.trans.HadoopFileMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.steps.file.BaseFileMeta;\nimport org.pentaho.dictionary.DictionaryConst;\nimport org.pentaho.metaverse.api.IMetaverseNode;\nimport org.pentaho.metaverse.api.MetaverseException;\nimport org.pentaho.metaverse.api.StepField;\nimport org.pentaho.metaverse.api.analyzer.kettle.step.ExternalResourceStepAnalyzer;\nimport org.pentaho.metaverse.api.IMetaverseObjectFactory;\nimport org.pentaho.metaverse.api.model.IExternalResourceInfo;\n\nimport java.util.HashSet;\nimport java.util.Set;\n\n/**\n * Common functionality for Hadoop input and output step analyzers.\n */\npublic abstract class HadoopBaseStepAnalyzer<M extends BaseFileMeta> extends ExternalResourceStepAnalyzer<M> {\n\n  @Override protected boolean normalizeFilePath() {\n    return false;\n  }\n\n  @Override protected Set<StepField> getUsedFields( final M meta ) {\n    return null;\n  }\n\n  /**\n   * The Hadoop file input step supports local and remote files. 
Since we can have a mix of both, we intentionally\n   * use the generic \"File Field\" type, rather than the more specific \"Hadoop Field\" type.\n   */\n  @Override public String getResourceInputNodeType() {\n    return DictionaryConst.NODE_TYPE_FILE_FIELD;\n  }\n\n  @Override public String getResourceOutputNodeType() {\n    return DictionaryConst.NODE_TYPE_FILE_FIELD;\n  }\n\n  @Override\n  public Set<Class<? extends BaseStepMeta>> getSupportedSteps() {\n    return new HashSet<Class<? extends BaseStepMeta>>() {\n      {\n        add( getMetaClass() );\n      }\n    };\n  }\n\n  public abstract Class<M> getMetaClass();\n\n  // used for unit testing\n  protected void setObjectFactory( IMetaverseObjectFactory factory ) {\n    this.metaverseObjectFactory = factory;\n  }\n\n  @Override public IMetaverseNode createResourceNode( final IExternalResourceInfo resource ) throws MetaverseException {\n    return createFileNode( parentTransMeta.getBowl(), resource.getName(), descriptor );\n  }\n\n  @Override public IMetaverseNode createResourceNode( final M meta, final IExternalResourceInfo resource )\n    throws MetaverseException {\n\n    IMetaverseNode resourceNode = null;\n    if ( meta instanceof HadoopFileMeta ) {\n      resourceNode = createResourceNode( resource );\n      final HadoopFileMeta hMeta = (HadoopFileMeta) meta;\n      final String hostName = hMeta.getUrlHostName( resource.getName() );\n      if ( StringUtils.isNotBlank( hostName ) ) {\n        resourceNode.setProperty( DictionaryConst.PROPERTY_HOST_NAME, hostName );\n        // update the default \"File\" type to \"HDFS File\"\n        resourceNode.setProperty( DictionaryConst.PROPERTY_TYPE, DictionaryConst.NODE_TYPE_FILE );\n\n        final String clusterName = hMeta.getClusterName( resource.getName() );\n        if ( StringUtils.isNotBlank( clusterName ) ) {\n          resourceNode.setProperty( DictionaryConst.PROPERTY_CLUSTER, clusterName );\n        }\n      }\n    }\n    return resourceNode;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/analyzer/HadoopFileInputExternalResourceConsumer.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans.analyzer;\n\nimport org.pentaho.big.data.kettle.plugins.hdfs.trans.HadoopFileInputMeta;\nimport org.pentaho.di.trans.steps.fileinput.text.TextFileInput;\nimport org.pentaho.metaverse.api.analyzer.kettle.step.BaseStepExternalResourceConsumer;\n\npublic class HadoopFileInputExternalResourceConsumer\n  extends BaseStepExternalResourceConsumer<TextFileInput, HadoopFileInputMeta> {\n\n  @Override\n  public Class<HadoopFileInputMeta> getMetaClass() {\n    return HadoopFileInputMeta.class;\n  }\n\n  @Override\n  public boolean isDataDriven( final HadoopFileInputMeta meta ) {\n    return meta.isAcceptingFilenames();\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/analyzer/HadoopFileInputStepAnalyzer.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans.analyzer;\n\nimport org.pentaho.big.data.kettle.plugins.hdfs.trans.HadoopFileInputMeta;\nimport org.pentaho.metaverse.api.IMetaverseNode;\nimport org.pentaho.metaverse.api.MetaverseAnalyzerException;\nimport org.pentaho.metaverse.api.analyzer.kettle.step.IClonableStepAnalyzer;\n\npublic class HadoopFileInputStepAnalyzer extends HadoopBaseStepAnalyzer<HadoopFileInputMeta> {\n\n  @Override\n  public Class<HadoopFileInputMeta> getMetaClass() {\n    return HadoopFileInputMeta.class;\n  }\n\n  @Override public boolean isOutput() {\n    return false;\n  }\n\n  @Override public boolean isInput() {\n    return true;\n  }\n\n  @Override\n  protected void customAnalyze( final HadoopFileInputMeta meta, final IMetaverseNode rootNode )\n    throws MetaverseAnalyzerException {\n    super.customAnalyze( meta, rootNode );\n    if ( meta.isAcceptingFilenames() ) {\n      rootNode.setProperty( \"fileNameStep\", meta.getAcceptingStepName() );\n      rootNode.setProperty( \"fileNameField\", meta.getAcceptingField() );\n      rootNode.setProperty( \"passingThruFields\", meta.inputFiles.passingThruFields );\n    }\n    rootNode.setProperty( \"fileType\", meta.content.fileType );\n    rootNode.setProperty( \"separator\", meta.content.separator );\n    rootNode.setProperty( \"enclosure\", meta.content.enclosure );\n    rootNode.setProperty( \"breakInEnclosureAllowed\", meta.content.breakInEnclosureAllowed );\n    rootNode.setProperty( \"escapeCharacter\", meta.content.escapeCharacter );\n    if ( meta.content.header ) {\n  
     rootNode.setProperty( \"nrHeaderLines\", meta.content.nrHeaderLines );\n    }\n    if ( meta.content.footer ) {\n      rootNode.setProperty( \"nrFooterLines\", meta.content.nrFooterLines );\n    }\n    if ( meta.content.lineWrapped ) {\n      rootNode.setProperty( \"nrWraps\", meta.content.nrWraps );\n    }\n    if ( meta.content.layoutPaged ) {\n      rootNode.setProperty( \"nrLinesPerPage\", meta.content.nrLinesPerPage );\n      rootNode.setProperty( \"nrLinesDocHeader\", meta.content.nrLinesDocHeader );\n    }\n    rootNode.setProperty( \"fileCompression\", meta.content.fileCompression );\n    rootNode.setProperty( \"noEmptyLines\", meta.content.noEmptyLines );\n    rootNode.setProperty( \"includeFilename\", meta.content.includeFilename );\n    if ( meta.content.includeFilename ) {\n      rootNode.setProperty( \"filenameField\", meta.content.filenameField );\n    }\n    rootNode.setProperty( \"includeRowNumber\", meta.content.includeRowNumber );\n    if ( meta.content.includeRowNumber ) {\n      rootNode.setProperty( \"rowNumberField\", meta.content.rowNumberField );\n      rootNode.setProperty( \"rowNumberByFile\", meta.content.rowNumberByFile );\n    }\n    rootNode.setProperty( \"fileFormat\", meta.content.fileFormat );\n    rootNode.setProperty( \"encoding\", meta.content.encoding );\n    rootNode.setProperty( \"rowLimit\", Long.toString( meta.content.rowLimit ) );\n    rootNode.setProperty( \"dateFormatLenient\", meta.content.dateFormatLenient );\n    rootNode.setProperty( \"dateFormatLocale\", meta.content.dateFormatLocale );\n    rootNode.setProperty( \"addFilenamesToResult\", meta.inputFiles.isaddresult );\n  }\n\n  @Override\n  public IClonableStepAnalyzer newInstance() {\n    return new HadoopFileInputStepAnalyzer();\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/analyzer/HadoopFileOutputExternalResourceConsumer.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans.analyzer;\n\nimport org.pentaho.big.data.kettle.plugins.hdfs.trans.HadoopFileOutputMeta;\nimport org.pentaho.di.trans.steps.fileinput.text.TextFileInput;\nimport org.pentaho.metaverse.api.analyzer.kettle.step.BaseStepExternalResourceConsumer;\n\npublic class HadoopFileOutputExternalResourceConsumer\n  extends BaseStepExternalResourceConsumer<TextFileInput, HadoopFileOutputMeta> {\n\n  @Override\n  public Class<HadoopFileOutputMeta> getMetaClass() {\n    return HadoopFileOutputMeta.class;\n  }\n\n  @Override\n  public boolean isDataDriven( final HadoopFileOutputMeta meta ) {\n    return meta.isFileNameInField();\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/analyzer/HadoopFileOutputStepAnalyzer.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans.analyzer;\n\nimport org.pentaho.big.data.kettle.plugins.hdfs.trans.HadoopFileOutputMeta;\nimport org.pentaho.metaverse.api.IMetaverseNode;\nimport org.pentaho.metaverse.api.MetaverseAnalyzerException;\nimport org.pentaho.metaverse.api.analyzer.kettle.step.IClonableStepAnalyzer;\n\npublic class HadoopFileOutputStepAnalyzer extends HadoopBaseStepAnalyzer<HadoopFileOutputMeta> {\n\n  @Override\n  public Class<HadoopFileOutputMeta> getMetaClass() {\n    return HadoopFileOutputMeta.class;\n  }\n\n  @Override public boolean isOutput() {\n    return true;\n  }\n\n  @Override public boolean isInput() {\n    return false;\n  }\n\n  @Override\n  protected void customAnalyze( final HadoopFileOutputMeta meta, final IMetaverseNode rootNode )\n    throws MetaverseAnalyzerException {\n    super.customAnalyze( meta, rootNode );\n    rootNode.setProperty( \"createParentFolder\", meta.isCreateParentFolder() );\n    rootNode.setProperty( \"doNotOpenNewFileInit\", meta.isDoNotOpenNewFileInit() );\n    if ( meta.isFileNameInField() ) {\n      rootNode.setProperty( \"fileNameField\", meta.getFileNameField() );\n    }\n    rootNode.setProperty( \"extension\", meta.getExtension() );\n    rootNode.setProperty( \"stepNrInFilename\", meta.isStepNrInFilename() );\n    rootNode.setProperty( \"partNrInFilename\", meta.isPartNrInFilename() );\n    rootNode.setProperty( \"dateInFilename\", meta.isDateInFilename() );\n    rootNode.setProperty( \"timeInFilename\", meta.isTimeInFilename() );\n    if ( meta.isSpecifyingFormat() ) {\n  
    rootNode.setProperty( \"dateTimeFormat\", meta.getDateTimeFormat() );\n    }\n    rootNode.setProperty( \"addFilenamesToResult\", meta.isAddToResultFiles() );\n    rootNode.setProperty( \"append\", meta.isFileAppended() );\n    rootNode.setProperty( \"separator\", meta.getSeparator() );\n    rootNode.setProperty( \"enclosure\", meta.getEnclosure() );\n    rootNode.setProperty( \"forceEnclosure\", meta.isEnclosureForced() );\n    rootNode.setProperty( \"addHeader\", meta.isHeaderEnabled() );\n    rootNode.setProperty( \"addFooter\", meta.isFooterEnabled() );\n    rootNode.setProperty( \"fileFormat\", meta.getFileFormat() );\n    rootNode.setProperty( \"fileCompression\", meta.getFileCompression() );\n    rootNode.setProperty( \"encoding\", meta.getEncoding() );\n    rootNode.setProperty( \"rightPadFields\", meta.isPadded() );\n    rootNode.setProperty( \"fastDataDump\", meta.isFastDump() );\n    rootNode.setProperty( \"splitEveryRows\", meta.getSplitEveryRows() );\n    rootNode.setProperty( \"endingLine\", meta.getEndedLine() );\n  }\n\n  @Override\n  public IClonableStepAnalyzer newInstance() {\n    return new HadoopFileOutputStepAnalyzer();\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/vfs/HadoopVfsConnection.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.vfs;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.di.core.Props;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.core.variables.VariableSpace;\n\n/**\n * @author Tatsiana_Kasiankova\n *\n */\npublic class HadoopVfsConnection {\n\n  private static final String COLON = \":\";\n\n  private static final String EMPTY = \"\";\n\n  private static final String SCHEME_NAME = \"hdfs\";\n\n  private String hostname;\n  private String port;\n  private String username;\n  private String password;\n\n  public HadoopVfsConnection( String ncHostname, String ncPort, String ncUsername, String ncPassword ) {\n    super();\n    this.hostname = ncHostname;\n    this.port = ncPort;\n    this.username = ncUsername;\n    this.password = ncPassword;\n  }\n\n  public HadoopVfsConnection() {\n    this( EMPTY, EMPTY, EMPTY, EMPTY );\n  }\n\n  public HadoopVfsConnection( NamedCluster nCluster, VariableSpace vs ) {\n    this( EMPTY, EMPTY, EMPTY, EMPTY );\n    loadNamedCluster( nCluster, vs );\n  }\n\n  /**\n   * Build an HDFS URL given a URL and Port provided by the user.\n   *\n   * @return a String containing the HDFS URL\n   */\n  public String getConnectionString( String schemeName ) {\n    if ( Schemes.MAPRFS_SCHEME.equals( schemeName ) ) {\n      return Schemes.MAPRFS_SCHEME.concat( \"://\" );\n    }\n    StringBuffer urlString =\n        new StringBuffer( !Utils.isEmpty( schemeName ) ? 
schemeName : SCHEME_NAME ).append( \"://\" );\n    if ( !Utils.isEmpty( getUsername() ) ) {\n      urlString.append( getUsername() ).append( COLON ).append( getPassword() ).append( \"@\" );\n    }\n\n    urlString.append( getHostname() );\n    if ( !Utils.isEmpty( getPort() ) ) {\n      urlString.append( COLON ).append( getPort() );\n    }\n    return urlString.toString();\n  }\n\n  private void loadNamedCluster( NamedCluster nCluster, VariableSpace vs ) {\n    if ( nCluster != null ) {\n      hostname = nCluster.getHdfsHost() != null ? nCluster.getHdfsHost() : EMPTY;\n      port = nCluster.getHdfsPort() != null ? nCluster.getHdfsPort() : EMPTY;\n      username = nCluster.getHdfsUsername() != null ? nCluster.getHdfsUsername() : EMPTY;\n      password = nCluster.getHdfsPassword() != null ? nCluster.decodePassword( nCluster.getHdfsPassword() ) : EMPTY;\n\n      hostname = vs.environmentSubstitute( hostname );\n      port = vs.environmentSubstitute( port );\n      username = vs.environmentSubstitute( username );\n      password = vs.environmentSubstitute( password );\n    }\n  }\n\n  public void setCustomParameters( Props pr ) {\n    pr.setCustomParameter( \"HadoopVfsFileChooserDialog.host\", getHostname() );\n    pr.setCustomParameter( \"HadoopVfsFileChooserDialog.port\", getPort() );\n    pr.setCustomParameter( \"HadoopVfsFileChooserDialog.user\", getUsername() );\n    pr.setCustomParameter( \"HadoopVfsFileChooserDialog.password\", getPassword() );\n  }\n\n  public String getHostname() {\n    return hostname;\n  }\n\n  public String getPort() {\n    return port;\n  }\n\n  public String getUsername() {\n    return username;\n  }\n\n  public String getPassword() {\n    return password;\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/vfs/HadoopVfsFileChooserDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.vfs;\n\nimport org.apache.commons.vfs2.FileName;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.layout.GridData;\nimport org.eclipse.swt.layout.GridLayout;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Group;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.big.data.plugins.common.ui.NamedClusterWidgetImpl;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.logging.LogChannel;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.vfs.ui.CustomVfsUiPanel;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\npublic class HadoopVfsFileChooserDialog extends CustomVfsUiPanel {\n\n  // for message resolution\n  private static final Class<?> PKG = HadoopVfsFileChooserDialog.class;\n\n  // for logging\n  private LogChannel log = new LogChannel( this );\n\n  // Default 
root file - used to avoid NPE when rootFile was not provided\n  // and the browser is resolved\n  FileObject defaultInitialFile = null;\n\n  // File objects to keep track of when the user selects the radio buttons\n  FileObject hadoopRootFile = null;\n  String hadoopOpenFromFolder = null;\n\n  FileObject rootFile = null;\n  FileObject initialFile = null;\n  VfsFileChooserDialog vfsFileChooserDialog = null;\n\n  String schemeName = \"hdfs\";\n\n  private NamedClusterWidgetImpl namedClusterWidget = null;\n  private String namedCluster = null;\n  private final NamedClusterService namedClusterService;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n\n  public HadoopVfsFileChooserDialog( String schemeName, String displayName, VfsFileChooserDialog vfsFileChooserDialog,\n                                     FileObject rootFile, FileObject initialFile,\n                                     NamedClusterService namedClusterService,\n                                     RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester ) {\n    super( schemeName, displayName, vfsFileChooserDialog, SWT.NONE );\n    this.schemeName = schemeName;\n    this.rootFile = rootFile;\n    this.initialFile = initialFile;\n    this.vfsFileChooserDialog = vfsFileChooserDialog;\n    this.namedClusterService = namedClusterService;\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.runtimeTester = runtimeTester;\n\n    // Create the Hadoop panel\n    GridData gridData = new GridData( SWT.FILL, SWT.CENTER, true, false );\n    setLayoutData( gridData );\n    setLayout( new GridLayout( 1, false ) );\n\n    createConnectionPanel();\n  }\n\n  private void createConnectionPanel() {\n    // The Connection group\n    Group connectionGroup = new Group( this, SWT.SHADOW_ETCHED_IN );\n    connectionGroup.setText( BaseMessages.getString( PKG, \"HadoopVfsFileChooserDialog.ConnectionGroup.Label\" ) ); 
//$NON-NLS-1$\n    GridLayout connectionGroupLayout = new GridLayout();\n    connectionGroupLayout.marginWidth = 5;\n    connectionGroupLayout.marginHeight = 5;\n    connectionGroupLayout.verticalSpacing = 5;\n    connectionGroupLayout.horizontalSpacing = 5;\n    GridData gData = new GridData( SWT.FILL, SWT.FILL, true, false );\n    connectionGroup.setLayoutData( gData );\n    connectionGroup.setLayout( connectionGroupLayout );\n\n    setNamedClusterWidget( new NamedClusterWidgetImpl( connectionGroup, true, namedClusterService, runtimeTestActionService, runtimeTester, false ) );\n    getNamedClusterWidget().addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent evt ) {\n        try {\n          connect();\n        } catch ( Exception e ) {\n          // To prevent errors from multiple event firings.\n        }\n      }\n    } );\n\n    // The composite we need in the group\n    Composite textFieldPanel = new Composite( connectionGroup, SWT.NONE );\n    GridData gridData = new GridData( SWT.FILL, SWT.FILL, true, false );\n    textFieldPanel.setLayoutData( gridData );\n    textFieldPanel.setLayout( new GridLayout( 5, false ) );\n  }\n\n  public void initializeConnectionPanel( FileObject file ) {\n    initialFile = file;\n    /*\n     * if ( initialFile != null && initialFile.getName().getScheme().equals( HadoopSpoonPlugin.HDFS_SCHEME ) ) { //TODO\n     * activate HDFS }\n     */\n  }\n\n  private void showMessageAndLog( String title, String message, String messageToLog ) {\n    MessageBox box = new MessageBox( this.getShell() );\n    box.setText( title );\n    box.setMessage( message );\n    log.logError( messageToLog );\n    box.open();\n  }\n\n  public VariableSpace getVariableSpace() {\n    if ( Spoon.getInstance().getActiveTransformation() != null ) {\n      return Spoon.getInstance().getActiveTransformation();\n    } else if ( Spoon.getInstance().getActiveJob() != null ) {\n      return 
Spoon.getInstance().getActiveJob();\n    } else {\n      return new Variables();\n    }\n  }\n\n  public NamedClusterWidgetImpl getNamedClusterWidget() {\n    return namedClusterWidget;\n  }\n\n  protected void setNamedClusterWidget( NamedClusterWidgetImpl namedClusterWidget ) {\n    this.namedClusterWidget = namedClusterWidget;\n  }\n\n  public void setNamedCluster( String namedCluster ) {\n    this.namedCluster = namedCluster;\n  }\n\n  public void activate() {\n    vfsFileChooserDialog.setRootFile( null );\n    vfsFileChooserDialog.setInitialFile( null );\n    vfsFileChooserDialog.openFileCombo.setText( \"hdfs://\" );\n    vfsFileChooserDialog.vfsBrowser.fileSystemTree.removeAll();\n    getNamedClusterWidget().initiate();\n    getNamedClusterWidget().setSelectedNamedCluster( namedCluster );\n    super.activate();\n  }\n\n  public void connect() {\n    NamedCluster nc = getNamedClusterWidget().getSelectedNamedCluster();\n    if ( nc == null ) {\n      // No named cluster has been selected yet, so there is nothing to connect to.\n      return;\n    }\n    // The Named Cluster may be hdfs, maprfs or wasb.  We need to detect it here since the named\n    // cluster was just selected.\n    schemeName = \"wasb\".equals( nc.getStorageScheme() ) ? 
\"wasb\" : \"hdfs\";\n\n    FileObject root = rootFile;\n    try {\n      Spoon spoon = Spoon.getInstance();\n      root = KettleVFS.getInstance( spoon.getExecutionBowl() )\n        .getFileObject( nc.processURLsubstitution( FileName.ROOT_PATH, Spoon.getInstance().getMetaStore(),\n           getVariableSpace() ) );\n    } catch ( KettleFileException exc ) {\n      showMessageAndLog( BaseMessages.getString( PKG, \"HadoopVfsFileChooserDialog.error\" ), BaseMessages.getString( PKG,\n        \"HadoopVfsFileChooserDialog.Connection.error\" ), exc.getMessage() );\n    }\n\n    vfsFileChooserDialog.setRootFile( root );\n    vfsFileChooserDialog.setSelectedFile( root );\n    rootFile = root;\n  }\n\n  public FileObject resolveFile( String fileUri ) throws FileSystemException {\n    Spoon spoon = Spoon.getInstance();\n    try {\n      return KettleVFS.getInstance( spoon.getExecutionBowl() )\n        .getFileObject( fileUri, getVariableSpace(), getFileSystemOptions() );\n    } catch ( KettleFileException e ) {\n      throw new FileSystemException( e );\n    }\n  }\n\n  public FileObject resolveFile( String fileUri, FileSystemOptions opts ) throws FileSystemException {\n    Spoon spoon = Spoon.getInstance();\n    try {\n      return KettleVFS.getInstance( spoon.getExecutionBowl() ).getFileObject( fileUri, getVariableSpace(), opts );\n    } catch ( KettleFileException e ) {\n      throw new FileSystemException( e );\n    }\n  }\n\n  protected FileSystemOptions getFileSystemOptions() throws FileSystemException {\n    FileSystemOptions opts = new FileSystemOptions();\n    return opts;\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/vfs/MapRFSFileChooserDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.vfs;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.eclipse.swt.SWT;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.vfs.ui.CustomVfsUiPanel;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\npublic class MapRFSFileChooserDialog extends CustomVfsUiPanel {\n\n  private VfsFileChooserDialog vfsFileChooserDialog;\n\n  public MapRFSFileChooserDialog( String schemeName, String displayName, VfsFileChooserDialog vfsFileChooserDialog ) {\n    super( schemeName, displayName, vfsFileChooserDialog, SWT.NONE );\n    this.vfsFileChooserDialog = vfsFileChooserDialog;\n  }\n\n  public void activate() {\n    vfsFileChooserDialog.setRootFile( null );\n    vfsFileChooserDialog.setInitialFile( null );\n    vfsFileChooserDialog.openFileCombo.setText( \"maprfs://\" );\n    vfsFileChooserDialog.vfsBrowser.fileSystemTree.removeAll();\n    super.activate();\n\n    try {\n      FileObject newRoot = resolveFile( vfsFileChooserDialog.openFileCombo.getText() );\n      vfsFileChooserDialog.vfsBrowser.resetVfsRoot( newRoot );\n    } catch ( FileSystemException ignored ) {\n      //ignored\n    }\n  }\n\n  public FileObject resolveFile( String fileUri ) throws FileSystemException {\n    Spoon spoon = 
Spoon.getInstance();\n    try {\n      return KettleVFS.getInstance( spoon.getExecutionBowl() )\n        .getFileObject( fileUri, getVariableSpace(), getFileSystemOptions() );\n    } catch ( KettleFileException e ) {\n      throw new FileSystemException( e );\n    }\n  }\n\n  public FileObject resolveFile( String fileUri, FileSystemOptions opts ) throws FileSystemException {\n    Spoon spoon = Spoon.getInstance();\n    try {\n      return KettleVFS.getInstance( spoon.getExecutionBowl() ).getFileObject( fileUri, getVariableSpace(), opts );\n    } catch ( KettleFileException e ) {\n      throw new FileSystemException( e );\n    }\n  }\n\n  protected FileSystemOptions getFileSystemOptions() throws FileSystemException {\n    FileSystemOptions opts = new FileSystemOptions();\n    return opts;\n  }\n\n  private VariableSpace getVariableSpace() {\n    if ( Spoon.getInstance().getActiveTransformation() != null ) {\n      return Spoon.getInstance().getActiveTransformation();\n    } else if ( Spoon.getInstance().getActiveJob() != null ) {\n      return Spoon.getInstance().getActiveJob();\n    } else {\n      return new Variables();\n    }\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/vfs/NamedClusterVfsFileChooserDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.vfs;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.layout.GridData;\nimport org.eclipse.swt.layout.GridLayout;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Group;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.big.data.plugins.common.ui.NamedClusterWidgetImpl;\nimport org.pentaho.di.core.Props;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.logging.LogChannel;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.vfs.ui.CustomVfsUiPanel;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\npublic class NamedClusterVfsFileChooserDialog extends CustomVfsUiPanel {\n\n  // for message resolution\n  private static final Class<?> PKG = NamedClusterVfsFileChooserDialog.class;\n\n  // for logging\n  private LogChannel log = new LogChannel( this );\n\n  // 
Default root file - used to avoid NPE when rootFile was not provided\n  // and the browser is resolved\n  FileObject defaultInitialFile = null;\n\n  // File objects to keep track of when the user selects the radio buttons\n  FileObject hadoopRootFile = null;\n  String hadoopOpenFromFolder = null;\n\n  FileObject rootFile = null;\n  FileObject initialFile = null;\n  VfsFileChooserDialog vfsFileChooserDialog = null;\n\n  String schemeName = Schemes.NAMED_CLUSTER_SCHEME;\n\n  private NamedClusterWidgetImpl namedClusterWidget = null;\n  private String namedCluster = null;\n  private final NamedClusterService namedClusterService;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n\n  public NamedClusterVfsFileChooserDialog( String schemeName, String displayName,\n                                           VfsFileChooserDialog vfsFileChooserDialog,\n                                           FileObject rootFile, FileObject initialFile,\n                                           NamedClusterService namedClusterService,\n                                           RuntimeTestActionService runtimeTestActionService,\n                                           RuntimeTester runtimeTester ) {\n    super( schemeName, displayName, vfsFileChooserDialog, SWT.NONE );\n    this.schemeName = schemeName;\n    this.rootFile = rootFile;\n    this.initialFile = initialFile;\n    this.vfsFileChooserDialog = vfsFileChooserDialog;\n    this.namedClusterService = namedClusterService;\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.runtimeTester = runtimeTester;\n\n    // Create the Hadoop panel\n    GridData gridData = new GridData( SWT.FILL, SWT.CENTER, true, false );\n    setLayoutData( gridData );\n    setLayout( new GridLayout( 1, false ) );\n\n    createConnectionPanel();\n  }\n\n  private void createConnectionPanel() {\n    // The Connection group\n    Group connectionGroup = new Group( this, 
SWT.SHADOW_ETCHED_IN );\n    connectionGroup\n      .setText( BaseMessages.getString( PKG, \"HadoopVfsFileChooserDialog.ConnectionGroup.Label\" ) ); //$NON-NLS-1$ ;\n    GridLayout connectionGroupLayout = new GridLayout();\n    connectionGroupLayout.marginWidth = 5;\n    connectionGroupLayout.marginHeight = 5;\n    connectionGroupLayout.verticalSpacing = 5;\n    connectionGroupLayout.horizontalSpacing = 5;\n    GridData gData = new GridData( SWT.FILL, SWT.FILL, true, false );\n    connectionGroup.setLayoutData( gData );\n    connectionGroup.setLayout( connectionGroupLayout );\n\n    setNamedClusterWidget(\n      new NamedClusterWidgetImpl( connectionGroup, true, namedClusterService, runtimeTestActionService,\n        runtimeTester, false ) );\n    getNamedClusterWidget().addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent evt ) {\n        try {\n          connect();\n        } catch ( Exception e ) {\n          // To prevent errors from multiple event firings.\n          log.logDebug( e.getMessage() );\n        }\n      }\n    } );\n\n    // The composite we need in the group\n    Composite textFieldPanel = new Composite( connectionGroup, SWT.NONE );\n    GridData gridData = new GridData( SWT.FILL, SWT.FILL, true, false );\n    textFieldPanel.setLayoutData( gridData );\n    textFieldPanel.setLayout( new GridLayout( 5, false ) );\n  }\n\n  public void initializeConnectionPanel( FileObject file ) {\n    initialFile = file;\n    /*\n     * if ( initialFile != null && initialFile.getName().getScheme().equals( HadoopSpoonPlugin.HDFS_SCHEME ) ) { //TODO\n     * activate HDFS }\n     */\n  }\n\n  private void showMessageAndLog( String title, String message, String messageToLog ) {\n    MessageBox box = new MessageBox( this.getShell() );\n    box.setText( title ); // $NON-NLS-1$\n    box.setMessage( message );\n    log.logError( messageToLog );\n    box.open();\n  }\n\n  public VariableSpace getVariableSpace() 
{\n    if ( Spoon.getInstance().getActiveTransformation() != null ) {\n      return Spoon.getInstance().getActiveTransformation();\n    } else if ( Spoon.getInstance().getActiveJob() != null ) {\n      return Spoon.getInstance().getActiveJob();\n    } else {\n      return new Variables();\n    }\n  }\n\n  public NamedClusterWidgetImpl getNamedClusterWidget() {\n    return namedClusterWidget;\n  }\n\n  protected void setNamedClusterWidget( NamedClusterWidgetImpl namedClusterWidget ) {\n    this.namedClusterWidget = namedClusterWidget;\n  }\n\n  public void setNamedCluster( String namedCluster ) {\n    this.namedCluster = namedCluster;\n  }\n\n  @Override\n  public void activate() {\n    vfsFileChooserDialog.setRootFile( null );\n    vfsFileChooserDialog.setInitialFile( null );\n    vfsFileChooserDialog.openFileCombo.setText(  Schemes.NAMED_CLUSTER_SCHEME + \"://\" );\n    vfsFileChooserDialog.vfsBrowser.fileSystemTree.removeAll();\n    getNamedClusterWidget().initiate();\n    getNamedClusterWidget().setSelectedNamedCluster( namedCluster );\n    super.activate();\n  }\n\n  public void connect() {\n    NamedCluster nc = getNamedClusterWidget().getSelectedNamedCluster();\n    HadoopVfsConnection hdfsConnection = new HadoopVfsConnection( nc, getVariableSpace() );\n    hdfsConnection.setCustomParameters( Props.getInstance() );\n    // The Named Cluster may be hdfs, maprfs or wasb.  We need to detect it here since the named\n    // cluster was just selected.\n    //schemeName = \"wasb\".equals( nc.getStorageScheme() ) ? 
\"wasb\" : \"hdfs\";\n    String connectionString = Schemes.NAMED_CLUSTER_SCHEME + \"://\" + nc.getName();\n    FileSystemOptions fsoptions = new FileSystemOptions();\n    FileObject root = rootFile;\n    try {\n      Spoon spoon = Spoon.getInstance();\n      root = KettleVFS.getInstance( spoon.getExecutionBowl() ).getFileObject( connectionString, fsoptions );\n    } catch ( KettleFileException exc ) {\n      showMessageAndLog( BaseMessages.getString( PKG, \"HadoopVfsFileChooserDialog.error\" ), BaseMessages.getString( PKG,\n        \"HadoopVfsFileChooserDialog.Connection.error\" ), exc.getMessage() );\n    }\n    vfsFileChooserDialog.setRootFile( root );\n    vfsFileChooserDialog.setSelectedFile( root );\n    rootFile = root;\n  }\n\n  /**\n   * Resolves the file using <b>new</b> FileSystemOptions.\n   */\n  @Override\n  public FileObject resolveFile( String fileUri ) throws FileSystemException {\n    try {\n      Spoon spoon = Spoon.getInstance();\n      // should we use a new instance of FileSystemOptions? should it be deprecated?\n      return KettleVFS.getInstance( spoon.getExecutionBowl() )\n        .getFileObject( fileUri, getVariableSpace(), getFileSystemOptions() );\n    } catch ( KettleFileException e ) {\n      throw new FileSystemException( e );\n    }\n  }\n\n  @Override\n  public FileObject resolveFile( String fileUri, FileSystemOptions opts ) throws FileSystemException {\n    try {\n      Spoon spoon = Spoon.getInstance();\n      return KettleVFS.getInstance( spoon.getExecutionBowl() ).getFileObject( fileUri, getVariableSpace(), opts );\n    } catch ( KettleFileException e ) {\n      throw new FileSystemException( e );\n    }\n  }\n\n  /**\n   * @return <b>new</b> FileSystemOptions\n   * @throws FileSystemException\n   */\n  protected FileSystemOptions getFileSystemOptions() throws FileSystemException {\n    FileSystemOptions opts = new FileSystemOptions();\n    return opts;\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/java/org/pentaho/big/data/kettle/plugins/hdfs/vfs/Schemes.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.vfs;\n\n/**\n * Created by bryan on 11/23/15.\n */\npublic class Schemes {\n  public static final String HDFS_SCHEME = \"hdfs\";\n  public static final String HDFS_SCHEME_DISPLAY_NAME = \"HDFS\";\n  public static final String MAPRFS_SCHEME = \"maprfs\";\n  public static final String MAPRFS_SCHEME_DISPLAY_NAME = \"MapRFS\";\n  public static final String NAMED_CLUSTER_SCHEME = \"hc\";\n  public static final String NAMED_CLUSTER_SCHEME_DISPLAY_NAME = \"Hadoop Cluster\";\n  public static final String S3_SCHEME = \"s3\";\n  public static final String S3N_SCHEME = \"s3n\";\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/resources/OSGI-INF/blueprint/blueprint.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\n           xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n           xmlns:pen=\"http://www.pentaho.com/xml/schemas/pentaho-blueprint\"\n           xsi:schemaLocation=\"\n            http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\n            http://www.pentaho.com/xml/schemas/pentaho-blueprint http://www.pentaho.com/xml/schemas/pentaho-blueprint.xsd\">\n\n  <bean id=\"jobEntryHadoopCopyFiles\" class=\"org.pentaho.big.data.kettle.plugins.hdfs.job.JobEntryHadoopCopyFiles\" scope=\"prototype\">\n    <argument ref=\"namedClusterService\"/>\n    <argument ref=\"runtimeTestActionService\"/>\n    <argument ref=\"runtimeTester\"/>\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.JobEntryPluginType\"/>\n  </bean>\n  <bean id=\"hadoopFileInputMeta\" class=\"org.pentaho.big.data.kettle.plugins.hdfs.trans.HadoopFileInputMeta\" scope=\"prototype\">\n    <argument ref=\"namedClusterService\"/>\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.StepPluginType\"/>\n  </bean>\n  <bean id=\"hadoopFileOutputMeta\" class=\"org.pentaho.big.data.kettle.plugins.hdfs.trans.HadoopFileOutputMeta\" scope=\"prototype\">\n    <argument ref=\"namedClusterService\"/>\n    <argument ref=\"runtimeTestActionService\"/>\n    <argument ref=\"runtimeTester\"/>\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.StepPluginType\"/>\n  </bean>\n  <bean id=\"hdfsLifecycleListener\" class=\"org.pentaho.big.data.kettle.plugins.hdfs.HdfsLifecycleListener\">\n    <argument ref=\"namedClusterService\"/>\n    <argument ref=\"runtimeTestActionService\"/>\n    <argument ref=\"runtimeTester\"/>\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.LifecyclePluginType\"/>\n  </bean>\n\n  <reference id=\"namedClusterService\" interface=\"org.pentaho.hadoop.shim.api.cluster.NamedClusterService\"/>\n  <reference 
id=\"runtimeTester\" interface=\"org.pentaho.runtime.test.RuntimeTester\"/>\n  <reference id=\"runtimeTestActionService\" interface=\"org.pentaho.runtime.test.action.RuntimeTestActionService\"/>\n\n\n  <bean id=\"MetaverseGraphImpl\" class=\"org.pentaho.metaverse.api.model.BaseSynchronizedGraphFactory\" factory-method=\"open\">\n    <argument>\n      <map>\n        <entry key=\"blueprints.graph\" value=\"com.tinkerpop.blueprints.impls.tg.TinkerGraph\"/>\n      </map>\n    </argument>\n  </bean>\n\n  <bean id=\"IMetaverseBuilder\" class=\"org.pentaho.metaverse.api.model.BaseMetaverseBuilder\" scope=\"singleton\">\n    <argument ref=\"MetaverseGraphImpl\"/>\n  </bean>\n\n  <!-- HadoopFileInput External Resource Consumer -->\n  <bean id=\"hadoopFileInputERC\" scope=\"singleton\"\n        class=\"org.pentaho.big.data.kettle.plugins.hdfs.trans.analyzer.HadoopFileInputExternalResourceConsumer\"/>\n\n  <!-- HadoopFileOutput External Resource Consumer -->\n  <bean id=\"hadoopFileOutputERC\" scope=\"singleton\"\n        class=\"org.pentaho.big.data.kettle.plugins.hdfs.trans.analyzer.HadoopFileOutputExternalResourceConsumer\"/>\n\n  <!-- HadoopFileInput Step Analyzer -->\n  <bean id=\"HadoopFileInputStepAnalyzer\" class=\"org.pentaho.big.data.kettle.plugins.hdfs.trans.analyzer.HadoopFileInputStepAnalyzer\">\n    <property name=\"externalResourceConsumer\" ref=\"hadoopFileInputERC\"/>\n  </bean>\n  <service id=\"hadoopFileInputStepAnalyzerService\"\n           interface=\"org.pentaho.metaverse.api.analyzer.kettle.step.IStepAnalyzer\"\n           ref=\"HadoopFileInputStepAnalyzer\"/>\n\n  <!-- HadoopFileOutput Step Analyzer -->\n  <bean id=\"HadoopFileOutputStepAnalyzer\" class=\"org.pentaho.big.data.kettle.plugins.hdfs.trans.analyzer.HadoopFileOutputStepAnalyzer\">\n    <property name=\"externalResourceConsumer\" ref=\"hadoopFileOutputERC\"/>\n  </bean>\n  <service id=\"hadoopFileOutputStepAnalyzerService\"\n           
interface=\"org.pentaho.metaverse.api.analyzer.kettle.step.IStepAnalyzer\"\n           ref=\"HadoopFileOutputStepAnalyzer\"/>\n</blueprint>\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/resources/graph.properties",
    "content": "blueprints.graph=com.tinkerpop.blueprints.impls.tg.TinkerGraph\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/resources/org/pentaho/big/data/kettle/plugins/hdfs/job/messages/messages_en_US.properties",
    "content": "\nHadoopCopyFilesPlugin.Name=Hadoop copy files\nHadoopCopyFilesPlugin.Description=Copy files to and from HDFS \n\nJobCopyFiles.Browse.Label=Browse...\nJobHadoopCopyFiles.Log.ArgFromPrevious.Found=found [{0}] argument(s) from previous result\nJobHadoopCopyFiles.Tab.General.Label=General\nJobHadoopCopyFiles.BrowseFiles.Label=File...\nJobHadoopCopyFiles.Error.Exception.CopyProcessError=There was an error copying file [{0}] to [{1}] \\: [{2}]\nJobHadoopCopyFiles.Fields.Label=Files/Folders\\: \nJobHadoopCopyFiles.Filetype.All=All files\nJobHadoopCopyFiles.SourceFileFolder.Tooltip=Enter here the file or folder to copy\\n If it's a folder, PDI will fetch only if ''Include subfolders'' is checked\\!\nJobHadoopCopyFiles.Fields.Wildcard.Tooltip=Specify here the regular expressions wildcard to match.\\n Only files that match the wildcard will be copied.\nJobHadoopCopyFiles.DestinationFileFolder.Tooltip=Enter here the destination folder to hit.\\n If you selected file as source,you can define a file as destination.\nJobHadoopCopyFiles.Error.DestinationFolderNotFound=Destination folder does not exist\\!\nJobHadoopCopyFiles.Log.FileExistsInfos=File exists\\!\nJobHadoopCopyFiles.Log.FileOverwrite=File [{0}] was overwritten\nJobHadoopCopyFiles.DestinationIsAFile.Tooltip=PDI will consider that destination is a file.\nJobHadoopCopyFiles.Error.Exception.CopyProcess=Can not copy file/folder [{0}] to [{1}]. 
Exception \\: [{2}]\nJobHadoopCopyFiles.FilenameDelete.Button=&Delete\nJobHadoopCopyFiles.Log.FolderCopied=Folder [{0}] was copied to [{1}]\nJobHadoopCopyFiles.Fields.DestinationFileFolder.Label=File/Folder destination\nJobHadoopCopyFiles.Log.FileRemoved=File [{0}] was deleted\nJobHadoopCopyFiles.Error.CanNotRemoveFile=Can not delete file\nJobHadoopCopyFiles.Log.FileCopied=File [{0}] was copied to [{1}] \nJobHadoopCopyFiles.FilenameAdd.Button=&Add\nJobHadoopCopyFiles.Title=Hadoop copy files\nJobHadoopCopyFiles.Log.FetchFolder=Fetching \\: [{0}]\nJobHadoopCopyFiles.Error.SourceFileNotExists=File/folder [{0}] does not exist\\!\nJobHadoopCopyFiles.FilenameEdit.Button=&Edit\nJobHadoopCopyFiles.Log.FolderExistsInfos=Folder exists\\!\nJobHadoopCopyFiles.RemoveSourceFiles.Tooltip=Remove source files after copy process\\nOnly files will be removed.\nJobHadoopCopyFiles.CreateDestinationFolder.Label=Create destination folder\nJobHadoopCopyFiles.Log.ProcessingRow=Processing row source File/folder source \\: [{0}] ... destination file/folder \\: [{1}]... 
wildcard \\: [{2}]\nJobHadoopCopyFiles.Log.FileOverwriteInfos=File\nJobHadoopCopyFiles.Log.FileFolderRemoved=File/folder [{0}] was deleted\nJobHadoopCopyFiles.Log.CanNotCopyFolderToFile=Can not copy folder [{0}] to file [{1}]\nJobHadoopCopyFiles.IncludeSubfolders.Label=Include Subfolders\nJobHadoopCopyFiles.Log.Forbidden=FORBIDDEN\nJobHadoopCopyFiles.AddFileToResult.Label=Add files to result files name\nJobHadoopCopyFiles.FilenameEdit.Tooltip=Edit selected files\nJobHadoopCopyFiles.Log.Starting=Starting ...\nJobHadoopCopyFiles.Log.FolderOverwriteInfos=Folder\nJobHadoopCopyFiles.Log.FileRemovedInfos=file deleted\nJobHadoopCopyFiles.CopyEmptyFolders.Label=Copy empty folders\nJobHadoopCopyFiles.DestinationIsAFile.Label=Destination is a file\nJobHadoopCopyFiles.RemoveSourceFiles.Label=Remove source files\nJobHadoopCopyFiles.Fields.Wildcard.Label=Wildcard (RegExp)\nJobHadoopCopyFiles.OverwriteFiles.Tooltip=If the destination file exists, check this option to replace it.\\nOtherwise, PDI will ignore it.\nJobHadoopCopyFiles.Log.FileAddedToResultFilesName=File [{0}] was added to result filesname\nJobHadoopCopyFiles.Log.FileCopiedInfos=File copied\nJobHadoopCopyFiles.Error.Exception.CanRemoveFileFolder=Can not delete file/folder [{0}]\nJobHadoopCopyFiles.CopyEmptyFolders.Tooltip=Copy empty folders\\n Will work only when no wildcard was specified and ''Include subfolders'' is checked\\!\nJobHadoopCopyFiles.FileResult.Group.Label=Result files name\nJobHadoopCopyFiles.Error.Exception.UnableSaveRep=Unable to save job entry of type ''copyfiles'' to the repository for id_job\\=\nJobHadoopCopyFiles.Log.Error=Error\nJobHadoopCopyFiles.FilenameDelete.Tooltip=Remove selected files from the grid\nJobHadoopCopyFiles.OverwriteFiles.Label=Replace existing files\nJobHadoopCopyFiles.Log.FolderOverwrite=Folder [{0}] was overwritten\nJobHadoopCopyFiles.CreateDestinationFolder.Tooltip=Create destination folder if necessary.\\nIf destination is a file, parent folder will be 
created if necessary.\nJobHadoopCopyFiles.Wildcard.Tooltip=Specify here the regular expressions wildcard to match.\\n Only files that match the wildcard will be copied.\nJobHadoopCopyFiles.Fields.DestinationFileFolder.Tooltip=Enter here the destination folder to hit.\\n If you selected file as source,you can define a file as destination.\nJobHadoopCopyFiles.Previous.Tooltip=Check this to pass the results of the previous entry to the arguments of this entry.\\nBe careful, arguments must be in the same order that arguments\\!\\n ie \\: (1) source folder/file, (2) destination folder/file, (3) wildcard\nJobHadoopCopyFiles.Settings.Label=Settings\nJobHadoopCopyFiles.Log.FolderCopiedInfos=Folder copied\nJobHadoopCopyFiles.Name.Label=Entry name\\: \nJobHadoopCopyFiles.Error.Exception.UnableLoadXML=Unable to load job entry of type ''copyfiles'' from XML node\nJobHadoopCopyFiles.Name.Default=HDFS Copy files\nJobHadoopCopyFiles.Log.FolderExists=Folder [{0}] exists\\!\nJobHadoopCopyFiles.Previous.Label=Copy previous results to args\nJobHadoopCopyFiles.Log.FileExists=file [{0}] exists\\!\nJobHadoopCopyFiles.Tab.AddResultFilesName.Label=Result files name\nJobHadoopCopyFiles.Error.Exception.UnableLoadRep=Unable to load job entry of type ''copyfiles'' from the repository for id_jobentry\\=\nJobHadoopCopyFiles.Fields.SourceFileFolder.Tooltip=Enter here the file or folder to copy\\n If it's a folder, PDI will fetch only if ''Include subfolders'' is checked\\!\nJobHadoopCopyFiles.Log.IgnoringRow=Ignoring row with source or destination is NULL. 
Source File/folder source \\: [{0}], destination file/folder \\: [{1}], wildcard \\: [{2}]\nJobHadoopCopyFiles.DestinationFileFolder.Label=File/Folder destination\nJobHadoopCopyFiles.Log.FileFolderRemovedInfos=File/folder deleted\nJobHadoopCopyFiles.BrowseFolders.Label=Folder...\nJobHadoopCopyFiles.Log.ResultFilesName=Result filesname\nJobHadoopCopyFiles.IncludeSubfolders.Tooltip=Check this if you want to also fetch subfolders\\nThis option will work only when the source is a folder.\nJobHadoopCopyFiles.Wildcard.Label=Wildcard (RegExp)\nJobHadoopCopyFiles.SourceFileFolder.Label=File/Folder source\nJobHadoopCopyFiles.AddFileToResult.Tooltip=Add destination files to result files name.\\nIt is helpful if you want to attach these files to an email using the send mail job entry.\nJobHadoopCopyFiles.Fields.SourceFileFolder.Label=File/Folder source\nJobHadoopCopyFiles.Connection.Error.title=Unable to Connect\nJobHadoopCopyFiles.Connection.error=You don''t seem to be getting a connection to the Hadoop Cluster.  Check the cluster configuration you''re using."
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/resources/org/pentaho/big/data/kettle/plugins/hdfs/job/messages/messages_ko_KR.properties",
    "content": "\nJobCopyFiles.Browse.Label=\\uCC3E\\uC544\\uBCF4\\uAE30...\n\nJobHadoopCopyFiles.AddFileToResult.Label=\\uD30C\\uC77C\\uC744 \\uACB0\\uACFC \\uD30C\\uC77C \\uC774\\uB984\\uC5D0 \\uCD94\\uAC00\nJobHadoopCopyFiles.BrowseFiles.Label    =\\uD30C\\uC77C...\nJobHadoopCopyFiles.BrowseFolders.Label  =\\uD3F4\\uB354...\nJobHadoopCopyFiles.CopyEmptyFolders.Label=\\uBE48 \\uD3F4\\uB354 \\uBCF5\\uC0AC\nJobHadoopCopyFiles.CreateDestinationFolder.Label=\\uB300\\uC0C1 \\uD3F4\\uB354 \\uC0DD\\uC131\nJobHadoopCopyFiles.DestinationFileFolder.Label=\\uB300\\uC0C1 \\uD30C\\uC77C/\\uD3F4\\uB354\nJobHadoopCopyFiles.DestinationIsAFile.Label=\\uB300\\uC0C1\\uC740 \\uD30C\\uC77C\nJobHadoopCopyFiles.Error.CanNotRemoveFile=\\uD30C\\uC77C\\uC744 \\uC0AD\\uC81C\\uD560 \\uC218 \\uC5C6\\uC2B5\\uB2C8\\uB2E4\nJobHadoopCopyFiles.Fields.DestinationFileFolder.Label=\\uB300\\uC0C1 \\uD30C\\uC77C/\\uD3F4\\uB354\nJobHadoopCopyFiles.Fields.Label         =\\uD30C\\uC77C/\\uD3F4\\uB354:\nJobHadoopCopyFiles.Fields.SourceFileFolder.Label=\\uD30C\\uC77C/\\uD3F4\\uB354 \\uC18C\\uC2A4\nJobHadoopCopyFiles.Fields.Wildcard.Label=\\uC640\\uC77C\\uB4DC\\uCE74\\uB4DC (\\uC815\\uADDC\\uD45C\\uD604\\uC2DD)\nJobHadoopCopyFiles.FileResult.Group.Label=\\uACB0\\uACFC \\uD30C\\uC77C \\uC774\\uB984\nJobHadoopCopyFiles.FilenameAdd.Button   =\\uCD94\\uAC00(&A)\nJobHadoopCopyFiles.FilenameDelete.Button=\\uC0AD\\uC81C(&D)\nJobHadoopCopyFiles.FilenameEdit.Button  =\\uD3B8\\uC9D1(&E)\nJobHadoopCopyFiles.Filetype.All         =\\uBAA8\\uB4E0 \\uD30C\\uC77C\nJobHadoopCopyFiles.Log.Error            =\\uC624\\uB958\nJobHadoopCopyFiles.Log.Starting         =\\uC2DC\\uC791 ...\nJobHadoopCopyFiles.Name.Label           =Job \\uC5D4\\uD2B8\\uB9AC \\uC774\\uB984: \nJobHadoopCopyFiles.Settings.Label       =\\uC124\\uC815\nJobHadoopCopyFiles.SourceFileFolder.Label=\\uD30C\\uC77C/\\uD3F4\\uB354 \\uC18C\\uC2A4\nJobHadoopCopyFiles.Tab.General.Label    =\\uC77C\\uBC18\nJobHadoopCopyFiles.Wildcard.Label       
=\\uC640\\uC77C\\uB4DC\\uCE74\\uB4DC (\\uC815\\uADDC\\uD45C\\uD604\\uC2DD)\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/resources/org/pentaho/big/data/kettle/plugins/hdfs/trans/messages/messages_en_US.properties",
    "content": "HadoopFileOutputPlugin.Name=Hadoop file output\nHadoopFileOutputPlugin.Description=Create files in an HDFS location \n\nHadoopFileOutputDialog.DialogTitle=Hadoop file output\nHadoopFileOutput.MethodNotSupportedException.Message=Method not supported\nHadoopFileOutputDialog.Filename.Label=Folder/File\n\nHadoopFileOutputDialog.Connection.Error.title=Unable to Connect\nHadoopFileOutputDialog.Connection.error=You don''t seem to be getting a connection to the Hadoop Cluster.  Check the cluster configuration you''re using.\n\nHadoopFileInputPlugin.Name=Hadoop file input\nHadoopFileInputPlugin.Description=Process files from an HDFS location\n\nHadoopFileInputDialog.DialogTitle=Hadoop file input\nHadoopFileInputDialog.Environment=Environment\nHadoopFileInputDialog.FileFolderColumn.Column=File/Folder\n\nHadoopFileInputDialog.Connection.Error.title=Unable to Connect\nHadoopFileInputDialog.Connection.error=You don''t seem to be getting a connection to the Hadoop Cluster.  Check the cluster configuration you''re using.\n\n#File Tab\nHadoopFileInput.Injection.ENVIRONMENT=The environment of the selected file/folder.\nHadoopFileOutput.Injection.FILENAME=The name of the file to write to.\nHadoopFileOutput.Injection.CREATE_PARENT_FOLDER=This option indicates whether a parent folder should be created for the file when it''s created.\nHadoopFileOutput.Injection.DO_NOT_CREATE_FILE_AT_STARTUP=This option will not write empty files if no rows are processed.\nHadoopFileOutput.Injection.FILENAME_IN_FIELD=This option allows you to specify the file name(s) in a field in the input stream.\nHadoopFileOutput.Injection.FILENAME_FIELD=When \"Accept File name from field?\" is enabled, this option lets you specify the field that contains the file name(s).\nHadoopFileOutput.Injection.EXTENSION=This option allows you to specify the extension at the end of the file name (.txt).\nHadoopFileOutput.Injection.INC_STEPNR_IN_FILENAME=This option will include the copy number before the 
extension if you run multiple copies of the step (_0).\nHadoopFileOutput.Injection.INC_PARTNR_IN_FILENAME=This option will include the data partition number in the file name.\nHadoopFileOutput.Injection.INC_DATE_IN_FILENAME=This option will include the system date in the file name.\nHadoopFileOutput.Injection.INC_TIME_IN_FILENAME=This option will include the system time in the file name.\nHadoopFileOutput.Injection.SPECIFY_DATE_FORMAT=This option will allow you to specify the date and time format for the file name.\nHadoopFileOutput.Injection.DATE_FORMAT=Specify which date & time format you want to go into each file name.\nHadoopFileOutput.Injection.ADD_TO_RESULT=This option will let you add the file name to the internal file result set.\n\n#Content Tab\nHadoopFileOutput.Injection.APPEND=This option will cause lines to be appended to the specified file.\nHadoopFileOutput.Injection.SEPARATOR=This option will let you specify the character that will separate the fields in a single line of text.\nHadoopFileOutput.Injection.ENCLOSURE=This is an optional property that will let you specify the character that will enclose fields.\nHadoopFileOutput.Injection.FORCE_ENCLOSURE=This option forces all field names to be enclosed with the character specified in the Enclosure property.\nHadoopFileOutput.Injection.DISABLE_ENCLOSURE_FIX=This option enables backwards compatibility for a bug fix around Date formats that affected enclosures.\nHadoopFileOutput.Injection.HEADER=This option will include a header row in the text file.\nHadoopFileOutput.Injection.FOOTER=This option will include a footer row in the text file.\nHadoopFileOutput.Injection.FORMAT=Specify the line terminator format (DOS/UNIX/CR/None).\nHadoopFileOutput.Injection.COMPRESSION=This option will let you specify the type of compression to use on the file output.\nHadoopFileOutput.Injection.ENCODING=This option will let you specify the text file encoding. 
\nHadoopFileOutput.Injection.RIGHT_PAD_FIELDS=This option will pad fields to their specified length.\nHadoopFileOutput.Injection.FAST_DATA_DUMP=This option lets you improve the performance by not including any formatting information.\nHadoopFileOutput.Injection.SPLIT_EVERY=Split the data every (x) rows into additional output files.\nHadoopFileOutput.Injection.ADD_ENDING_LINE=This option lets you specify an ending line to the output file.\n\n#Fields Tab\nHadoopFileOutput.Injection.OUTPUT_FIELDS=The fields to include in the output.\nHadoopFileOutput.Injection.OUTPUT_FIELDNAME=The name of the field.\nHadoopFileOutput.Injection.OUTPUT_TYPE=This option will let you specify the type of field (string, date, number).\nHadoopFileOutput.Injection.OUTPUT_FORMAT=The format mask to convert with.\nHadoopFileOutput.Injection.OUTPUT_LENGTH=This option indicates the length of the field.\nHadoopFileOutput.Injection.OUTPUT_PRECISION=This option indicates the amount of precision to use on numeric values.\nHadoopFileOutput.Injection.OUTPUT_CURRENCY=The currency symbol that is used.\nHadoopFileOutput.Injection.OUTPUT_DECIMAL=The decimal symbol that is used.\nHadoopFileOutput.Injection.OUTPUT_GROUP=The grouping symbol that is used.\nHadoopFileOutput.Injection.OUTPUT_TRIM=The trimming method to apply on the string (none, left, both, right).\nHadoopFileOutput.Injection.OUTPUT_NULL=The string to insert into the text file if the value of the field is null.\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/resources/org/pentaho/big/data/kettle/plugins/hdfs/trans/messages/messages_ko_KR.properties",
    "content": "\nHadoopFileOutput.MethodNotSupportedException.Message=\\uBA54\\uC18C\\uB4DC\\uB97C \\uC9C0\\uC6D0\\uD558\\uC9C0 \\uC54A\\uC2B5\\uB2C8\\uB2E4\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/resources/org/pentaho/big/data/kettle/plugins/hdfs/vfs/messages/messages_en_US.properties",
    "content": "HadoopVfsFileChooserDialog.openFile=Open File\nHadoopVfsFileChooserDialog.SaveAs=Save as\nHadoopVfsFileChooserDialog.FileSystemChoice.Label=Look in\nHadoopVfsFileChooserDialog.FileSystemChoice.Hadoop.Label=Hadoop\nHadoopVfsFileChooserDialog.FileSystemChoice.Local.Label=Local\nHadoopVfsFileChooserDialog.ConnectionGroup.Label=Connection\nHadoopVfsFileChooserDialog.URL.Label=Server:\nHadoopVfsFileChooserDialog.Port.Label=Port:\nHadoopVfsFileChooserDialog.UserID.Label=User ID:\nHadoopVfsFileChooserDialog.Password.Label=Password:\nHadoopVfsFileChooserDialog.ConnectionButton.Label=Connect\nHadoopVfsFileChooserDialog.warning=Warning\nHadoopVfsFileChooserDialog.noWriteSupport=This file system does not support write operations.\nHadoopVfsFileChooserDialog.error=Error\nHadoopVfsFileChooserDialog.FileSystem.error=A file system error occurred.  See log for details.\nHadoopVfsFileChooserDialog.Connection.Error.title=Unable to Connect\nHadoopVfsFileChooserDialog.Connection.error=You don''t seem to be getting a connection to the Hadoop Cluster.  Check the cluster configuration you''re using.\nHadoopVfsFileChooserDialog.Connection.schemeError=The file system scheme is not supported by the {0} Hadoop configuration."
  },
  {
    "path": "kettle-plugins/hdfs/core/src/main/resources/org/pentaho/big/data/kettle/plugins/hdfs/vfs/messages/messages_ko_KR.properties",
    "content": "\nHadoopVfsFileChooserDialog.Connection.error=HDFS \\uC11C\\uBC84\\uC5D0 \\uC5F0\\uACB0\\uD560 \\uC218 \\uC5C6\\uC2B5\\uB2C8\\uB2E4.\nHadoopVfsFileChooserDialog.ConnectionButton.Label=\\uC5F0\\uACB0\nHadoopVfsFileChooserDialog.ConnectionGroup.Label=\\uC5F0\\uACB0\nHadoopVfsFileChooserDialog.FileSystem.error=\\uD30C\\uC77C \\uC2DC\\uC2A4\\uD15C \\uC624\\uB958\\uAC00 \\uBC1C\\uC0DD\\uD558\\uC600\\uC2B5\\uB2C8\\uB2E4. \\uC790\\uC138\\uD55C \\uB0B4\\uC6A9\\uC740 \\uB85C\\uADF8\\uB97C \\uCC38\\uACE0\\uD558\\uC2ED\\uC2DC\\uC624.\nHadoopVfsFileChooserDialog.FileSystemChoice.Local.Label=\\uB85C\\uCEEC\nHadoopVfsFileChooserDialog.Password.Label  =\\uC554\\uD638:\nHadoopVfsFileChooserDialog.Port.Label      =\\uD3EC\\uD2B8:\nHadoopVfsFileChooserDialog.SaveAs          =\\uB2E4\\uB978 \\uC774\\uB984\\uC73C\\uB85C \\uC800\\uC7A5\nHadoopVfsFileChooserDialog.URL.Label       =\\uC11C\\uBC84:\nHadoopVfsFileChooserDialog.UserID.Label    =\\uC0AC\\uC6A9\\uC790 ID:\nHadoopVfsFileChooserDialog.error           =\\uC624\\uB958\nHadoopVfsFileChooserDialog.noWriteSupport  =\\uD30C\\uC77C \\uC2DC\\uC2A4\\uD15C\\uC774 \\uC4F0\\uAE30 \\uC5F0\\uC0B0\\uC744 \\uC9C0\\uC6D0\\uD558\\uC9C0 \\uC54A\\uC2B5\\uB2C8\\uB2E4.\nHadoopVfsFileChooserDialog.openFile        =\\uD30C\\uC77C \\uC5F4\\uAE30\nHadoopVfsFileChooserDialog.warning         =\\uACBD\\uACE0\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/test/java/org/pentaho/big/data/kettle/plugins/hdfs/job/JobEntryHadoopCopyFilesLoadSaveTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.job;\n\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.big.data.impl.cluster.NamedClusterImpl;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.job.entry.loadSave.LoadSaveTester;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\npublic class JobEntryHadoopCopyFilesLoadSaveTest {\n\n  private NamedClusterService namedClusterService;\n  private RuntimeTestActionService runtimeTestActionService;\n  private RuntimeTester runtimeTester;\n\n  @Before\n  public void setup() throws ClusterInitializationException {\n    namedClusterService = mock( NamedClusterService.class );\n    when( namedClusterService.getClusterTemplate() ).thenReturn( new NamedClusterImpl() );\n    mock( NamedClusterServiceLocator.class );\n    runtimeTester = mock( RuntimeTester.class );\n    runtimeTestActionService = mock( RuntimeTestActionService.class );\n    new JobEntryHadoopCopyFiles( namedClusterService, runtimeTestActionService, runtimeTester );\n  }\n\n  @Test\n  public void testLoadSave() throws KettleException {\n    
List<String> commonAttributes = Arrays.asList( \"copy_empty_folders\", \"arg_from_previous\", \"overwrite_files\",\n      \"include_subfolders\", \"remove_source_files\", \"add_result_filesname\", \"destination_is_a_file\",\n      \"create_destination_folder\" );\n\n    Map<String, String> getterMap = new HashMap<String, String>();\n    getterMap.put( \"copy_empty_folders\", \"isCopyEmptyFolders\" );\n    getterMap.put( \"arg_from_previous\", \"isArgFromPrevious\" );\n    getterMap.put( \"overwrite_files\", \"isoverwrite_files\" );\n    getterMap.put( \"include_subfolders\", \"isIncludeSubfolders\" );\n    getterMap.put( \"remove_source_files\", \"isRemoveSourceFiles\" );\n    getterMap.put( \"add_result_filesname\", \"isAddresultfilesname\" );\n    getterMap.put( \"destination_is_a_file\", \"isDestinationIsAFile\" );\n    getterMap.put( \"create_destination_folder\", \"isCreateDestinationFolder\" );\n\n    Map<String, String> setterMap = new HashMap<String, String>();\n    setterMap.put( \"copy_empty_folders\", \"setCopyEmptyFolders\" );\n    setterMap.put( \"arg_from_previous\", \"setArgFromPrevious\" );\n    setterMap.put( \"overwrite_files\", \"setoverwrite_files\" );\n    setterMap.put( \"include_subfolders\", \"setIncludeSubfolders\" );\n    setterMap.put( \"remove_source_files\", \"setRemoveSourceFiles\" );\n    setterMap.put( \"add_result_filesname\", \"setAddresultfilesname\" );\n    setterMap.put( \"destination_is_a_file\", \"setDestinationIsAFile\" );\n    setterMap.put( \"create_destination_folder\", \"setCreateDestinationFolder\" );\n\n    LoadSaveTester<JobEntryHadoopCopyFiles> tester =\n      new LoadSaveTester<JobEntryHadoopCopyFiles>( JobEntryHadoopCopyFiles.class, commonAttributes,\n        getterMap, setterMap ) {\n        @Override public JobEntryHadoopCopyFiles createMeta() {\n          return new JobEntryHadoopCopyFiles( namedClusterService, runtimeTestActionService, runtimeTester );\n        }\n      };\n\n    tester.testSerialization();\n  
}\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/test/java/org/pentaho/big/data/kettle/plugins/hdfs/job/JobEntryHadoopCopyFilesTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.job;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.di.job.entries.copyfiles.JobEntryCopyFiles;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.di.core.hadoop.HadoopSpoonPlugin;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\nimport java.util.Map;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertNotNull;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.verifyNoMoreInteractions;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 11/23/15.\n */\npublic class JobEntryHadoopCopyFilesTest {\n  private JobEntryHadoopCopyFiles jobEntryHadoopCopyFiles;\n  private String testName;\n  private NamedClusterService namedClusterManager;\n  private String testUrl;\n  private String testNcName;\n  private IMetaStore metaStore;\n  private Map mappings;\n  private NamedCluster namedCluster;\n\n  @Before\n  public void setup() {\n    testName = \"testName\";\n    namedClusterManager = mock( NamedClusterService.class );\n    jobEntryHadoopCopyFiles =\n        new JobEntryHadoopCopyFiles( namedClusterManager, mock( RuntimeTestActionService.class ), mock(\n            RuntimeTester.class ) );\n    
jobEntryHadoopCopyFiles.setName( testName );\n    testUrl = \"testUrl\";\n    testNcName = \"testNcName\";\n    metaStore = mock( IMetaStore.class );\n    mappings = mock( Map.class );\n    namedCluster = mock( NamedCluster.class );\n  }\n\n  @Test\n  public void testLoadUrlNullNcName() {\n    when( namedClusterManager.getNamedClusterByName( testNcName, metaStore ) ).thenReturn( null );\n    String loadURL = jobEntryHadoopCopyFiles.loadURL( testUrl, null, metaStore, mappings );\n    assertNotNull( loadURL );\n    verifyNoMoreInteractions( mappings );\n  }\n\n  @Test\n  public void testLoadUrlNull() {\n    when( namedClusterManager.getNamedClusterByName( testNcName, metaStore ) ).thenReturn( null );\n    String loadURL = jobEntryHadoopCopyFiles.loadURL( null, null, metaStore, mappings );\n    assertNull( loadURL );\n    verifyNoMoreInteractions( mappings );\n  }\n\n  @Test\n  public void testLoadUrlNotNullForNotCluster() {\n    testNcName = \"LOCAL-SOURCE-FILE-1\";\n    when( namedClusterManager.getNamedClusterByName( testNcName, metaStore ) ).thenReturn( null );\n    String loadURL = jobEntryHadoopCopyFiles.loadURL( testUrl, testNcName, metaStore, mappings );\n    assertNotNull( loadURL );\n    assertEquals( testUrl, loadURL );\n    verify( mappings ).put( testUrl, testNcName );\n  }\n\n  @Test\n  public void testLoadUrlMapRNull() {\n    when( namedClusterManager.getNamedClusterByName( testNcName, metaStore ) ).thenReturn( namedCluster );\n    when( namedCluster.isMapr() ).thenReturn( true );\n    assertNull( jobEntryHadoopCopyFiles.loadURL( testUrl, testNcName, metaStore, mappings ) );\n    verifyNoMoreInteractions( mappings );\n  }\n\n  @Test\n  public void testLoadUrlMapRNotNullNoPrefix() {\n    when( namedClusterManager.getNamedClusterByName( testNcName, metaStore ) ).thenReturn( namedCluster );\n    when( namedCluster.isMapr() ).thenReturn( true );\n    String testNewUrl = \"testNewUrl\";\n    when( namedCluster.processURLsubstitution( testUrl, metaStore, 
jobEntryHadoopCopyFiles.getVariables() ) )\n        .thenReturn( testNewUrl );\n    assertEquals( testNewUrl, jobEntryHadoopCopyFiles.loadURL( testUrl, testNcName, metaStore, mappings ) );\n    verify( mappings ).put( testNewUrl, testNcName );\n    assertEquals( testUrl, jobEntryHadoopCopyFiles.fileFolderUrlMappings.get( testNewUrl ) );\n  }\n\n  @Test\n  public void testLoadUrlMapRNotNullPrefix() {\n    when( namedClusterManager.getNamedClusterByName( testNcName, metaStore ) ).thenReturn( namedCluster );\n    when( namedCluster.isMapr() ).thenReturn( true );\n    String testNewUrl = HadoopSpoonPlugin.MAPRFS_SCHEME + \"://\" + \"testNewUrl\";\n    when( namedCluster.processURLsubstitution( testUrl, metaStore, jobEntryHadoopCopyFiles.getVariables() ) )\n        .thenReturn( testNewUrl );\n    assertEquals( testNewUrl, jobEntryHadoopCopyFiles.loadURL( testUrl, testNcName, metaStore, mappings ) );\n    verify( mappings ).put( testNewUrl, testNcName );\n    assertEquals( testUrl, jobEntryHadoopCopyFiles.fileFolderUrlMappings.get( testNewUrl ) );\n  }\n\n  @Test\n  public void testLoadUrlNotMapR() {\n    when( namedClusterManager.getNamedClusterByName( testNcName, metaStore ) ).thenReturn( namedCluster );\n    when( namedCluster.isMapr() ).thenReturn( false );\n    String testNewUrl = HadoopSpoonPlugin.HDFS_SCHEME + \"://\" + \"testNewUrl\";\n    when( namedCluster.processURLsubstitution( testUrl, metaStore, jobEntryHadoopCopyFiles.getVariables() ) )\n        .thenReturn( testNewUrl );\n    assertEquals( testNewUrl, jobEntryHadoopCopyFiles.loadURL( testUrl, testNcName, metaStore, mappings ) );\n    verify( mappings ).put( testNewUrl, testNcName );\n    assertEquals( testUrl, jobEntryHadoopCopyFiles.fileFolderUrlMappings.get( testNewUrl ) );\n  }\n\n  @Test\n  public void testLoadUrlHdfsEMPTY_SOURCE_URL() {\n    when( namedClusterManager.getNamedClusterByName( testNcName, metaStore ) ).thenReturn( namedCluster );\n    when( namedCluster.isMapr() ).thenReturn( false );\n  
  String testNewUrl = HadoopSpoonPlugin.HDFS_SCHEME + \"://\" + \"testNewUrl\";\n    when( namedCluster.processURLsubstitution( testUrl, metaStore, jobEntryHadoopCopyFiles.getVariables() ) )\n      .thenReturn( testNewUrl );\n    String prefixUrlSource = JobEntryCopyFiles.SOURCE_URL + 8 + \"-\";\n    String testPrefixSourceUrl = prefixUrlSource + testUrl;\n    String expectedPrefixSourceLoadUrl = prefixUrlSource + testNewUrl;\n    assertEquals( expectedPrefixSourceLoadUrl, jobEntryHadoopCopyFiles.loadURL( testPrefixSourceUrl, testNcName, metaStore, mappings ) );\n    verify( mappings ).put( expectedPrefixSourceLoadUrl, testNcName );\n    assertEquals( testPrefixSourceUrl, jobEntryHadoopCopyFiles.fileFolderUrlMappings.get( expectedPrefixSourceLoadUrl ) );\n  }\n\n  @Test\n  public void testLoadUrlHdfsEMPTY_DEST_URL() {\n    when( namedClusterManager.getNamedClusterByName( testNcName, metaStore ) ).thenReturn( namedCluster );\n    when( namedCluster.isMapr() ).thenReturn( false );\n    String testNewUrl = HadoopSpoonPlugin.HDFS_SCHEME + \"://\" + \"testNewUrl\";\n    when( namedCluster.processURLsubstitution( testUrl, metaStore, jobEntryHadoopCopyFiles.getVariables() ) )\n      .thenReturn( testNewUrl );\n    String prefixUrlDest = JobEntryCopyFiles.DEST_URL + 5 + \"-\";\n    String testPrefixDestUrl = prefixUrlDest + testUrl;\n    String expectedPrefixDestLoadUrl = prefixUrlDest + testNewUrl;\n    assertEquals( expectedPrefixDestLoadUrl, jobEntryHadoopCopyFiles.loadURL( testPrefixDestUrl, testNcName, metaStore, mappings ) );\n    verify( mappings ).put( expectedPrefixDestLoadUrl, testNcName );\n    assertEquals( testPrefixDestUrl, jobEntryHadoopCopyFiles.fileFolderUrlMappings.get( expectedPrefixDestLoadUrl ) );\n  }\n\n  @Test\n  public void testSaveUrlMappingsKeyMisses() {\n    String testUrl = \"/src/path/\";\n    jobEntryHadoopCopyFiles.fileFolderUrlMappings.clear();\n    // populating with other values\n    jobEntryHadoopCopyFiles.fileFolderUrlMappings.put( 
\"KeyA\", \"ValueA\" );\n    jobEntryHadoopCopyFiles.fileFolderUrlMappings.put( \"KeyB\", \"ValueB\" );\n    jobEntryHadoopCopyFiles.fileFolderUrlMappings.put( \"/src\", \"ValueC\" );\n    jobEntryHadoopCopyFiles.fileFolderUrlMappings.put( \"/src/path/anotherPath\", \"ValueD\" );\n    assertEquals( testUrl, jobEntryHadoopCopyFiles.saveURL( testUrl, testNcName, metaStore, mappings ) );\n\n    assertNull( testUrl, jobEntryHadoopCopyFiles.saveURL( null, testNcName, metaStore, mappings ) );\n  }\n\n  @Test\n  public void testSaveUrlMappingsKeyHits() {\n    String testUrl = \"/src/path/\";\n    String testUrlSubstituted = \"hdfs://someHostname/src/path\";\n    // populating with other values\n    jobEntryHadoopCopyFiles.fileFolderUrlMappings.put( \"KeyA\", \"ValueA\" );\n    jobEntryHadoopCopyFiles.fileFolderUrlMappings.put( \"KeyB\", \"ValueB\" );\n    jobEntryHadoopCopyFiles.fileFolderUrlMappings.put( \"/src\", \"ValueC\" );\n    jobEntryHadoopCopyFiles.fileFolderUrlMappings.put( \"/src/path/anotherPath\", \"ValueD\" );\n    jobEntryHadoopCopyFiles.fileFolderUrlMappings.put( testUrlSubstituted, testUrl );\n    assertEquals( testUrl, jobEntryHadoopCopyFiles.saveURL( testUrl, testNcName, metaStore, mappings ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/test/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/HadoopFileInputDialogTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans;\n\nimport org.junit.Test;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\npublic class HadoopFileInputDialogTest {\n\n  @Test\n  public void getFriendlyURIsUnsecure() {\n    HadoopFileInputDialog dialog = mock( HadoopFileInputDialog.class );\n    when( dialog.getFriendlyURIs( any() ) ).thenCallRealMethod();\n\n    String[] files = new String[] {\n      \"hdfs://clouderaserver01.pentaho.com:8020/wordcount/parse/weblogsA.txt\",\n      \"hdfs://clouderaserver02.pentaho.com:8020/wordcount/parse/weblogsB.txt\",\n      \"hdfs://clouderaserver03.pentaho.com:8020/wordcount/parse/weblogsC.txt\"\n    };\n\n    String[] friendly = dialog.getFriendlyURIs( files );\n\n    assertEquals( files[0], friendly[0] );\n    assertEquals( files[1], friendly[1] );\n    assertEquals( files[2], friendly[2] );\n  }\n\n  @Test\n  public void getFriendlyURIsSecure() {\n    HadoopFileInputDialog dialog = mock( HadoopFileInputDialog.class );\n    when( dialog.getFriendlyURIs( any() ) ).thenCallRealMethod();\n\n    String[] files = new String[] {\n      \"hdfs://user01:pwd01@clouderaserver01.pentaho.com:8020/wordcount/parse/weblogsA.txt\",\n      \"hdfs://user02@clouderaserver02.pentaho.com:8020/wordcount/parse/weblogsB.txt\",\n      \"hdfs://user03:pwd03@clouderaserver03.pentaho.com:8020/wordcount/parse/weblogsC.txt\"\n    };\n\n    String[] friendly = dialog.getFriendlyURIs( files );\n\n    
assertEquals( \"hdfs://user01:***@clouderaserver01.pentaho.com:8020/wordcount/parse/weblogsA.txt\", friendly[0] );\n    assertEquals( files[1], friendly[1] );\n    assertEquals( \"hdfs://user03:***@clouderaserver03.pentaho.com:8020/wordcount/parse/weblogsC.txt\", friendly[2] );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/test/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/HadoopFileInputMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans;\n\nimport org.apache.commons.vfs2.provider.URLFileName;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport org.jdom.Document;\nimport org.jdom.Element;\nimport org.jdom.input.SAXBuilder;\nimport org.junit.Assert;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.Mockito;\nimport org.pentaho.di.core.KettleEnvironment;\nimport org.pentaho.di.core.fileinput.FileInputList;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.steps.file.BaseFileField;\nimport org.pentaho.di.trans.steps.fileinput.text.TextFileFilter;\nimport org.pentaho.di.trans.steps.named.cluster.NamedClusterEmbedManager;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.w3c.dom.Node;\n\nimport java.net.URI;\nimport java.net.URL;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.Locale;\n\nimport static org.junit.Assert.assertEquals;\nimport static 
org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.Mockito.anyInt;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.ArgumentMatchers.nullable;\nimport static org.mockito.Mockito.doCallRealMethod;\nimport static org.mockito.Mockito.doReturn;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.times;\n\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n/**\n * @author Vasilina Terehova\n */\npublic class HadoopFileInputMetaTest {\n\n  public static final String TEST_CLUSTER_NAME = \"TEST-CLUSTER-NAME\";\n  public static final String SAMPLE_HADOOP_FILE_INPUT_STEP = \"sample-hadoop-file-input-step.xml\";\n  public static final String TEST_FILE_NAME = \"test-file-name\";\n  public static final String TEST_FOLDER_NAME = \"test-folder-name\";\n  private static Logger logger = LogManager.getLogger( HadoopFileInputMetaTest.class );\n  // for message resolution\n  private NamedClusterService namedClusterService;\n  private HadoopFileSystemLocator hadoopFileSystemLocator;\n\n  @Before\n  public void setUp() throws Exception {\n    namedClusterService = mock( NamedClusterService.class );\n    hadoopFileSystemLocator = mock( HadoopFileSystemLocator.class );\n  }\n\n  /**\n   * BACKLOG-7972 - Hadoop File Output: the Hadoop Clusters dropdown didn't preserve the selected cluster after reopening a\n   * transformation; after the signature of loadSource changed, saveSource in HadoopFileOutputMeta wasn't called.\n   *\n   * @throws Exception\n   */\n  @Test\n  public void testSaveSourceCalledFromGetXml() throws Exception {\n    HadoopFileInputMeta hadoopFileInputMeta = new HadoopFileInputMeta( namedClusterService, hadoopFileSystemLocator );\n    hadoopFileInputMeta.allocateFiles( 1 );\n    //create spy to check whether saveSource now is called\n    
HadoopFileInputMeta spy = initHadoopMetaInput( hadoopFileInputMeta );\n    HashMap<String, String> mappings = new HashMap<>();\n    mappings.put( TEST_FILE_NAME, HadoopFileOutputMetaTest.TEST_CLUSTER_NAME );\n    spy.setNamedClusterURLMapping( mappings );\n    StepMeta parentStepMeta = mock( StepMeta.class );\n    TransMeta parentTransMeta = mock( TransMeta.class );\n    when( parentStepMeta.getParentTransMeta() ).thenReturn( parentTransMeta );\n    NamedClusterEmbedManager embedManager = mock( NamedClusterEmbedManager.class );\n    when( parentTransMeta.getNamedClusterEmbedManager() ).thenReturn( embedManager );\n    spy.setParentStepMeta( parentStepMeta );\n    String xml = spy.getXML();\n    Document hadoopOutputMetaStep = HadoopFileOutputMetaTest.getDocumentFromString( xml, new SAXBuilder() );\n    Element fileElement = HadoopFileOutputMetaTest.getChildElementByTagName( hadoopOutputMetaStep.getRootElement(), \"file\" );\n    //getting from file node cluster attribute value\n    Element clusterNameElement =\n      HadoopFileOutputMetaTest.getChildElementByTagName( fileElement, HadoopFileInputMeta.SOURCE_CONFIGURATION_NAME );\n    assertEquals( TEST_CLUSTER_NAME, clusterNameElement.getValue() );\n    //check that saveSource is called from TextFileOutputMeta\n    verify( spy, times( 1 ) ).saveSource( any( StringBuilder.class ), any( String.class ) );\n    verify( embedManager ).registerUrl( \"test-file-name\" );\n  }\n\n  private HadoopFileInputMeta initHadoopMetaInput( HadoopFileInputMeta hadoopFileInputMeta ) {\n    HadoopFileInputMeta spy = Mockito.spy( hadoopFileInputMeta );\n    when( spy.getFileName() ).thenReturn( new String[] {} );\n    spy.setFileName( new String[] { TEST_FILE_NAME } );\n    spy.setFilter( new TextFileFilter[] {} );\n    spy.inputFields = new BaseFileField[] {};\n    spy.inputFiles.fileMask = new String[] { TEST_FILE_NAME };\n    spy.inputFiles.fileRequired = new String[] { TEST_FILE_NAME };\n    spy.inputFiles.includeSubFolders = new 
String[] { TEST_FOLDER_NAME };\n    spy.content.dateFormatLocale = Locale.getDefault();\n    return spy;\n  }\n\n  public Node loadNodeFromXml( String fileName ) throws Exception {\n    URL resource = getClass().getClassLoader().getResource( fileName );\n    if ( resource == null ) {\n      logger.error( \"no file \" + fileName + \" found in resources\" );\n      throw new IllegalArgumentException( \"no file \" + fileName + \" found in resources\" );\n    } else {\n      return XMLHandler.getSubNode( XMLHandler.loadXMLFile( resource ), \"entry\" );\n    }\n  }\n\n  @Test\n  public void testLoadSourceCalledFromLoadXml() throws Exception {\n    HadoopFileInputMeta hadoopFileInputMeta = new HadoopFileInputMeta( namedClusterService, hadoopFileSystemLocator );\n    //set required data for step - empty\n    HadoopFileInputMeta spy = Mockito.spy( hadoopFileInputMeta );\n    Node node = loadNodeFromXml( SAMPLE_HADOOP_FILE_INPUT_STEP );\n    //create spy to check whether saveSource now is called\n    IMetaStore metaStore = mock( IMetaStore.class );\n    spy.loadXML( node, Collections.emptyList(), metaStore );\n    assertEquals( TEST_CLUSTER_NAME, hadoopFileInputMeta.getNamedClusterURLMapping().get( TEST_FILE_NAME ) );\n    verify( spy, times( 1 ) ).loadSource( any( Node.class ), any( Node.class ), anyInt(), any( IMetaStore.class ) );\n  }\n\n  @Test\n  public void testLoadSourceRepForUrlRefresh() throws Exception {\n    final String URL_FROM_CLUSTER = \"urlFromCluster\";\n    IMetaStore mockMetaStore = mock( IMetaStore.class );\n    NamedCluster mockNamedCluster = mock( NamedCluster.class );\n    when( mockNamedCluster.processURLsubstitution( any(), eq( mockMetaStore ), any() ) ).thenReturn( URL_FROM_CLUSTER );\n    when( namedClusterService.getNamedClusterByName( TEST_CLUSTER_NAME, mockMetaStore ) ).thenReturn( mockNamedCluster );\n    Repository mockRep = mock( Repository.class );\n    when( mockRep.getJobEntryAttributeString( any(), eq( 0 ), eq( 
\"source_configuration_name\" ) ) ).thenReturn( TEST_CLUSTER_NAME );\n    HadoopFileInputMeta hadoopFileInputMeta =  new HadoopFileInputMeta( namedClusterService, hadoopFileSystemLocator );\n    when( mockRep.getStepAttributeString( any(), eq( 0 ), eq( \"file_name\" ) ) ).thenReturn( URL_FROM_CLUSTER );\n\n    assertEquals( URL_FROM_CLUSTER, hadoopFileInputMeta.loadSourceRep( mockRep, null, 0, mockMetaStore ) );\n  }\n\n  @Test\n  public void testGetFileInputList() {\n    KettleLogStore.init();\n    final String URL_FROM_CLUSTER = \"urlFromCluster\";\n    StepMeta parentStepMeta = mock( StepMeta.class );\n    IMetaStore mockMetaStore = mock( IMetaStore.class );\n    NamedCluster mockNamedCluster = mock( NamedCluster.class );\n    TransMeta parentTransMeta = mock( TransMeta.class );\n    when( parentStepMeta.getParentTransMeta() ).thenReturn( parentTransMeta );\n    when( parentTransMeta.getMetaStore() ).thenReturn( mockMetaStore );\n    when( mockNamedCluster.processURLsubstitution( any(), eq( mockMetaStore ), any() ) ).thenReturn( URL_FROM_CLUSTER );\n    when( namedClusterService.getNamedClusterByName( TEST_CLUSTER_NAME, mockMetaStore ) ).thenReturn( mockNamedCluster );\n    HadoopFileInputMeta hadoopFileInputMetaSpy =  initHadoopMetaInput( new HadoopFileInputMeta( namedClusterService, hadoopFileSystemLocator ) );\n    hadoopFileInputMetaSpy.environment = new String[] { TEST_CLUSTER_NAME };\n    hadoopFileInputMetaSpy.setParentStepMeta( parentStepMeta );\n    doReturn( new FileInputList() ).when( hadoopFileInputMetaSpy ).createFileList( any( VariableSpace.class ) );\n    hadoopFileInputMetaSpy.getFileInputList( new Variables() );\n\n    assertEquals( \"urlFromCluster\", hadoopFileInputMetaSpy.inputFiles.fileName[0] );\n  }\n\n  @Test\n  public void testGetUrl() {\n    final HadoopFileInputMeta meta = Mockito.mock( HadoopFileInputMeta.class );\n    final URLFileName mockFileName = Mockito.mock( URLFileName.class );\n    final String scheme = \"hdfs\";\n    final 
String hostName = \"svqxbdcn6cdh512n1.pentahoqa.com\";\n    final String rootUrl = scheme + \"://\" + hostName + \":8020/\";\n    final String path = \"wordcount/input\";\n    final String url = rootUrl + path;\n\n    Mockito.doReturn( hostName ).when( mockFileName ).getHostName();\n    Mockito.doReturn( scheme ).when( mockFileName ).getScheme();\n\n    Mockito.doReturn( mockFileName ).when( meta ).getUrlFileName( url );\n    Mockito.doReturn( rootUrl ).when( mockFileName ).getRootURI();\n    Mockito.doCallRealMethod().when( meta ).getUrlHostName( url );\n    Mockito.doCallRealMethod().when( meta ).getUrlPath( url );\n\n    Assert.assertEquals( hostName, meta.getUrlHostName( url ) );\n    Assert.assertEquals( \"/\" + path, meta.getUrlPath( url ) );\n  }\n\n  @Test\n  public void testEncryption() throws Exception {\n    KettleEnvironment.init();\n    HadoopFileInputMeta meta = new HadoopFileInputMeta();\n    String url = \"hdfs://user:password@myhost:8020/myfile\";\n    String encrypted = meta.encryptDecryptPassword( url, HadoopFileInputMeta.EncryptDirection.ENCRYPT );\n    assertTrue( !encrypted.contains( \"password\" ) );\n    assertEquals( url, meta.encryptDecryptPassword( encrypted, HadoopFileInputMeta.EncryptDirection.DECRYPT ) );\n  }\n\n  @Test\n  public void testNoPassword() throws Exception {\n    KettleEnvironment.init();\n    HadoopFileInputMeta meta = new HadoopFileInputMeta();\n    String url = \"hdfs://user@myhost:8020/myfile\";\n    String encrypted = meta.encryptDecryptPassword( url, HadoopFileInputMeta.EncryptDirection.ENCRYPT );\n    assertEquals( url, meta.encryptDecryptPassword( encrypted, HadoopFileInputMeta.EncryptDirection.DECRYPT ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/test/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/HadoopFileOutputDialogTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans;\n\nimport org.apache.commons.vfs2.VFS;\nimport org.apache.commons.vfs2.impl.StandardFileSystemManager;\nimport org.apache.commons.vfs2.provider.UriParser;\nimport org.eclipse.swt.custom.CCombo;\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.MockedStatic;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.mockito.stubbing.Answer;\nimport org.pentaho.di.core.Const;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.doCallRealMethod;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.any;\nimport static org.mockito.internal.verification.VerificationModeFactory.times;\nimport static org.mockito.ArgumentMatchers.eq;\n\n/**\n * Created by bryan on 11/23/15.\n */\n@RunWith( MockitoJUnitRunner.class )\npublic class HadoopFileOutputDialogTest {\n\n  private static final String HDFS_PREFIX = \"hdfs\";\n  private static final String MY_HOST_URL = \"//myhost:8020\";\n  private StandardFileSystemManager fsm;\n  private MockedStatic<UriParser> uriParserMockedStatic;\n  private MockedStatic<VFS> vfsMockedStatic;\n\n  @Before\n  public void setUp() throws Exception {\n    uriParserMockedStatic = Mockito.mockStatic( UriParser.class );\n    vfsMockedStatic = Mockito.mockStatic( VFS.class );\n    fsm = mock( StandardFileSystemManager.class );\n    vfsMockedStatic.when( VFS::getManager ).thenReturn( fsm 
);\n  }\n\n  @After\n  public void cleanup() {\n    vfsMockedStatic.close();\n    uriParserMockedStatic.close();\n    Mockito.validateMockitoUsage();\n  }\n\n  @Test\n  public void testGetUrlPathHdfsPrefix() {\n    String prefix = HDFS_PREFIX;\n    String pathBase = MY_HOST_URL;\n    String expected = \"/path/to/file\";\n    String fullPath = prefix + \":\" + pathBase + expected;\n\n    buildExtractSchemeMocks( prefix, fullPath, pathBase + expected );\n    assertEquals( expected, HadoopFileOutputDialog.getUrlPath( fullPath ) );\n  }\n\n  @Test\n  public void testGetUrlPathMapRPRefix() {\n    String prefix = \"maprfs\";\n    String pathBase = \"//\";\n    String expected = \"/path/to/file\";\n    String fullPath = prefix + \":\" + pathBase + expected;\n\n    buildExtractSchemeMocks( prefix, fullPath, pathBase + expected );\n    assertEquals( expected, HadoopFileOutputDialog.getUrlPath( fullPath ) );\n  }\n\n  @Test\n  public void testGetUrlPathSpecialPrefix() {\n    String prefix = \"mySpecialPrefix\";\n    String pathBase = \"//host\";\n    String expected = \"/path/to/file\";\n    String fullPath = prefix + \":\" + pathBase + expected;\n\n    buildExtractSchemeMocks( prefix, fullPath, pathBase + expected );\n    assertEquals( expected, HadoopFileOutputDialog.getUrlPath( fullPath ) );\n  }\n\n  @Test\n  public void testGetUrlPathNoPrefix() {\n    String expected = \"/path/to/file\";\n    assertEquals( expected, HadoopFileOutputDialog.getUrlPath( expected ) );\n  }\n\n  @Test\n  public void testGetUrlPathVariablePrefix() {\n    String expected = \"${myTestVar}\";\n    assertEquals( expected, HadoopFileOutputDialog.getUrlPath( expected ) );\n  }\n\n  @Test\n  public void testGetUrlPathRootPath() {\n    String prefix = HDFS_PREFIX;\n    String pathBase = MY_HOST_URL;\n    String expected = \"/\";\n    String fullPath = prefix + \":\" + pathBase + expected;\n    buildExtractSchemeMocks( prefix, fullPath, pathBase + expected );\n    assertEquals( expected, 
HadoopFileOutputDialog.getUrlPath( fullPath ) );\n  }\n\n  @Test\n  public void testGetUrlPathRootPathWithoutSlash() {\n    String prefix = HDFS_PREFIX;\n    String pathBase = MY_HOST_URL;\n    String expected = \"/\";\n    String fullPath = prefix + \":\" + pathBase;\n    buildExtractSchemeMocks( prefix, fullPath, pathBase );\n    assertEquals( expected, HadoopFileOutputDialog.getUrlPath( fullPath ) );\n  }\n\n  @Test\n  public void testFillWithSupportedDateFormats() {\n    HadoopFileOutputDialog dialog = mock( HadoopFileOutputDialog.class );\n    CCombo combo = mock( CCombo.class );\n\n    String[] dates = Const.getDateFormats();\n    assertEquals( 20, dates.length );\n\n    // currently there are 20 date formats, 10 of which contain ':' characters which are illegal in hadoop filenames\n    // if the formats returned change, the numbers on this test should be adjusted\n\n    doCallRealMethod().when( dialog ).fillWithSupportedDateFormats( any(), any() );\n    dialog.fillWithSupportedDateFormats( combo, dates );\n\n    verify( combo, times( 10 ) ).add( any() );\n  }\n\n  private Answer buildSchemeAnswer( String prefix, String buildPath ) {\n\n    return invocation -> {\n      Object[] args = invocation.getArguments();\n      ( (StringBuilder) args[2] ).append( buildPath );\n      return prefix;\n    };\n  }\n\n  private void buildExtractSchemeMocks( String prefix, String fullPath, String pathWithoutPrefix ) {\n    uriParserMockedStatic.when( () -> UriParser.extractScheme( any( String[].class ), eq( fullPath ) ) ).thenReturn( prefix );\n    uriParserMockedStatic.when( () -> UriParser.extractScheme( any( String[].class ), eq( fullPath ),\n      any( StringBuilder.class ) ) ).thenAnswer( buildSchemeAnswer( prefix, pathWithoutPrefix ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/test/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/HadoopFileOutputMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans;\n\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport org.jdom.Document;\nimport org.jdom.Element;\nimport org.jdom.JDOMException;\nimport org.jdom.filter.ElementFilter;\nimport org.jdom.input.SAXBuilder;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.Mockito;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.metastore.MetaStoreConst;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.steps.textfileoutput.TextFileField;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.w3c.dom.Node;\n\nimport java.io.ByteArrayInputStream;\nimport java.io.IOException;\nimport java.net.URL;\n\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.assertEquals;\n\nimport static org.mockito.ArgumentMatchers.isNull;\nimport static org.mockito.AdditionalMatchers.or;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.eq;\n\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.times;\n\n/**\n * Created by bryan on 11/23/15.\n */\npublic class HadoopFileOutputMetaTest {\n\n  public static final String 
TEST_CLUSTER_NAME = \"TEST-CLUSTER-NAME\";\n  public static final String SAMPLE_HADOOP_FILE_OUTPUT_STEP = \"sample-hadoop-file-output-step.xml\";\n  public static final String ENTRY_TAG_NAME = \"entry\";\n  public static final String EMBEDDED_XML = \"embed\";\n  public static final String NAMED_CLUSTER_TAG = \"NamedCluster\";\n  private static final Logger logger = LogManager.getLogger( HadoopFileOutputMetaTest.class );\n  // for message resolution\n  private NamedClusterService namedClusterService;\n  private RuntimeTestActionService runtimeTestActionService;\n  private RuntimeTester runtimeTester;\n\n\n  @Before\n  public void setUp() throws Exception {\n    namedClusterService = mock( NamedClusterService.class );\n    runtimeTestActionService = mock( RuntimeTestActionService.class );\n    runtimeTester = mock( RuntimeTester.class );\n    MetaStoreConst.disableMetaStore = false;\n  }\n\n  @Test\n  public void testProcessedUrl() {\n    String sourceConfigurationName = \"scName\";\n    String desiredUrl = \"desiredUrl\";\n    String url = \"url\";\n    HadoopFileOutputMeta hadoopFileOutputMeta = new HadoopFileOutputMeta( namedClusterService, runtimeTestActionService,\n      runtimeTester );\n    IMetaStore metaStore = mock( IMetaStore.class );\n    assertTrue( null == hadoopFileOutputMeta.getProcessedUrl( metaStore, null ) );\n    hadoopFileOutputMeta.setSourceConfigurationName( sourceConfigurationName );\n    NamedCluster nc = mock( NamedCluster.class );\n    when( namedClusterService.getNamedClusterByName( eq( sourceConfigurationName ), any()) )\n      .thenReturn( null );\n    assertEquals( url, hadoopFileOutputMeta.getProcessedUrl( metaStore, url ) );\n    when( namedClusterService.getNamedClusterByName( eq( sourceConfigurationName ), any()) )\n      .thenReturn( nc );\n    when( nc.processURLsubstitution( eq( url ), any(), any()) )\n      .thenReturn( desiredUrl );\n    assertEquals( desiredUrl, hadoopFileOutputMeta.getProcessedUrl( metaStore, url ) );\n  
}\n\n  @Test\n  public void testProcessedUrlUsingEmbeddedCluster() {\n    String desiredUrl = \"desiredUrl\";\n    String url = \"url\";\n    HadoopFileOutputMeta hadoopFileOutputMeta = new HadoopFileOutputMeta( namedClusterService, runtimeTestActionService,\n      runtimeTester );\n    NamedCluster nc = mock( NamedCluster.class );\n    NamedCluster nc2 = mock( NamedCluster.class );\n    MetaStoreConst.disableMetaStore = true;\n\n    when( namedClusterService.getClusterTemplate() ).thenReturn( nc );\n    when( nc.fromXmlForEmbed( any() ) ).thenReturn( nc2 );\n    when( nc2.processURLsubstitution( eq( url ), any(), any() ) ).thenReturn( desiredUrl );\n    assertEquals( desiredUrl, hadoopFileOutputMeta.getProcessedUrl( null, url ) );\n  }\n\n  /**\n   * BACKLOG-7972 - Hadoop File Output: the Hadoop Clusters dropdown didn't preserve the selected cluster after a\n   * transformation was reopened; after the signature of loadSource changed, saveSource in HadoopFileOutputMeta was\n   * no longer called\n   *\n   * @throws Exception\n   */\n  @Test\n  public void testSaveSourceCalledFromGetXml() throws Exception {\n    HadoopFileOutputMeta hadoopFileOutputMeta = new HadoopFileOutputMeta( namedClusterService, runtimeTestActionService,\n      runtimeTester );\n    hadoopFileOutputMeta.setSourceConfigurationName( TEST_CLUSTER_NAME );\n    //set required data for step - empty\n    hadoopFileOutputMeta.setOutputFields( new TextFileField[] {} );\n    //create spy to check whether saveSource is now called\n    HadoopFileOutputMeta spy = Mockito.spy( hadoopFileOutputMeta );\n    //getting from structure file node\n    Document hadoopOutputMetaStep = getDocumentFromString( spy.getXML(), new SAXBuilder() );\n    Element fileElement = getChildElementByTagName( hadoopOutputMetaStep.getRootElement(), \"file\" );\n    //getting from file node cluster attribute value\n    Element clusterNameElement = getChildElementByTagName( fileElement, HadoopFileInputMeta.SOURCE_CONFIGURATION_NAME );\n    assertEquals( 
TEST_CLUSTER_NAME, clusterNameElement.getValue() );\n    //check that saveSource is called from TextFileOutputMeta\n    verify( spy, times( 1 ) ).saveSource( any( StringBuilder.class ), or( any( String.class ), isNull() ) );\n  }\n\n  public Node getChildElementByTagName( String fileName ) throws Exception {\n    URL resource = getClass().getClassLoader().getResource( fileName );\n    if ( resource == null ) {\n      logger.error( \"no file \" + fileName + \" found in resources\" );\n      throw new IllegalArgumentException( \"no file \" + fileName + \" found in resources\" );\n    } else {\n      return XMLHandler.getSubNode( XMLHandler.loadXMLFile( resource ), \"entry\" );\n    }\n  }\n\n  public static Element getChildElementByTagName( Element element, String tagName ) {\n    return (Element) element.getContent( new ElementFilter( tagName ) ).get( 0 );\n  }\n\n  @Test\n  public void testLoadSourceCalledFromReadData() throws Exception {\n    HadoopFileOutputMeta hadoopFileOutputMeta = new HadoopFileOutputMeta( namedClusterService, runtimeTestActionService,\n      runtimeTester );\n    hadoopFileOutputMeta.setSourceConfigurationName( TEST_CLUSTER_NAME );\n    //set required data for step - empty\n    hadoopFileOutputMeta.setOutputFields( new TextFileField[] {} );\n    HadoopFileOutputMeta spy = Mockito.spy( hadoopFileOutputMeta );\n    Node node = getChildElementByTagName( SAMPLE_HADOOP_FILE_OUTPUT_STEP );\n    //create spy to check whether saveSource now is called from readData\n    spy.readData( node );\n    assertEquals( TEST_CLUSTER_NAME, hadoopFileOutputMeta.getSourceConfigurationName() );\n    verify( spy, times( 1 ) ).loadSource( any( Node.class ), or( any( IMetaStore.class ), isNull() ) );\n  }\n\n  @Test\n  public void testLoadSourceRepForUrlRefresh() throws Exception {\n    final String URL_FROM_CLUSTER = \"urlFromCluster\";\n    IMetaStore mockMetaStore = mock( IMetaStore.class );\n    NamedCluster mockNamedCluster = mock( NamedCluster.class );\n    
when( mockNamedCluster.processURLsubstitution( any(), eq( mockMetaStore ), any() ) ).thenReturn( URL_FROM_CLUSTER );\n    when( namedClusterService.getNamedClusterByName( TEST_CLUSTER_NAME, mockMetaStore ) )\n      .thenReturn( mockNamedCluster );\n    Repository mockRep = mock( Repository.class );\n    when( mockRep.getStepAttributeString( any(), eq( \"source_configuration_name\" ) ) ).thenReturn(\n      TEST_CLUSTER_NAME );\n    HadoopFileOutputMeta hadoopFileOutputMeta =\n      new HadoopFileOutputMeta( namedClusterService, runtimeTestActionService, runtimeTester );\n    hadoopFileOutputMeta.setSourceConfigurationName( TEST_CLUSTER_NAME );\n    when( mockRep.getStepAttributeString( any(), eq( \"file_name\" ) ) ).thenReturn( \"Bad Url In Repo\" );\n\n    assertEquals( URL_FROM_CLUSTER, hadoopFileOutputMeta.loadSourceRep( mockRep, null, mockMetaStore ) );\n  }\n\n  @Test\n  public void testSaveSourceCalledFromGetXmlWithEmbeddedCluster() throws Exception {\n    HadoopFileOutputMeta hadoopFileOutputMeta =\n      new HadoopFileOutputMeta( namedClusterService, runtimeTestActionService, runtimeTester );\n    hadoopFileOutputMeta.setSourceConfigurationName( TEST_CLUSTER_NAME );\n    // set required data for step - empty\n    hadoopFileOutputMeta.setOutputFields( new TextFileField[] {} );\n    // create spy to check whether saveSource now is called\n    HadoopFileOutputMeta spy = Mockito.spy( hadoopFileOutputMeta );\n    // getting from structure file node\n    NamedCluster mockNamedCluster = mock( NamedCluster.class );\n    when( namedClusterService.getNamedClusterByName( eq( TEST_CLUSTER_NAME ), any() ) ).thenReturn( mockNamedCluster );\n    when( mockNamedCluster.toXmlForEmbed( NAMED_CLUSTER_TAG ) ).thenReturn(\n      \"<\" + NAMED_CLUSTER_TAG + \">\" + EMBEDDED_XML + \"</\" + NAMED_CLUSTER_TAG + \">\" );\n\n    Document hadoopOutputMetaStep = getDocumentFromString( spy.getXML(), new SAXBuilder() );\n    Element clusterElement = getChildElementByTagName( 
hadoopOutputMetaStep.getRootElement(), NAMED_CLUSTER_TAG );\n    // getting from file node cluster attribute value\n    assertEquals( EMBEDDED_XML, clusterElement.getValue() );\n    // check that saveSource is called from TextFileOutputMeta\n    verify( spy, times( 1 ) ).saveSource( any( StringBuilder.class ), or( any( String.class ), isNull() ) );\n  }\n\n\n  public static Document getDocumentFromString( String xmlStep, SAXBuilder jdomBuilder )\n    throws JDOMException, IOException {\n    String xml = XMLHandler.openTag( ENTRY_TAG_NAME ) + xmlStep + XMLHandler.closeTag( ENTRY_TAG_NAME );\n    return jdomBuilder.build( new ByteArrayInputStream( xml.getBytes() ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/test/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/analyzer/HadoopBaseStepAnalyzerTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans.analyzer;\n\nimport org.junit.Before;\nimport org.junit.Ignore;\nimport org.junit.Test;\nimport org.mockito.Mock;\nimport org.pentaho.big.data.kettle.plugins.hdfs.trans.HadoopFileMeta;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.steps.file.BaseFileMeta;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.dictionary.DictionaryConst;\nimport org.pentaho.metaverse.api.IComponentDescriptor;\nimport org.pentaho.metaverse.api.IMetaverseBuilder;\nimport org.pentaho.metaverse.api.IMetaverseNode;\nimport org.pentaho.metaverse.api.INamespace;\nimport org.pentaho.metaverse.api.MetaverseComponentDescriptor;\nimport org.pentaho.metaverse.api.MetaverseObjectFactory;\nimport org.pentaho.metaverse.api.model.IExternalResourceInfo;\n\nimport java.util.Set;\n\nimport static org.junit.Assert.*;\nimport static org.mockito.Mockito.*;\n\npublic abstract class HadoopBaseStepAnalyzerTest<A extends HadoopBaseStepAnalyzer, M extends BaseFileMeta> {\n\n  protected A analyzer;\n\n  @Mock\n  private INamespace mockNamespace;\n  private IComponentDescriptor descriptor;\n  private M meta;\n  @Mock\n  private TransMeta transMeta;\n\n  @Before\n  public void setUp() throws Exception {\n    // commented out since testCreateResourceNode is now in ignore state, fix may be related to service\n//    when( mockNamespace.getParentNamespace() ).thenReturn( mockNamespace );\n    descriptor = new 
MetaverseComponentDescriptor( \"test\", DictionaryConst.NODE_TYPE_TRANS_STEP, mockNamespace );\n    analyzer = spy( getAnalyzer() );\n    analyzer.setDescriptor( descriptor );\n    IMetaverseBuilder builder = mock( IMetaverseBuilder.class );\n    analyzer.setMetaverseBuilder( builder );\n    analyzer.setObjectFactory( new MetaverseObjectFactory() );\n\n    meta = getMetaMock();\n    StepMeta mockStepMeta = mock( StepMeta.class );\n    lenient().when( meta.getParentStepMeta() ).thenReturn( mockStepMeta );\n\n    lenient().when( transMeta.getBowl() ).thenReturn( DefaultBowl.getInstance() );\n    lenient().when( mockStepMeta.getParentTransMeta() ).thenReturn( transMeta );\n  }\n\n  protected abstract A getAnalyzer();\n\n  protected abstract M getMetaMock();\n\n  @Test\n  public void testGetUsedFields() throws Exception {\n    assertNull( analyzer.getUsedFields( getMetaMock() ) );\n  }\n\n  @Test\n  public void testGetResourceInputNodeType() throws Exception {\n    assertEquals( DictionaryConst.NODE_TYPE_FILE_FIELD, analyzer.getResourceInputNodeType() );\n  }\n\n  @Test\n  public void testGetResourceOutputNodeType() throws Exception {\n    assertEquals( DictionaryConst.NODE_TYPE_FILE_FIELD, analyzer.getResourceOutputNodeType() );\n  }\n\n  @Test\n  public void testGetSupportedSteps() {\n    Set<Class<? 
extends BaseStepMeta>> types = analyzer.getSupportedSteps();\n    assertNotNull( types );\n    assertEquals( 1, types.size() );\n    assertTrue( types.contains( getMetaClass() ) );\n  }\n\n  protected abstract Class<M> getMetaClass();\n\n  @Ignore\n  @Test\n  public void testCreateResourceNode() throws Exception {\n    // local\n    IExternalResourceInfo localResource = mock( IExternalResourceInfo.class );\n    when( localResource.getName() ).thenReturn( \"file:///Users/home/tmp/xyz.ktr\" );\n    analyzer.validateState( descriptor, getMetaMock() );\n    IMetaverseNode resourceNode = analyzer.createResourceNode( getMetaMock(), localResource );\n    assertNotNull( resourceNode );\n    assertEquals( DictionaryConst.NODE_TYPE_FILE, resourceNode.getType() );\n\n    // remote\n    final HadoopFileMeta hMeta = ( HadoopFileMeta ) getMetaMock();\n    IExternalResourceInfo remoteResource = mock( IExternalResourceInfo.class );\n    final String hostName = \"foo.com\";\n    final String filePath = \"hdfs://\" + hostName + \"/file.csv\";\n    when( remoteResource.getName() ).thenReturn( filePath );\n    when( hMeta.getUrlHostName( filePath ) ).thenReturn( hostName );\n    resourceNode = analyzer.createResourceNode( getMetaMock(), remoteResource );\n    assertNotNull( resourceNode );\n    assertEquals( DictionaryConst.NODE_TYPE_FILE, resourceNode.getType() );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/test/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/analyzer/HadoopFileInputStepAnalyzerTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans.analyzer;\n\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mock;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.big.data.kettle.plugins.hdfs.trans.HadoopFileInputMeta;\n\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\n\n@RunWith(MockitoJUnitRunner.Silent.class)\npublic class HadoopFileInputStepAnalyzerTest\n  extends HadoopBaseStepAnalyzerTest<HadoopFileInputStepAnalyzer, HadoopFileInputMeta> {\n\n  @Mock\n  private HadoopFileInputMeta meta;\n\n  @Override\n  protected HadoopFileInputStepAnalyzer getAnalyzer() {\n    return new HadoopFileInputStepAnalyzer();\n  }\n\n  @Override\n  protected HadoopFileInputMeta getMetaMock() {\n    return meta;\n  }\n\n  @Override\n  protected Class<HadoopFileInputMeta> getMetaClass() {\n    return HadoopFileInputMeta.class;\n  }\n\n  @Test\n  public void testIsOutput() {\n    assertFalse( analyzer.isOutput() );\n  }\n\n  @Test\n  public void testIsInput() {\n    assertTrue( analyzer.isInput() );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/test/java/org/pentaho/big/data/kettle/plugins/hdfs/trans/analyzer/HadoopFileOutputStepAnalyzerTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.trans.analyzer;\n\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mock;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.big.data.kettle.plugins.hdfs.trans.HadoopFileOutputMeta;\n\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\n\n@RunWith(MockitoJUnitRunner.Silent.class)\npublic class HadoopFileOutputStepAnalyzerTest extends HadoopBaseStepAnalyzerTest<HadoopFileOutputStepAnalyzer,\n  HadoopFileOutputMeta> {\n\n  @Mock\n  private HadoopFileOutputMeta meta;\n\n  @Override\n  protected HadoopFileOutputStepAnalyzer getAnalyzer() {\n    return new HadoopFileOutputStepAnalyzer();\n  }\n\n  @Override\n  protected HadoopFileOutputMeta getMetaMock() {\n    return meta;\n  }\n\n  @Override\n  protected Class<HadoopFileOutputMeta> getMetaClass() {\n    return HadoopFileOutputMeta.class;\n  }\n\n  @Test\n  public void testIsOutput() {\n    assertTrue( analyzer.isOutput() );\n  }\n\n  @Test\n  public void testIsInput() {\n    assertFalse( analyzer.isInput() );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/test/java/org/pentaho/big/data/kettle/plugins/hdfs/vfs/HadoopVfsConnectionTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.vfs;\n\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.di.core.Props;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.variables.Variables;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 11/23/15.\n */\npublic class HadoopVfsConnectionTest {\n\n  /**\n   *\n   */\n  private static final String DEFAULT_VALUE = \"default\";\n\n  /**\n   *\n   */\n  private static final String EXPECTED_URL = \"hdfs://testUser:testPassword@testHost:testPort\";\n\n  private static final String TEST_PASSWORD = \"testPassword\";\n\n  private static final String TEST_USER = \"testUser\";\n\n  private static final String TEST_PORT = \"testPort\";\n\n  private static final String TEST_HOST = \"testHost\";\n\n  private static final String EMPTY = \"\";\n\n  @Test\n  public void testDefaultConstructor() {\n    HadoopVfsConnection hdfsConnection = new HadoopVfsConnection();\n    assertEquals( EMPTY, hdfsConnection.getHostname() );\n    assertEquals( EMPTY, hdfsConnection.getPassword() );\n    assertEquals( EMPTY, hdfsConnection.getPort() );\n    assertEquals( EMPTY, hdfsConnection.getUsername() );\n  }\n\n  @Test\n  public void testConstructorWithParameters() {\n    HadoopVfsConnection hdfsConnection = new HadoopVfsConnection( TEST_HOST, TEST_PORT, TEST_USER, TEST_PASSWORD );\n    assertEquals( TEST_HOST, 
hdfsConnection.getHostname() );\n    assertEquals( TEST_PORT, hdfsConnection.getPort() );\n    assertEquals( TEST_USER, hdfsConnection.getUsername() );\n    assertEquals( TEST_PASSWORD, hdfsConnection.getPassword() );\n  }\n\n  @Test\n  public void testConstructorWithNamedClusterAsParameter() {\n    HadoopVfsConnection hdfsConnection = new HadoopVfsConnection( getTestNamedCluster(), new Variables() );\n    assertEquals( TEST_HOST, hdfsConnection.getHostname() );\n    assertEquals( TEST_PORT, hdfsConnection.getPort() );\n    assertEquals( TEST_USER, hdfsConnection.getUsername() );\n    assertEquals( TEST_PASSWORD, hdfsConnection.getPassword() );\n  }\n\n  @Test\n  public void testConstructorWithNamedClusterAsParameter_HostNameNull() {\n    NamedCluster testNamedCluster = mock( NamedCluster.class );\n    when( testNamedCluster.getHdfsHost() ).thenReturn( null );\n    when( testNamedCluster.getHdfsPort() ).thenReturn( TEST_PORT );\n    when( testNamedCluster.getHdfsUsername() ).thenReturn( TEST_USER );\n    when( testNamedCluster.getHdfsPassword() ).thenReturn( TEST_PASSWORD );\n    when( testNamedCluster.decodePassword( TEST_PASSWORD ) ).thenReturn( TEST_PASSWORD );\n\n    HadoopVfsConnection hdfsConnection = new HadoopVfsConnection( testNamedCluster, new Variables() );\n    assertEquals( EMPTY, hdfsConnection.getHostname() );\n    assertEquals( TEST_PORT, hdfsConnection.getPort() );\n    assertEquals( TEST_USER, hdfsConnection.getUsername() );\n    assertEquals( TEST_PASSWORD, hdfsConnection.getPassword() );\n  }\n\n  @Test\n  public void testConstructorWithNamedClusterAsParameter_PortNull() {\n    NamedCluster testNamedCluster = mock( NamedCluster.class );\n    when( testNamedCluster.getHdfsHost() ).thenReturn( TEST_HOST );\n    when( testNamedCluster.getHdfsPort() ).thenReturn( null );\n    when( testNamedCluster.getHdfsUsername() ).thenReturn( TEST_USER );\n    when( testNamedCluster.getHdfsPassword() ).thenReturn( TEST_PASSWORD );\n    when( 
testNamedCluster.decodePassword( TEST_PASSWORD ) ).thenReturn( TEST_PASSWORD );\n\n    HadoopVfsConnection hdfsConnection = new HadoopVfsConnection( testNamedCluster, new Variables() );\n    assertEquals( TEST_HOST, hdfsConnection.getHostname() );\n    assertEquals( EMPTY, hdfsConnection.getPort() );\n    assertEquals( TEST_USER, hdfsConnection.getUsername() );\n    assertEquals( TEST_PASSWORD, hdfsConnection.getPassword() );\n  }\n\n  @Test\n  public void testConstructorWithNamedClusterAsParameter_UserNull() {\n    NamedCluster testNamedCluster = mock( NamedCluster.class );\n    when( testNamedCluster.getHdfsHost() ).thenReturn( TEST_HOST );\n    when( testNamedCluster.getHdfsPort() ).thenReturn( TEST_PORT );\n    when( testNamedCluster.getHdfsUsername() ).thenReturn( null );\n    when( testNamedCluster.getHdfsPassword() ).thenReturn( TEST_PASSWORD );\n    when( testNamedCluster.decodePassword( TEST_PASSWORD ) ).thenReturn( TEST_PASSWORD );\n\n    HadoopVfsConnection hdfsConnection = new HadoopVfsConnection( testNamedCluster, new Variables() );\n    assertEquals( TEST_HOST, hdfsConnection.getHostname() );\n    assertEquals( TEST_PORT, hdfsConnection.getPort() );\n    assertEquals( EMPTY, hdfsConnection.getUsername() );\n    assertEquals( TEST_PASSWORD, hdfsConnection.getPassword() );\n  }\n\n  @Test\n  public void testConstructorWithNamedClusterAsParameter_PasswordNull() {\n    NamedCluster testNamedCluster = mock( NamedCluster.class );\n    when( testNamedCluster.getHdfsHost() ).thenReturn( TEST_HOST );\n    when( testNamedCluster.getHdfsPort() ).thenReturn( TEST_PORT );\n    when( testNamedCluster.getHdfsUsername() ).thenReturn( TEST_USER );\n    when( testNamedCluster.getHdfsPassword() ).thenReturn( null );\n\n    HadoopVfsConnection hdfsConnection = new HadoopVfsConnection( testNamedCluster, new Variables() );\n    assertEquals( TEST_HOST, hdfsConnection.getHostname() );\n    assertEquals( TEST_PORT, hdfsConnection.getPort() );\n    assertEquals( TEST_USER, 
hdfsConnection.getUsername() );\n    assertEquals( EMPTY, hdfsConnection.getPassword() );\n  }\n\n  @Test\n  public void testConstructorWithNamedClusterNullAsParameter() {\n    HadoopVfsConnection hdfsConnection = new HadoopVfsConnection( null, new Variables() );\n    assertEquals( EMPTY, hdfsConnection.getHostname() );\n    assertEquals( EMPTY, hdfsConnection.getPort() );\n    assertEquals( EMPTY, hdfsConnection.getUsername() );\n    assertEquals( EMPTY, hdfsConnection.getPassword() );\n  }\n\n  @Test\n  public void testGetConnectionStringForHDFSScheme() {\n    HadoopVfsConnection hdfsConnection = new HadoopVfsConnection( getTestNamedCluster(), new Variables() );\n    assertEquals( EXPECTED_URL, hdfsConnection.getConnectionString( \"hdfs\" ) );\n  }\n\n  @Test\n  public void testGetConnectionStringForNullInputScheme() {\n    HadoopVfsConnection hdfsConnection = new HadoopVfsConnection( getTestNamedCluster(), new Variables() );\n    assertEquals( EXPECTED_URL, hdfsConnection.getConnectionString( null ) );\n  }\n\n  @Test\n  public void testGetConnectionStringForEmptyInputScheme() {\n    HadoopVfsConnection hdfsConnection = new HadoopVfsConnection( getTestNamedCluster(), new Variables() );\n    assertEquals( EXPECTED_URL, hdfsConnection.getConnectionString( EMPTY ) );\n  }\n\n  private NamedCluster getTestNamedCluster() {\n    NamedCluster nCluster = mock( NamedCluster.class );\n    when( nCluster.getHdfsHost() ).thenReturn( TEST_HOST );\n    when( nCluster.getHdfsPort() ).thenReturn( TEST_PORT );\n    when( nCluster.getHdfsUsername() ).thenReturn( TEST_USER );\n    when( nCluster.getHdfsPassword() ).thenReturn( TEST_PASSWORD );\n    when( nCluster.decodePassword( TEST_PASSWORD ) ).thenReturn( TEST_PASSWORD );\n    return nCluster;\n  }\n\n  @Test\n  public void testSetCustomParameters() throws KettleFileException {\n    Props.init( 0 );\n    HadoopVfsConnection hdfsConnection = new HadoopVfsConnection( getTestNamedCluster(), new Variables() );\n    
hdfsConnection.setCustomParameters( Props.getInstance() );\n    assertEquals( TEST_HOST, Props.getInstance().getCustomParameter( \"HadoopVfsFileChooserDialog.host\",\n      DEFAULT_VALUE ) );\n    assertEquals( TEST_PORT, Props.getInstance().getCustomParameter( \"HadoopVfsFileChooserDialog.port\",\n      DEFAULT_VALUE ) );\n    assertEquals( TEST_USER, Props.getInstance().getCustomParameter( \"HadoopVfsFileChooserDialog.user\",\n      DEFAULT_VALUE ) );\n    assertEquals( TEST_PASSWORD, Props.getInstance().getCustomParameter( \"HadoopVfsFileChooserDialog.password\",\n      DEFAULT_VALUE ) );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/test/java/org/pentaho/big/data/kettle/plugins/hdfs/vfs/HadoopVfsFileChooserDialogTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hdfs.vfs;\n\nimport org.eclipse.swt.widgets.Combo;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Tree;\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.big.data.plugins.common.ui.NamedClusterWidgetImpl;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.vfs.ui.VfsBrowser;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\nimport static org.mockito.Mockito.doNothing;\nimport static org.mockito.Mockito.doCallRealMethod;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.anyString;\n\npublic class HadoopVfsFileChooserDialogTest {\n\n  private HadoopVfsFileChooserDialog hadoopVfsFileChooserDialog = null;\n  private static final Integer SELECTED_INDEX = -1;\n  private static final String[] NAMED_CLUSTER_NAMES = {\"name1\", \"name2\", \"name3\"};\n\n  @Before\n  public void setUp() {\n    hadoopVfsFileChooserDialog = mock( HadoopVfsFileChooserDialog.class );\n  }\n\n  @After\n  public void tearDown() {\n    hadoopVfsFileChooserDialog = null;\n  }\n\n  @Test\n  public void testActivate() {\n    doCallRealMethod().when( hadoopVfsFileChooserDialog ).activate();\n\n    
VfsFileChooserDialog vfsFileChooserDialog = mock( VfsFileChooserDialog.class );\n    Combo combo = mock( Combo.class );\n    Tree tree = mock( Tree.class );\n    VfsBrowser vfsBrowser = mock( VfsBrowser.class );\n    doNothing().when( combo ).setText( anyString() );\n    vfsFileChooserDialog.openFileCombo = combo;\n    doNothing().when( tree ).removeAll();\n    vfsBrowser.fileSystemTree = tree;\n    vfsFileChooserDialog.vfsBrowser = vfsBrowser;\n    doCallRealMethod().when( vfsFileChooserDialog ).setRootFile( null );\n    doCallRealMethod().when( vfsFileChooserDialog ).setInitialFile( null );\n    hadoopVfsFileChooserDialog.vfsFileChooserDialog = vfsFileChooserDialog;\n\n    NamedClusterWidgetImplExtend namedClusterWidgetImpl = mock( NamedClusterWidgetImplExtend.class );\n    Combo namedClusterCombo = mock( Combo.class );\n    when( namedClusterCombo.getSelectionIndex() ).thenReturn( SELECTED_INDEX );\n    doNothing().when( namedClusterCombo ).removeAll();\n    doNothing().when( namedClusterCombo ).setItems( any() );\n    doNothing().when( namedClusterCombo ).select( SELECTED_INDEX );\n    when( namedClusterWidgetImpl.getNameClusterCombo() ).thenReturn( namedClusterCombo );\n    when( namedClusterWidgetImpl.getNamedClusterNames() ).thenReturn( NAMED_CLUSTER_NAMES );\n    doCallRealMethod().when( namedClusterWidgetImpl ).initiate();\n    doNothing().when( namedClusterWidgetImpl ).setSelectedNamedCluster( anyString() );\n    when( hadoopVfsFileChooserDialog.getNamedClusterWidget() ).thenReturn( namedClusterWidgetImpl );\n\n    hadoopVfsFileChooserDialog.activate();\n    verify( namedClusterWidgetImpl, times( 1 ) ).initiate();\n  }\n\n  private class NamedClusterWidgetImplExtend extends NamedClusterWidgetImpl {\n    public NamedClusterWidgetImplExtend( Composite parent, boolean showLabel, NamedClusterService namedClusterService, RuntimeTestActionService runtimeTestActionService, RuntimeTester clusterTester ) {\n      super( parent, showLabel, namedClusterService, 
runtimeTestActionService, clusterTester, false );\n    }\n\n    /*Overriding for visibility change only*/\n    @Override\n    public String[] getNamedClusterNames() {\n      return super.getNamedClusterNames();\n    }\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/test/resources/graph.properties",
    "content": "blueprints.graph=com.tinkerpop.blueprints.impls.tg.TinkerGraph"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/test/resources/sample-hadoop-file-input-step.xml",
    "content": "<entry>\n  <accept_filenames>N</accept_filenames>\n  <passing_through_fields>N</passing_through_fields>\n  <accept_field/>\n  <accept_stepname/>\n  <separator/>\n  <enclosure/>\n  <enclosure_breaks>N</enclosure_breaks>\n  <escapechar/>\n  <header>N</header>\n  <nr_headerlines>0</nr_headerlines>\n  <footer>N</footer>\n  <nr_footerlines>0</nr_footerlines>\n  <line_wrapped>N</line_wrapped>\n  <nr_wraps>0</nr_wraps>\n  <layout_paged>N</layout_paged>\n  <nr_lines_per_page>0</nr_lines_per_page>\n  <nr_lines_doc_header>0</nr_lines_doc_header>\n  <noempty>N</noempty>\n  <include>N</include>\n  <include_field/>\n  <rownum>N</rownum>\n  <rownumByFile>N</rownumByFile>\n  <rownum_field/>\n  <format/>\n  <encoding/>\n  <add_to_result_filenames>N</add_to_result_filenames>\n  <file>\n    <name>test-file-name</name>\n    <source_configuration_name>TEST-CLUSTER-NAME</source_configuration_name>\n    <filemask>test-file-mask</filemask>\n    <exclude_filemask/>\n    <file_required>N</file_required>\n    <include_subfolders>N</include_subfolders>\n    <type/>\n    <compression>None</compression>\n  </file>\n  <filters>\n  </filters>\n  <fields>\n  </fields>\n  <limit>0</limit>\n  <error_ignored>N</error_ignored>\n  <skip_bad_files>N</skip_bad_files>\n  <file_error_field/>\n  <file_error_message_field/>\n  <error_line_skipped>N</error_line_skipped>\n  <error_count_field/>\n  <error_fields_field/>\n  <error_text_field/>\n  <bad_line_files_destination_directory/>\n  <bad_line_files_extension/>\n  <error_line_files_destination_directory/>\n  <error_line_files_extension/>\n  <line_number_files_destination_directory/>\n  <line_number_files_extension/>\n  <date_format_lenient>N</date_format_lenient>\n  <date_format_locale>en_US</date_format_locale>\n  <shortFileFieldName/>\n  <pathFieldName/>\n  <hiddenFieldName/>\n  <lastModificationTimeFieldName/>\n  <uriNameFieldName/>\n  <rootUriNameFieldName/>\n  <extensionFieldName/>\n  <sizeFieldName/>\n</entry>"
  },
  {
    "path": "kettle-plugins/hdfs/core/src/test/resources/sample-hadoop-file-output-step.xml",
    "content": "<entry>\n  <separator/>\n  <enclosure/>\n  <enclosure_forced>N</enclosure_forced>\n  <enclosure_fix_disabled>N</enclosure_fix_disabled>\n  <header>N</header>\n  <footer>N</footer>\n  <format/>\n  <compression/>\n  <encoding/>\n  <endedLine/>\n  <fileNameInField>N</fileNameInField>\n  <fileNameField/>\n  <create_parent_folder>Y</create_parent_folder>\n  <file>\n    <name/>\n    <source_configuration_name>TEST-CLUSTER-NAME</source_configuration_name>\n    <is_command>N</is_command>\n    <servlet_output>N</servlet_output>\n    <do_not_open_new_file_init>N</do_not_open_new_file_init>\n    <extention/>\n    <append>N</append>\n    <split>N</split>\n    <haspartno>N</haspartno>\n    <add_date>N</add_date>\n    <add_time>N</add_time>\n    <SpecifyFormat>N</SpecifyFormat>\n    <date_time_format/>\n    <add_to_result_filenames>N</add_to_result_filenames>\n    <pad>N</pad>\n    <fast_dump>N</fast_dump>\n    <splitevery>0</splitevery>\n  </file>\n  <fields>\n  </fields>\n</entry>"
  },
  {
    "path": "kettle-plugins/hdfs/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pentaho-big-data-kettle-plugins-hdfs</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n\n  <modules>\n    <module>assemblies</module>\n    <module>core</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/hive/assemblies/plugin/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>hive-assemblies</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pdi-hive-plugin</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Hive Plugin Distribution</name>\n\n  <properties>\n    <resources.directory>${project.basedir}/src/main/resources</resources.directory>\n    <assembly.dir>${project.build.directory}/assembly</assembly.dir>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-hive-core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/hive/assemblies/plugin/src/assembly/assembly.xml",
    "content": "<assembly xmlns=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3\"\n          xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n          xsi:schemaLocation=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd\">\n  <id>zip</id>\n  <formats>\n    <format>zip</format>\n  </formats>\n\n  <baseDirectory></baseDirectory>\n\n  <fileSets>\n    <fileSet>\n      <directory>${resources.directory}</directory>\n      <outputDirectory>.</outputDirectory>\n      <filtered>true</filtered>\n    </fileSet>\n\n    <!-- the staging dir -->\n    <fileSet>\n      <directory>${assembly.dir}</directory>\n      <outputDirectory>.</outputDirectory>\n    </fileSet>\n  </fileSets>\n\n  <dependencySets>\n    <dependencySet>\n      <outputDirectory>.</outputDirectory>\n      <includes>\n        <include>pentaho:pdi-hive-core:jar</include>\n      </includes>\n      <useProjectArtifact>false</useProjectArtifact>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <outputDirectory>.</outputDirectory>\n      <useTransitiveDependencies>false</useTransitiveDependencies>\n      <useProjectArtifact>false</useProjectArtifact>\n      <includes>\n        <include>pentaho:pdi-hive-core:jar</include>\n      </includes>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <useProjectArtifact>false</useProjectArtifact>\n      <outputDirectory>lib</outputDirectory>\n      <excludes>\n        <exclude>pentaho:pdi-hive-core:*</exclude>\n      </excludes>\n      <includes>\n        <include>org.apache.hive:hive</include>\n      </includes>\n    </dependencySet>\n  </dependencySets>\n</assembly>"
  },
  {
    "path": "kettle-plugins/hive/assemblies/plugin/src/main/resources/version.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<version branch='TRUNK'>${project.version}</version>"
  },
  {
    "path": "kettle-plugins/hive/assemblies/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-hive</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>hive-assemblies</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Hive Plugin Assemblies</name>\n\n  <modules>\n    <module>plugin</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/hive/core/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-hive</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <!--groupId>pentaho</groupId-->\n  <artifactId>pdi-hive-core</artifactId>\n  <name>PDI Hive Core</name>\n  <!--version>11.0.0.0-SNAPSHOT</version-->\n  <!--packaging>jar</packaging-->\n\n  <!-- VERIFY THESE IMPORTS THAT WERE IN THE BUILD SECTION WHEN THE PLUGIN WAS OSGI. ARE THEY NEEDED?\n  <DynamicImport-Package>com.cloudera.impala.jdbc41</DynamicImport-Package>\n  <Import-Package>!com.pentaho.big.data.bundles.impl.shim.hbase,!org.pentaho.hbase.shim.api,!org.pentaho.hbase.shim.spi,!org.apache.hadoop.*,!org.pentaho.di.core.hadoop.*,!org.pentaho.hadoop.*,!org.pentaho.yarn.*,!org.apache.logging.log4j,!org.pentaho.hadoop.hbase.factory,!org.pentaho.hbase.*,org.pentaho.di.osgi,org.pentaho.di.core.plugins,org.pentaho.hadoop.shim.api.jdbc,org.pentaho.hadoop.shim.api.cluster,*</Import-Package>\n  <Bundle-Activator>org.pentaho.big.data.kettle.plugins.hive.Activator</Bundle-Activator>\n  -->\n  <build>\n    <resources>\n      <resource>\n        <directory>src/main/resources</directory>\n        <filtering>false</filtering>\n      </resource>\n      <resource>\n        <directory>src/main/resources-filtered</directory>\n        <filtering>true</filtering>\n      </resource>\n    </resources>\n  </build>\n\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-api</artifactId>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      
<groupId>org.pentaho</groupId>\n      <artifactId>commons-database-model</artifactId>\n      <version>${commons-database.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.di.plugins</groupId>\n      <artifactId>pentaho-metastore-locator-api</artifactId>\n      <version>${pdi.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-platform-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-cluster</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.osgi</groupId>\n      <artifactId>osgi.core</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>pentaho-hadoop-shims-common-services-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.hamcrest</groupId>\n      <artifactId>java-hamcrest</artifactId>\n      <version>2.0.0.0</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito.version}</version>\n      
<scope>test</scope>\n    </dependency>\n  </dependencies>\n\n</project>\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/apache/hadoop/hive/jdbc/HiveDriver.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.apache.hadoop.hive.jdbc;\n\nimport org.pentaho.big.data.kettle.plugins.hive.DummyDriver;\n\n/**\n * DummyDriver implementation to avoid CNF exception\n * when Hive2DatabaseMeta is loaded.  See DummyDriver.\n */\npublic class HiveDriver extends DummyDriver {\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/apache/hive/jdbc/HiveDriver.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.apache.hive.jdbc;\n\nimport org.pentaho.big.data.kettle.plugins.hive.DummyDriver;\n\n/**\n * DummyDriver implementation to avoid CNF exception\n * when HiveDatabaseMeta is loaded.  See DummyDriver.\n */\npublic class HiveDriver extends DummyDriver {\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/apache/hive/jdbc/HiveSimbaDriver.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.apache.hive.jdbc;\n\nimport org.pentaho.big.data.kettle.plugins.hive.DummyDriver;\n\n/**\n * DummyDriver implementation to avoid CNF exception\n * when HiveSimbaDatabaseMeta is loaded.  See DummyDriver.\n */\npublic class HiveSimbaDriver extends DummyDriver {\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/apache/hive/jdbc/ImpalaDriver.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.apache.hive.jdbc;\n\nimport org.pentaho.big.data.kettle.plugins.hive.DummyDriver;\n\n/**\n * DummyDriver implementation to avoid CNF exception\n * when ImpalaDatabaseMeta is loaded.  See DummyDriver.\n */\npublic class ImpalaDriver extends DummyDriver {\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/apache/hive/jdbc/ImpalaSimbaDriver.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.apache.hive.jdbc;\n\nimport org.pentaho.big.data.kettle.plugins.hive.DummyDriver;\n\n/**\n * DummyDriver implementation to avoid CNF exception\n * when ImpalaSimbaDatabaseMeta is loaded.  See DummyDriver.\n */\npublic class ImpalaSimbaDriver extends DummyDriver {\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/apache/hive/jdbc/SparkSqlSimbaDriver.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.apache.hive.jdbc;\n\nimport org.pentaho.big.data.kettle.plugins.hive.DummyDriver;\n\n/**\n * DummyDriver implementation to avoid CNF exception\n * when SparkSqlSimbaDriver is loaded.  See DummyDriver.\n */\npublic class SparkSqlSimbaDriver extends DummyDriver {\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/Activator.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.osgi.framework.BundleActivator;\nimport org.osgi.framework.BundleContext;\nimport org.osgi.framework.ServiceRegistration;\nimport org.pentaho.database.IDatabaseDialect;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.function.Supplier;\nimport java.util.stream.Collectors;\n\n/**\n * Created by bryan on 5/6/16.\n */\npublic class Activator implements BundleActivator {\n  private static final String I_DATABASE_DIALECT_CANONICAL_NAME = IDatabaseDialect.class.getCanonicalName();\n\n  private final List<ServiceRegistration> serviceRegistrations = new ArrayList<>();\n\n  private final List<Supplier<IDatabaseDialect>> databaseDialectSuppliers = Collections.unmodifiableList( Arrays\n    .asList( Hive2DatabaseDialect::new, ImpalaDatabaseDialect::new,\n      ImpalaSimbaDatabaseDialect::new, SparkSimbaDatabaseDialect::new ) );\n\n  @Override public void start( BundleContext context ) throws Exception {\n    serviceRegistrations.addAll( databaseDialectSuppliers.stream()\n      .map( supplier -> (ServiceRegistration) context\n        .registerService( I_DATABASE_DIALECT_CANONICAL_NAME, supplier.get(), null ) )\n      .collect( Collectors.toList() ) );\n  }\n\n  @Override public void stop( BundleContext context ) throws Exception {\n    serviceRegistrations.forEach( ServiceRegistration::unregister );\n    serviceRegistrations.clear();\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/BaseSimbaDatabaseMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\n\nimport static com.google.common.base.Strings.isNullOrEmpty;\nimport static org.pentaho.big.data.kettle.plugins.hive.SimbaUrl.KRB_HOST_FQDN;\nimport static org.pentaho.big.data.kettle.plugins.hive.SimbaUrl.KRB_SERVICE_NAME;\n\nabstract class BaseSimbaDatabaseMeta extends Hive2DatabaseMeta {\n\n  @VisibleForTesting static final String URL_IS_CONFIGURED_THROUGH_JNDI = \"Url is configured through JNDI\";\n\n  BaseSimbaDatabaseMeta( DriverLocator driverLocator, NamedClusterService namedClusterService,\n                         MetastoreLocator metastoreLocator ) {\n    super( driverLocator, namedClusterService, metastoreLocator );\n  }\n\n  BaseSimbaDatabaseMeta( DriverLocator driverLocator, NamedClusterService namedClusterService ) {\n    super( driverLocator, namedClusterService );\n  }\n\n  protected abstract String getJdbcPrefix();\n\n  @Override\n  public abstract String getDriverClass();\n\n  @Override public int[] getAccessTypeList() {\n    return new int[] { DatabaseMeta.TYPE_ACCESS_NATIVE, DatabaseMeta.TYPE_ACCESS_JNDI };\n  }\n\n  @Override public String getURL( String hostname, String port, String databaseName ) {\n    return SimbaUrl.Builder.create()\n      
.withAccessType( getAccessType() )\n      .withDatabaseName( databaseName )\n      .withPort( port )\n      .withDefaultPort( getDefaultDatabasePort() )\n      .withHostname( hostname )\n      .withJdbcPrefix( getJdbcPrefix() )\n      .withUsername( getUsername() )\n      .withPassword( getPassword() )\n      .withIsKerberos( isKerberos() )\n      .build()\n      .getURL();\n  }\n\n  private String getExtraProperty( String key ) {\n    return getAttributes().getProperty( ATTRIBUTE_PREFIX_EXTRA_OPTION + getPluginId() + \".\" + key );\n  }\n\n  private String getProperty( String key ) {\n    return getAttributes().getProperty( key );\n  }\n\n  /**\n   * This method assumes that Hive has no concept of primary and technical keys and auto increment columns. We are\n   * ignoring the tk, pk and useAutoinc parameters.\n   */\n  @Override\n  public String getFieldDefinition( ValueMetaInterface v, String tk, String pk, boolean useAutoinc,\n                                    boolean addFieldname, boolean addCr ) {\n    StringBuilder retval = new StringBuilder();\n\n    String fieldname = v.getName();\n    int length = v.getLength();\n    int precision = v.getPrecision();\n\n    if ( addFieldname ) {\n      retval.append( fieldname ).append( ' ' );\n    }\n    int type = v.getType();\n    switch ( type ) {\n      case ValueMetaInterface.TYPE_BOOLEAN:\n        retval.append( \"BOOLEAN\" );\n        break;\n\n      case ValueMetaInterface.TYPE_DATE:\n        retval.append( \"DATE\" );\n        break;\n\n      case ValueMetaInterface.TYPE_TIMESTAMP:\n        retval.append( \"TIMESTAMP\" );\n        break;\n\n      case ValueMetaInterface.TYPE_STRING:\n        retval.append( \"VARCHAR\" );\n        break;\n\n      case ValueMetaInterface.TYPE_NUMBER:\n      case ValueMetaInterface.TYPE_INTEGER:\n      case ValueMetaInterface.TYPE_BIGNUMBER:\n        // Integer values...\n        if ( precision == 0 ) {\n          if ( length > 9 ) {\n            if ( length < 19 ) {\n            
  // can hold signed values between -9223372036854775808 and 9223372036854775807\n              // 18 significant digits\n              retval.append( \"BIGINT\" );\n            } else {\n              retval.append( \"FLOAT\" );\n            }\n          } else {\n            retval.append( \"INT\" );\n          }\n        } else {\n          // Floating point values...\n          if ( length > 15 ) {\n            retval.append( \"FLOAT\" );\n          } else {\n            // A double-precision floating-point number is accurate to approximately 15 decimal places.\n            // http://mysql.mirrors-r-us.net/doc/refman/5.1/en/numeric-type-overview.html\n            retval.append( \"DOUBLE\" );\n          }\n        }\n        break;\n    }\n    return retval.toString();\n  }\n\n  /**\n   * Assume kerberos if any of the kerb props have been set.\n   */\n  private boolean isKerberos() {\n    return !( isNullOrEmpty( getProperty( KRB_HOST_FQDN ) )\n      && isNullOrEmpty( getExtraProperty( KRB_HOST_FQDN ) )\n      && isNullOrEmpty( getProperty( KRB_SERVICE_NAME ) )\n      && isNullOrEmpty( getExtraProperty( KRB_SERVICE_NAME ) ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/DatabaseMetaWithVersion.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\nimport org.pentaho.di.core.database.BaseDatabaseMeta;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.platform.api.data.DBDatasourceServiceException;\nimport org.pentaho.platform.api.data.IDBDatasourceService;\nimport org.pentaho.platform.engine.core.system.PentahoSystem;\n\nimport javax.sql.DataSource;\nimport java.sql.Connection;\nimport java.sql.DatabaseMetaData;\nimport java.sql.Driver;\nimport java.sql.SQLException;\n\n/**\n * Created by bryan on 4/14/16.\n */\npublic abstract class DatabaseMetaWithVersion extends BaseDatabaseMeta {\n  private static final Logger logger = LogManager.getLogger( DatabaseMetaWithVersion.class );\n  private final DriverLocator driverLocator;\n\n  protected DatabaseMetaWithVersion( DriverLocator driverLocator ) {\n    this.driverLocator = driverLocator;\n  }\n\n  @Override public abstract String getURL( String hostname, String port, String databaseName );\n\n  /**\n   * Check that the version of the driver being used is at least the driver you want. 
If you do not care about the minor\n   * version, pass in a 0 (the assumption being that the minor version will ALWAYS be 0 or greater).\n   *\n   * @return true if the version being used is equal to or newer than the one requested; false if it is older\n   */\n  protected boolean isDriverVersion( int majorVersion, int minorVersion ) {\n    int driverMajorVersion;\n    int driverMinorVersion;\n\n    // If it is a JNDI connection\n    if ( getAccessType() == DatabaseMeta.TYPE_ACCESS_JNDI ) {\n      IDBDatasourceService dss = PentahoSystem.get( IDBDatasourceService.class );\n\n      DataSource dataSource = null;\n      try {\n        dataSource = dss.getDataSource( this.getDatabaseName() );\n      } catch ( DBDatasourceServiceException e ) {\n        logger.error( e.getMessage(), e );\n      }\n      if ( dataSource == null ) {\n        // The data source could not be looked up; the driver version cannot be verified.\n        return false;\n      }\n\n      try ( Connection connection = dataSource.getConnection() ) {\n        // Read the driver version while the connection is still open.\n        DatabaseMetaData meta = connection.getMetaData();\n        driverMajorVersion = meta.getDriverMajorVersion();\n        driverMinorVersion = meta.getDriverMinorVersion();\n      } catch ( SQLException e ) {\n        logger.error( e.getMessage(), e );\n        return false;\n      }\n    // if it is a JDBC or ODBC connection\n    } else {\n      String url = getURL( \"localhost\", \"10000\", \"default\" );\n\n      Driver driver = driverLocator.getDriver( url );\n      driverMajorVersion = driver.getMajorVersion();\n      driverMinorVersion = driver.getMinorVersion();\n    }\n\n    return driverMajorVersion > majorVersion || ( driverMajorVersion == majorVersion\n      && driverMinorVersion >= minorVersion );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/DummyDriver.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport java.sql.Connection;\nimport java.sql.Driver;\nimport java.sql.DriverPropertyInfo;\nimport java.sql.SQLException;\nimport java.sql.SQLFeatureNotSupportedException;\nimport java.util.Properties;\nimport java.util.logging.Logger;\n\n/**\n * DummyDriver is a bare Driver implementation used as a way\n * to avoid ClassNotFoundException when kettle attempts to load\n * the class associated with each meta.\n *\n * The classes which extend DummyDriver have the same unique\n * names as the name exposed by .getDriverClass() in the\n * DatabaseMeta implementation.\n *\n * Created by bryan on 3/30/16.\n */\npublic class DummyDriver implements Driver {\n  @Override public Connection connect( String url, Properties info ) throws SQLException {\n    return null;\n  }\n\n  @Override public boolean acceptsURL( String url ) throws SQLException {\n    return false;\n  }\n\n  @Override public DriverPropertyInfo[] getPropertyInfo( String url, Properties info ) throws SQLException {\n    return new DriverPropertyInfo[ 0 ];\n  }\n\n  @Override public int getMajorVersion() {\n    return 0;\n  }\n\n  @Override public int getMinorVersion() {\n    return 0;\n  }\n\n  @Override public boolean jdbcCompliant() {\n    return false;\n  }\n\n  @Override public Logger getParentLogger() throws SQLFeatureNotSupportedException {\n    return null;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/Hive2DatabaseDialect.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.pentaho.database.DatabaseDialectException;\nimport org.pentaho.database.IValueMeta;\nimport org.pentaho.database.dialect.AbstractDatabaseDialect;\nimport org.pentaho.database.model.DatabaseAccessType;\nimport org.pentaho.database.model.DatabaseType;\nimport org.pentaho.database.model.IDatabaseConnection;\nimport org.pentaho.database.model.IDatabaseType;\n\npublic class Hive2DatabaseDialect extends AbstractDatabaseDialect {\n\n  public Hive2DatabaseDialect() {\n    super();\n  }\n\n  /**\n   * UID for serialization\n   */\n  private static final long serialVersionUID = -8456961348836455937L;\n\n  protected static final int DEFAULT_PORT = 10000;\n\n  private static final IDatabaseType DBTYPE =\n    new DatabaseType( \"Hadoop Hive 2\", \"HIVE2\", DatabaseAccessType.getList( DatabaseAccessType.NATIVE,\n      DatabaseAccessType.JNDI ), DEFAULT_PORT,\n      \"http://www.cloudera.com/content/support/en/documentation/cloudera-impala/cloudera-impala-documentation-v1\"\n        + \"-latest.html\" );\n\n  public IDatabaseType getDatabaseType() {\n    return DBTYPE;\n  }\n\n  @Override\n  public String getNativeDriver() {\n    return \"org.apache.hive.jdbc.HiveDriver\";\n  }\n\n  @Override\n  public String getURL( IDatabaseConnection connection ) throws DatabaseDialectException {\n    StringBuffer urlBuffer = new StringBuffer( getNativeJdbcPre() );\n    /*\n     * String username = connection.getUsername(); if(username != null && !\"\".equals(username)) {\n     * urlBuffer.append(username); String 
password = connection.getPassword(); if(password != null &&\n     * !\"\".equals(password)) { urlBuffer.append(\":\"); urlBuffer.append(password); } urlBuffer.append(\"@\"); }\n     */\n    urlBuffer.append( connection.getHostname() );\n    urlBuffer.append( \":\" );\n    urlBuffer.append( connection.getDatabasePort() );\n    urlBuffer.append( \"/\" );\n    urlBuffer.append( connection.getDatabaseName() );\n    return urlBuffer.toString();\n  }\n\n  @Override\n  public String getNativeJdbcPre() {\n    return \"jdbc:hive2://\";\n  }\n\n  /**\n   * Generates the SQL statement to add a column to the specified table\n   *\n   * @param tablename   The table to add\n   * @param v           The column defined as a value\n   * @param tk          the name of the technical key field\n   * @param use_autoinc whether or not this field uses auto increment\n   * @param pk          the name of the primary key field\n   * @param semicolon   whether or not to add a semi-colon behind the statement.\n   * @return the SQL statement to add a column to the specified table\n   */\n  @Override\n  public String getAddColumnStatement( String tablename, IValueMeta v, String tk, boolean use_autoinc, String pk,\n                                       boolean semicolon ) {\n    return \"ALTER TABLE \" + tablename + \" ADD \" + getFieldDefinition( v, tk, pk, use_autoinc, true, false );\n  }\n\n  /**\n   * Generates the SQL statement to modify a column in the specified table\n   *\n   * @param tablename   The table to add\n   * @param v           The column defined as a value\n   * @param tk          the name of the technical key field\n   * @param use_autoinc whether or not this field uses auto increment\n   * @param pk          the name of the primary key field\n   * @param semicolon   whether or not to add a semi-colon behind the statement.\n   * @return the SQL statement to modify a column in the specified table\n   */\n  @Override\n  public String getModifyColumnStatement( String tablename, 
IValueMeta v, String tk, boolean use_autoinc, String pk,\n                                          boolean semicolon ) {\n    return \"ALTER TABLE \" + tablename + \" MODIFY \" + getFieldDefinition( v, tk, pk, use_autoinc, true, false );\n  }\n\n  @Override\n  public String getFieldDefinition( IValueMeta v, String tk, String pk, boolean use_autoinc, boolean add_fieldname,\n                                    boolean add_cr ) {\n    String retval = \"\";\n\n    String fieldname = v.getName();\n    int length = v.getLength();\n    int precision = v.getPrecision();\n\n    if ( add_fieldname ) {\n      retval += fieldname + \" \";\n    }\n    int type = v.getType();\n    switch ( type ) {\n      case IValueMeta.TYPE_DATE:\n        retval += \"DATETIME\";\n        break;\n      case IValueMeta.TYPE_BOOLEAN:\n        if ( supportsBooleanDataType() ) {\n          retval += \"BOOLEAN\";\n        } else {\n          retval += \"CHAR(1)\";\n        }\n        break;\n\n      case IValueMeta.TYPE_NUMBER:\n      case IValueMeta.TYPE_INTEGER:\n      case IValueMeta.TYPE_BIGNUMBER:\n        if ( fieldname.equalsIgnoreCase( tk ) || // Technical key\n          fieldname.equalsIgnoreCase( pk ) // Primary key\n        ) {\n          if ( use_autoinc ) {\n            retval += \"BIGINT AUTO_INCREMENT NOT NULL PRIMARY KEY\";\n          } else {\n            retval += \"BIGINT NOT NULL PRIMARY KEY\";\n          }\n        } else {\n          // Integer values...\n          if ( precision == 0 ) {\n            if ( length > 9 ) {\n              if ( length < 19 ) {\n                // can hold signed values between -9223372036854775808 and 9223372036854775807\n                // 18 significant digits\n                retval += \"BIGINT\";\n              } else {\n                retval += \"DECIMAL(\" + length + \")\";\n              }\n            } else {\n              retval += \"INT\";\n            }\n          } else {\n            // Floating point values...\n            if ( 
length > 15 ) {\n              retval += \"DECIMAL(\" + length;\n              if ( precision > 0 ) {\n                retval += \", \" + precision;\n              }\n              retval += \")\";\n            } else {\n              // A double-precision floating-point number is accurate to approximately 15 decimal places.\n              // http://mysql.mirrors-r-us.net/doc/refman/5.1/en/numeric-type-overview.html\n              retval += \"DOUBLE\";\n            }\n          }\n        }\n        break;\n      case IValueMeta.TYPE_STRING:\n        if ( length > 0 ) {\n          if ( length == 1 ) {\n            retval += \"CHAR(1)\";\n          } else if ( length < 256 ) {\n            retval += \"VARCHAR(\" + length + \")\";\n          } else if ( length < 65536 ) {\n            retval += \"TEXT\";\n          } else if ( length < 16777215 ) {\n            retval += \"MEDIUMTEXT\";\n          } else {\n            retval += \"LONGTEXT\";\n          }\n        } else {\n          retval += \"TINYTEXT\";\n        }\n        break;\n      case IValueMeta.TYPE_BINARY:\n        retval += \"LONGBLOB\";\n        break;\n      default:\n        retval += \" UNKNOWN\";\n        break;\n    }\n\n    if ( add_cr ) {\n      retval += CR;\n    }\n\n    return retval;\n  }\n\n  @Override\n  public String[] getUsedLibraries() {\n    return new String[] { \"pentaho-hadoop-hive-jdbc-shim-1.4-SNAPSHOT.jar\" };\n  }\n\n  @Override\n  public int getDefaultDatabasePort() {\n    return DEFAULT_PORT;\n  }\n\n  /*\n   * (non-Javadoc)\n   * \n   * @see org.pentaho.database.dialect.AbstractDatabaseDialect#supportsSchemas()\n   */\n  @Override\n  public boolean supportsSchemas() {\n    return false;\n  }\n\n  @Override public boolean initialize( String classname ) {\n    return true;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/Hive2DatabaseMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport org.pentaho.big.data.api.jdbc.impl.DriverLocatorImpl;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.plugins.DatabaseMetaPlugin;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\n\nimport java.util.Collection;\nimport java.util.List;\nimport java.util.Map;\n\n@DatabaseMetaPlugin( type = \"HIVE2\", typeDescription = \"Hadoop Hive 2/3\" )\npublic class Hive2DatabaseMeta extends DatabaseMetaWithVersion {\n  public static final String URL_PREFIX = \"jdbc:hive2://\";\n  public static final String SELECT_COUNT_1_FROM = \"select count(1) from \";\n  public static final String ALIAS_SUFFIX = \"_col\";\n  public static final String VIEW = \"VIEW\";\n  public static final String VIRTUAL_VIEW = \"VIRTUAL_VIEW\";\n  public static final String TRUNCATE_TABLE = \"TRUNCATE TABLE \";\n  public static final int[] ACCESS_TYPE_LIST = new int[] { DatabaseMeta.TYPE_ACCESS_NATIVE };\n  protected static final String JAR_FILE = \"hive-jdbc-0.10.0-pentaho.jar\";\n  protected static final 
String DRIVER_CLASS_NAME = \"org.apache.hive.jdbc.HiveDriver\";\n  protected NamedClusterService namedClusterService;\n  protected MetastoreLocator metastoreLocator;\n  private final Logger logger = LogManager.getLogger( Hive2DatabaseMeta.class );\n\n  public Hive2DatabaseMeta() {\n    this( DriverLocatorImpl.getInstance() );\n  }\n\n  public Hive2DatabaseMeta( DriverLocator driverLocator ) {\n    this( driverLocator, NamedClusterManager.getInstance() );\n  }\n\n  // OSGi constructor\n  public Hive2DatabaseMeta( DriverLocator driverLocator, NamedClusterService namedClusterService ) {\n    super( driverLocator );\n    this.namedClusterService = namedClusterService;\n  }\n\n  public synchronized MetastoreLocator getMetastoreLocator() {\n    if ( this.metastoreLocator == null ) {\n      try {\n        Collection<MetastoreLocator> metastoreLocators = PluginServiceLoader.loadServices( MetastoreLocator.class );\n        this.metastoreLocator = metastoreLocators.stream().findFirst().get();\n      } catch ( Exception e ) {\n        logger.error( \"Error getting metastore locator\", e );\n      }\n    }\n    return this.metastoreLocator;\n  }\n\n  @VisibleForTesting\n  protected Hive2DatabaseMeta( DriverLocator driverLocator, NamedClusterService namedClusterService,\n                            MetastoreLocator metastoreLocator ) {\n    super( driverLocator );\n    this.namedClusterService = namedClusterService;\n    this.metastoreLocator = metastoreLocator;\n  }\n\n  @Override\n  public int[] getAccessTypeList() {\n    return ACCESS_TYPE_LIST;\n  }\n\n  @Override\n  public String getAddColumnStatement( String tablename, ValueMetaInterface v, String tk, boolean useAutoinc,\n                                       String pk, boolean semicolon ) {\n\n    return \"ALTER TABLE \" + tablename + \" ADD \" + getFieldDefinition( v, tk, pk, useAutoinc, true, false );\n\n  }\n\n  @Override\n  public String getDriverClass() {\n\n    //  !!!  
We will probably have to change this if we are providing our own driver,\n    //  i.e., before our code is committed to the Hadoop Hive project.\n    return DRIVER_CLASS_NAME;\n  }\n\n  /**\n   * This method assumes that Hive has no concept of primary and technical keys and auto increment columns. We are\n   * ignoring the tk, pk and useAutoinc parameters.\n   */\n  @Override\n  public String getFieldDefinition( ValueMetaInterface v, String tk, String pk, boolean useAutoinc,\n                                    boolean addFieldname, boolean addCr ) {\n\n    String retval = \"\";\n\n    String fieldname = v.getName();\n    int length = v.getLength();\n    int precision = v.getPrecision();\n\n    if ( addFieldname ) {\n      retval += fieldname + \" \";\n    }\n\n    int type = v.getType();\n    switch ( type ) {\n\n      case ValueMetaInterface.TYPE_BOOLEAN:\n        retval += \"BOOLEAN\";\n        break;\n\n      case ValueMetaInterface.TYPE_DATE:\n        retval += \"DATE\";\n        break;\n\n      case ValueMetaInterface.TYPE_TIMESTAMP:\n        retval += \"TIMESTAMP\";\n        break;\n\n      case ValueMetaInterface.TYPE_STRING:\n        retval += \"STRING\";\n        break;\n\n      case ValueMetaInterface.TYPE_NUMBER:\n      case ValueMetaInterface.TYPE_INTEGER:\n      case ValueMetaInterface.TYPE_BIGNUMBER:\n        // Integer values...\n        if ( precision == 0 ) {\n          if ( length > 9 ) {\n            if ( length < 19 ) {\n              // can hold signed values between -9223372036854775808 and 9223372036854775807\n              // 18 significant digits\n              retval += \"BIGINT\";\n            } else {\n              retval += \"FLOAT\";\n            }\n          } else {\n            retval += \"INT\";\n          }\n        } else {\n          // Floating point values...\n          if ( length > 15 ) {\n            retval += \"FLOAT\";\n          } else {\n            // A double-precision floating-point number is accurate to 
approximately 15 decimal places.\n            // http://mysql.mirrors-r-us.net/doc/refman/5.1/en/numeric-type-overview.html\n            retval += \"DOUBLE\";\n          }\n        }\n\n        break;\n    }\n\n    return retval;\n  }\n\n  @Override\n  public String getModifyColumnStatement( String tablename, ValueMetaInterface v, String tk, boolean useAutoinc,\n                                          String pk, boolean semicolon ) {\n\n    return \"ALTER TABLE \" + tablename + \" MODIFY \" + getFieldDefinition( v, tk, pk, useAutoinc, true, false );\n  }\n\n  @Override\n  public String getURL( String hostname, String port, String databaseName ) {\n\n    return URL_PREFIX + hostname + \":\" + port + \"/\" + databaseName;\n  }\n\n  @Override\n  public String[] getUsedLibraries() {\n\n    return new String[] { JAR_FILE };\n  }\n\n  /**\n   * Build the SQL to count the number of rows in the passed table.\n   *\n   * @param tableName\n   * @return\n   */\n  @Override\n  public String getSelectCountStatement( String tableName ) {\n    return SELECT_COUNT_1_FROM + tableName;\n  }\n\n  @Override\n  public String generateColumnAlias( int columnIndex, String suggestedName ) {\n    return suggestedName;\n  }\n\n  /**\n   * Quotes around table names are not valid Hive QL\n   * <p/>\n   * return an empty string for the start quote\n   */\n  public String getStartQuote() {\n    return \"\";\n  }\n\n  /**\n   * Quotes around table names are not valid Hive QL\n   * <p/>\n   * return an empty string for the end quote\n   */\n  public String getEndQuote() {\n    return \"\";\n  }\n\n  /**\n   * @return a list of table types to retrieve tables for the database\n   */\n  @Override\n  public String[] getTableTypes() {\n    return null;\n  }\n\n  /**\n   * @return a list of table types to retrieve views for the database\n   */\n  @Override\n  public String[] getViewTypes() {\n    return new String[] { VIEW, VIRTUAL_VIEW };\n  }\n\n  /**\n   * @param tableName The table to be 
truncated.\n   * @return The SQL statement to truncate a table: remove all rows from it without a transaction\n   */\n  @Override\n  public String getTruncateTableStatement( String tableName ) {\n    return TRUNCATE_TABLE + tableName;\n  }\n\n  @Override\n  public boolean supportsSetCharacterStream() {\n    return false;\n  }\n\n  @Override\n  public boolean supportsBatchUpdates() {\n    return false;\n  }\n\n  @Override\n  public boolean supportsTimeStampToDateConversion() {\n    return false;\n  }\n\n  @Override public List<String> getNamedClusterList() {\n    try {\n      return namedClusterService.listNames( getMetastoreLocator().getMetastore() );\n    } catch ( MetaStoreException e ) {\n      logger.error( \"Error getting named cluster list\", e );\n      return null;\n    }\n  }\n\n  @Override\n  public void putOptionalOptions( Map<String, String> extraOptions ) {\n    if ( getNamedCluster() != null && getNamedCluster().trim().length() > 0 ) {\n      extraOptions.put( getPluginId() + \".pentahoNamedCluster\", getNamedCluster() );\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/Hive2SimbaDatabaseDialect.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.pentaho.database.DatabaseDialectException;\nimport org.pentaho.database.model.DatabaseAccessType;\nimport org.pentaho.database.model.DatabaseConnection;\nimport org.pentaho.database.model.DatabaseType;\nimport org.pentaho.database.model.IDatabaseConnection;\nimport org.pentaho.database.model.IDatabaseType;\n\nimport static com.google.common.base.Strings.isNullOrEmpty;\nimport static org.pentaho.big.data.kettle.plugins.hive.SimbaUrl.KRB_HOST_FQDN;\nimport static org.pentaho.big.data.kettle.plugins.hive.SimbaUrl.KRB_SERVICE_NAME;\n\n/**\n * User: Dzmitry Stsiapanau Date: 8/28/2015 Time: 10:23\n */\npublic class Hive2SimbaDatabaseDialect extends Hive2DatabaseDialect {\n  public static final String SOCKET_TIMEOUT_OPTION = \"SocketTimeout\";\n  public static final String DEFAULT_SOCKET_TIMEOUT = \"10\";\n\n  public Hive2SimbaDatabaseDialect() {\n    super();\n  }\n\n  /**\n   * UID for serialization\n   */\n  private static final long serialVersionUID = -8456961348836455937L;\n\n  private static final IDatabaseType DBTYPE =\n    new DatabaseType( \"Hadoop Hive 2 (Simba)\", \"HIVE2SIMBA\",\n      DatabaseAccessType.getList( DatabaseAccessType.NATIVE,\n        DatabaseAccessType.JNDI ), DEFAULT_PORT,\n      \"http://www.simba.com/connectors/apache-hadoop-hive-driver\" );\n\n  public IDatabaseType getDatabaseType() {\n    return DBTYPE;\n  }\n\n  @Override\n  public String getNativeDriver() {\n    return \"org.apache.hive.jdbc.HiveSimbaDriver\";\n  }\n\n  @Override\n  public String getURL( 
IDatabaseConnection databaseConnection ) throws DatabaseDialectException {\n    return SimbaUrl.Builder.create()\n      .withAccessType( databaseConnection.getAccessType().ordinal() )\n      .withDatabaseName( databaseConnection.getDatabaseName() )\n      .withPort( databaseConnection.getDatabasePort() )\n      .withDefaultPort( getDefaultDatabasePort() )\n      .withHostname( databaseConnection.getHostname() )\n      .withJdbcPrefix( getNativeJdbcPre() )\n      .withUsername( databaseConnection.getUsername() )\n      .withPassword( databaseConnection.getPassword() )\n      .withIsKerberos( isKerberos( databaseConnection ) )\n      .build()\n      .getURL();\n  }\n\n  private String getExtraProperty( String key, IDatabaseConnection databaseConnection ) {\n    return databaseConnection.getAttributes()\n      .get( DatabaseConnection.ATTRIBUTE_PREFIX_EXTRA_OPTION + getDatabaseType().getShortName() + \".\" + key );\n  }\n\n  private String getProperty( String key, IDatabaseConnection databaseConnection ) {\n    return databaseConnection.getExtraOptions().get( getDatabaseType().getShortName() + \".\" + key );\n  }\n\n  @Override\n  public String getNativeJdbcPre() {\n    return \"jdbc:hive2://\";\n  }\n\n  @Override\n  public String[] getUsedLibraries() {\n    return new String[] { \"HiveJDBC41.jar\" };\n  }\n\n  @Override public boolean initialize( String classname ) {\n    return true;\n  }\n\n  public boolean isKerberos( IDatabaseConnection databaseConnection ) {\n    return !( isNullOrEmpty( getProperty( KRB_HOST_FQDN, databaseConnection ) )\n      && isNullOrEmpty( getExtraProperty( KRB_HOST_FQDN, databaseConnection ) )\n      && isNullOrEmpty( getProperty( KRB_SERVICE_NAME, databaseConnection ) )\n      && isNullOrEmpty( getExtraProperty( KRB_SERVICE_NAME, databaseConnection ) ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/Hive2SimbaDatabaseMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\n\n// Intentionally disabled.  The Simba Hive driver is currently unsupported.\n//@DatabaseMetaPlugin( type = \"HIVE2SIMBA\", typeDescription = \"Hadoop Hive 2 with Simba Driver\" )\npublic class Hive2SimbaDatabaseMeta extends BaseSimbaDatabaseMeta {\n\n  @VisibleForTesting static final String JAR_FILE = \"HiveJDBC41.jar\";\n  @VisibleForTesting static final String DRIVER_CLASS_NAME = \"org.apache.hive.jdbc.HiveSimbaDriver\";\n  @VisibleForTesting static final String JDBC_URL_PREFIX = \"jdbc:hive2://\";\n  @VisibleForTesting static final int DEFAULT_PORT = 10000;\n\n\n  public Hive2SimbaDatabaseMeta( DriverLocator driverLocator, NamedClusterService namedClusterService ) {\n    super( driverLocator, namedClusterService );\n  }\n\n  @Override protected String getJdbcPrefix() {\n    return JDBC_URL_PREFIX;\n  }\n\n  @Override\n  public String getDriverClass() {\n    return DRIVER_CLASS_NAME;\n  }\n\n  @Override\n  public String[] getUsedLibraries() {\n    return new String[] { JAR_FILE };\n  }\n\n  @Override\n  public int getDefaultDatabasePort() {\n    return DEFAULT_PORT;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/HiveDatabaseDialect.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.pentaho.database.DatabaseDialectException;\nimport org.pentaho.database.IValueMeta;\nimport org.pentaho.database.dialect.AbstractDatabaseDialect;\nimport org.pentaho.database.model.DatabaseAccessType;\nimport org.pentaho.database.model.DatabaseType;\nimport org.pentaho.database.model.IDatabaseConnection;\nimport org.pentaho.database.model.IDatabaseType;\n\npublic class HiveDatabaseDialect extends AbstractDatabaseDialect {\n\n  public HiveDatabaseDialect() {\n    super();\n  }\n\n  /**\n   * UID for serialization\n   */\n  private static final long serialVersionUID = -8456961348836455937L;\n\n  private static final int DEFAULT_PORT = 10000;\n\n  private static final IDatabaseType DBTYPE = new DatabaseType( \"Hadoop Hive (deprecated)\", \"HIVE\", DatabaseAccessType.getList(\n      DatabaseAccessType.NATIVE, DatabaseAccessType.JNDI ), DEFAULT_PORT,\n      \"https://cwiki.apache.org/Hive/hiveclient.html\" );\n\n  public IDatabaseType getDatabaseType() {\n    return DBTYPE;\n  }\n\n  @Override\n  public String getNativeDriver() {\n    return \"org.apache.hadoop.hive.jdbc.HiveDriver\";\n  }\n\n  @Override\n  public String getURL( IDatabaseConnection connection ) throws DatabaseDialectException {\n    StringBuffer urlBuffer = new StringBuffer( getNativeJdbcPre() );\n    /*\n     * String username = connection.getUsername(); if(username != null && !\"\".equals(username)) {\n     * urlBuffer.append(username); String password = connection.getPassword(); if(password != null &&\n     * 
!\"\".equals(password)) { urlBuffer.append(\":\"); urlBuffer.append(password); } urlBuffer.append(\"@\"); }\n     */\n    urlBuffer.append( connection.getHostname() );\n    urlBuffer.append( \":\" );\n    urlBuffer.append( connection.getDatabasePort() );\n    urlBuffer.append( \"/\" );\n    urlBuffer.append( connection.getDatabaseName() );\n    return urlBuffer.toString();\n  }\n\n  @Override\n  public String getNativeJdbcPre() {\n    return \"jdbc:hive://\";\n  }\n\n  /**\n   * Generates the SQL statement to add a column to the specified table\n   * \n   * @param tablename\n   *          The table to add\n   * @param v\n   *          The column defined as a value\n   * @param tk\n   *          the name of the technical key field\n   * @param use_autoinc\n   *          whether or not this field uses auto increment\n   * @param pk\n   *          the name of the primary key field\n   * @param semicolon\n   *          whether or not to add a semi-colon behind the statement.\n   * @return the SQL statement to add a column to the specified table\n   */\n  @Override\n  public String getAddColumnStatement( String tablename, IValueMeta v, String tk, boolean use_autoinc, String pk,\n      boolean semicolon ) {\n    return \"ALTER TABLE \" + tablename + \" ADD \" + getFieldDefinition( v, tk, pk, use_autoinc, true, false );\n  }\n\n  /**\n   * Generates the SQL statement to modify a column in the specified table\n   * \n   * @param tablename\n   *          The table to add\n   * @param v\n   *          The column defined as a value\n   * @param tk\n   *          the name of the technical key field\n   * @param use_autoinc\n   *          whether or not this field uses auto increment\n   * @param pk\n   *          the name of the primary key field\n   * @param semicolon\n   *          whether or not to add a semi-colon behind the statement.\n   * @return the SQL statement to modify a column in the specified table\n   */\n  @Override\n  public String getModifyColumnStatement( 
String tablename, IValueMeta v, String tk, boolean use_autoinc, String pk,\n      boolean semicolon ) {\n    return \"ALTER TABLE \" + tablename + \" MODIFY \" + getFieldDefinition( v, tk, pk, use_autoinc, true, false );\n  }\n\n  @Override\n  public String getFieldDefinition( IValueMeta v, String tk, String pk, boolean use_autoinc, boolean add_fieldname,\n      boolean add_cr ) {\n    String retval = \"\";\n\n    String fieldname = v.getName();\n    int length = v.getLength();\n    int precision = v.getPrecision();\n\n    if ( add_fieldname ) {\n      retval += fieldname + \" \";\n    }\n\n    int type = v.getType();\n    switch ( type ) {\n      case IValueMeta.TYPE_DATE:\n        retval += \"DATETIME\";\n        break;\n      case IValueMeta.TYPE_BOOLEAN:\n        if ( supportsBooleanDataType() ) {\n          retval += \"BOOLEAN\";\n        } else {\n          retval += \"CHAR(1)\";\n        }\n        break;\n\n      case IValueMeta.TYPE_NUMBER:\n      case IValueMeta.TYPE_INTEGER:\n      case IValueMeta.TYPE_BIGNUMBER:\n        if ( fieldname.equalsIgnoreCase( tk ) || // Technical key\n            fieldname.equalsIgnoreCase( pk ) // Primary key\n        ) {\n          if ( use_autoinc ) {\n            retval += \"BIGINT AUTO_INCREMENT NOT NULL PRIMARY KEY\";\n          } else {\n            retval += \"BIGINT NOT NULL PRIMARY KEY\";\n          }\n        } else {\n          // Integer values...\n          if ( precision == 0 ) {\n            if ( length > 9 ) {\n              if ( length < 19 ) {\n                // can hold signed values between -9223372036854775808 and 9223372036854775807\n                // 18 significant digits\n                retval += \"BIGINT\";\n              } else {\n                retval += \"DECIMAL(\" + length + \")\";\n              }\n            } else {\n              retval += \"INT\";\n            }\n          } else {\n            // Floating point values...\n            if ( length > 15 ) {\n              retval += 
\"DECIMAL(\" + length;\n              if ( precision > 0 ) {\n                retval += \", \" + precision;\n              }\n              retval += \")\";\n            } else {\n              // A double-precision floating-point number is accurate to approximately 15 decimal places.\n              // http://mysql.mirrors-r-us.net/doc/refman/5.1/en/numeric-type-overview.html\n              retval += \"DOUBLE\";\n            }\n          }\n        }\n        break;\n      case IValueMeta.TYPE_STRING:\n        if ( length > 0 ) {\n          if ( length == 1 ) {\n            retval += \"CHAR(1)\";\n          } else if ( length < 256 ) {\n            retval += \"VARCHAR(\" + length + \")\";\n          } else if ( length < 65536 ) {\n            retval += \"TEXT\";\n          } else if ( length < 16777215 ) {\n            retval += \"MEDIUMTEXT\";\n          } else {\n            retval += \"LONGTEXT\";\n          }\n        } else {\n          retval += \"TINYTEXT\";\n        }\n        break;\n      case IValueMeta.TYPE_BINARY:\n        retval += \"LONGBLOB\";\n        break;\n      default:\n        retval += \" UNKNOWN\";\n        break;\n    }\n\n    if ( add_cr ) {\n      retval += CR;\n    }\n\n    return retval;\n  }\n\n  @Override\n  public String[] getUsedLibraries() {\n    return new String[] { \"pentaho-hadoop-hive-jdbc-shim-1.4-SNAPSHOT.jar\" };\n  }\n\n  @Override\n  public int getDefaultDatabasePort() {\n    return DEFAULT_PORT;\n  }\n\n  /*\n   * (non-Javadoc)\n   * \n   * @see org.pentaho.database.dialect.AbstractDatabaseDialect#supportsSchemas()\n   */\n  @Override\n  public boolean supportsSchemas() {\n    return false;\n  }\n\n  @Override public boolean initialize( String classname ) {\n    return true;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/HiveDatabaseMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.pentaho.big.data.api.jdbc.impl.DriverLocatorImpl;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.plugins.DatabaseMetaPlugin;\nimport org.pentaho.di.core.row.ValueMetaInterface;\n\n@DatabaseMetaPlugin( type = \"HIVE\", typeDescription = \"Hadoop Hive (deprecated)\" )\npublic class HiveDatabaseMeta extends DatabaseMetaWithVersion {\n\n  public static final String URL_PREFIX = \"jdbc:hive://\";\n  public static final String SELECT_COUNT_1_FROM = \"select count(1) from \";\n  public static final String VIEW = \"VIEW\";\n  public static final String VIRTUAL_VIEW = \"VIRTUAL_VIEW\";\n  public static final String TRUNCATE_TABLE = \"TRUNCATE TABLE \";\n  protected static final String JAR_FILE = \"hive-jdbc-cdh4.2.0-release-pentaho.jar\";\n  protected static final String DRIVER_CLASS_NAME = \"org.apache.hadoop.hive.jdbc.HiveDriver\";\n  protected static final int DEFAULT_PORT = 10000;\n\n  public HiveDatabaseMeta() {\n    this( DriverLocatorImpl.getInstance() );\n  }\n  public HiveDatabaseMeta( DriverLocator driverLocator ) {\n    super( driverLocator );\n  }\n\n  @Override\n  public int[] getAccessTypeList() {\n    return new int[] { DatabaseMeta.TYPE_ACCESS_NATIVE };\n  }\n\n  @Override\n  public String getAddColumnStatement( String tablename, ValueMetaInterface v, String tk, boolean useAutoinc,\n                                       String pk, boolean semicolon ) {\n\n    
return \"ALTER TABLE \" + tablename + \" ADD \" + getFieldDefinition( v, tk, pk, useAutoinc, true, false );\n\n  }\n\n  @Override\n  public String getDriverClass() {\n\n    //  !!!  We will probably have to change this if we are providing our own driver,\n    //  i.e., before our code is committed to the Hadoop Hive project.\n    return DRIVER_CLASS_NAME;\n  }\n\n  /**\n   * This method assumes that Hive has no concept of primary and technical keys and auto increment columns. We are\n   * ignoring the tk, pk and useAutoinc parameters.\n   */\n  @Override\n  public String getFieldDefinition( ValueMetaInterface v, String tk, String pk, boolean useAutoinc,\n                                    boolean addFieldname, boolean addCr ) {\n\n    String retval = \"\";\n\n    String fieldname = v.getName();\n    int length = v.getLength();\n    int precision = v.getPrecision();\n\n    if ( addFieldname ) {\n      retval += fieldname + \" \";\n    }\n\n    int type = v.getType();\n    switch ( type ) {\n\n      case ValueMetaInterface.TYPE_BOOLEAN:\n        retval += \"BOOLEAN\";\n        break;\n\n      // Hive does not support DATE until 0.12\n      case ValueMetaInterface.TYPE_DATE:\n        if ( isDriverVersion( 0, 12 ) ) {\n          retval += \"DATE\";\n        } else {\n          throw new IllegalArgumentException( \"Date types not supported in this version of Hive\" );\n        }\n        break;\n\n      // Hive does not support TIMESTAMP until 0.8\n      case ValueMetaInterface.TYPE_TIMESTAMP:\n        if ( isDriverVersion( 0, 8 ) ) {\n          retval += \"TIMESTAMP\";\n        } else {\n          throw new IllegalArgumentException( \"Timestamp types not supported in this version of Hive\" );\n        }\n        break;\n\n      case ValueMetaInterface.TYPE_STRING:\n        retval += \"STRING\";\n        break;\n\n      case ValueMetaInterface.TYPE_NUMBER:\n      case ValueMetaInterface.TYPE_INTEGER:\n      case ValueMetaInterface.TYPE_BIGNUMBER:\n        // Integer 
values...\n        if ( precision == 0 ) {\n          if ( length > 9 ) {\n            if ( length < 19 ) {\n              // can hold signed values between -9223372036854775808 and 9223372036854775807\n              // 18 significant digits\n              retval += \"BIGINT\";\n            } else {\n              retval += \"FLOAT\";\n            }\n          } else {\n            retval += \"INT\";\n          }\n        } else {\n          // Floating point values...\n          if ( length > 15 ) {\n            retval += \"FLOAT\";\n          } else {\n            // A double-precision floating-point number is accurate to approximately 15 decimal places.\n            // http://mysql.mirrors-r-us.net/doc/refman/5.1/en/numeric-type-overview.html\n            retval += \"DOUBLE\";\n          }\n        }\n\n        break;\n    }\n\n    return retval;\n  }\n\n  @Override\n  public String getModifyColumnStatement( String tablename, ValueMetaInterface v, String tk, boolean useAutoinc,\n                                          String pk, boolean semicolon ) {\n\n    return \"ALTER TABLE \" + tablename + \" MODIFY \" + getFieldDefinition( v, tk, pk, useAutoinc, true, false );\n  }\n\n  @Override\n  public String getURL( String hostname, String port, String databaseName ) {\n    if ( Const.isEmpty( port ) ) {\n      port = Integer.toString( getDefaultDatabasePort() );\n    }\n\n    return URL_PREFIX + hostname + \":\" + port + \"/\" + databaseName;\n  }\n\n  @Override\n  public String[] getUsedLibraries() {\n\n    return new String[] { JAR_FILE };\n  }\n\n  /**\n   * Build the SQL to count the number of rows in the passed table.\n   *\n   * @param tableName\n   * @return\n   */\n  @Override\n  public String getSelectCountStatement( String tableName ) {\n    return SELECT_COUNT_1_FROM + tableName;\n  }\n\n  @Override\n  public String generateColumnAlias( int columnIndex, String suggestedName ) {\n    if ( isDriverVersion( 0, 6 ) ) {\n      return suggestedName;\n    } 
else {\n      // For version 0.5 and prior:\n      // Column aliases are currently not supported in Hive.  The default column alias\n      // generated is in the format '_col##' where ## = column index.  Use this format\n      // so the result can be mapped back correctly.\n      return \"_col\" + String.valueOf( columnIndex ); //$NON-NLS-1$\n    }\n  }\n\n  /**\n   * Quotes around table names are not valid Hive QL\n   * <p/>\n   * return an empty string for the start quote\n   */\n  public String getStartQuote() {\n    return \"\";\n  }\n\n  /**\n   * Quotes around table names are not valid Hive QL\n   * <p/>\n   * return an empty string for the end quote\n   */\n  public String getEndQuote() {\n    return \"\";\n  }\n\n  @Override\n  public int getDefaultDatabasePort() {\n    return DEFAULT_PORT;\n  }\n\n  /**\n   * @return a list of table types to retrieve tables for the database\n   */\n  @Override\n  public String[] getTableTypes() {\n    return null;\n  }\n\n  /**\n   * @return a list of table types to retrieve views for the database\n   */\n  @Override\n  public String[] getViewTypes() {\n    return new String[] { VIEW, VIRTUAL_VIEW };\n  }\n\n  /**\n   * @param tableName The table to be truncated.\n   * @return The SQL statement to truncate a table: remove all rows from it without a transaction\n   */\n  @Override\n  public String getTruncateTableStatement( String tableName ) {\n    if ( isDriverVersion( 0, 11 ) ) {\n      return TRUNCATE_TABLE + tableName;\n    }\n    return null;\n  }\n\n  @Override\n  public boolean supportsSetCharacterStream() {\n    return false;\n  }\n\n  @Override\n  public boolean supportsBatchUpdates() {\n    return false;\n  }\n\n  @Override\n  public boolean supportsTimeStampToDateConversion() {\n    return false;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/HiveWarehouseDatabaseMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.pentaho.big.data.api.jdbc.impl.DriverLocatorImpl;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.di.core.plugins.DatabaseMetaPlugin;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\n\n@DatabaseMetaPlugin( type = \"HIVEWAREHOUSE\", typeDescription = \"Hive Warehouse Connector\" )\npublic class HiveWarehouseDatabaseMeta extends Hive2DatabaseMeta {\n  public HiveWarehouseDatabaseMeta() {\n    this( DriverLocatorImpl.getInstance(), NamedClusterManager.getInstance() );\n  }\n  public HiveWarehouseDatabaseMeta( DriverLocator driverLocator, NamedClusterService namedClusterService ) {\n    super( driverLocator, namedClusterService );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/ImpalaDatabaseDialect.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.pentaho.database.DatabaseDialectException;\nimport org.pentaho.database.model.DatabaseAccessType;\nimport org.pentaho.database.model.DatabaseConnection;\nimport org.pentaho.database.model.DatabaseType;\nimport org.pentaho.database.model.IDatabaseConnection;\nimport org.pentaho.database.model.IDatabaseType;\n\npublic class ImpalaDatabaseDialect extends Hive2DatabaseDialect {\n\n  public ImpalaDatabaseDialect() {\n    super();\n  }\n\n  /**\n   * UID for serialization\n   */\n  private static final long serialVersionUID = -6685869374136347923L;\n\n  private static final int DEFAULT_PORT = 21050;\n\n  private static final IDatabaseType DBTYPE =\n    new DatabaseType( \"Impala\", \"IMPALA\", DatabaseAccessType.getList( DatabaseAccessType.NATIVE,\n      DatabaseAccessType.JNDI ), DEFAULT_PORT,\n      \"http://www.cloudera.com/content/support/en/documentation/cloudera-impala/cloudera-impala-documentation-v1\"\n        + \"-latest.html\" );\n\n  public IDatabaseType getDatabaseType() {\n    return DBTYPE;\n  }\n\n  @Override\n  public String getNativeDriver() {\n    return \"org.apache.hive.jdbc.ImpalaDriver\";\n  }\n\n  @Override\n  public String getURL( IDatabaseConnection connection ) throws DatabaseDialectException {\n    StringBuffer urlBuffer = new StringBuffer( getNativeJdbcPre() );\n    /*\n     * String username = connection.getUsername(); if(username != null && !\"\".equals(username)) {\n     * urlBuffer.append(username); String password = connection.getPassword(); if(password != null 
&&\n     * !\"\".equals(password)) { urlBuffer.append(\":\"); urlBuffer.append(password); } urlBuffer.append(\"@\"); }\n     */\n    urlBuffer.append( connection.getHostname() );\n    urlBuffer.append( \":\" );\n    urlBuffer.append( connection.getDatabasePort() );\n    urlBuffer.append( \"/\" );\n    urlBuffer.append( connection.getDatabaseName() );\n\n    String principalPropertyName = getDatabaseType().getShortName() + \".principal\";\n    String principal = connection.getExtraOptions().get( principalPropertyName );\n    String extraPrincipal =\n      connection.getAttributes().get( DatabaseConnection.ATTRIBUTE_PREFIX_EXTRA_OPTION + principalPropertyName );\n    urlBuffer.append( \";impala_db=true\" );\n    if ( principal != null || extraPrincipal != null ) {\n      return urlBuffer.toString();\n    }\n\n    urlBuffer.append( \";auth=noSasl\" );\n    return urlBuffer.toString();\n  }\n\n  @Override\n  public String getNativeJdbcPre() {\n    return \"jdbc:hive2://\";\n  }\n\n  @Override\n  public String[] getUsedLibraries() {\n    return new String[] { \"pentaho-hadoop-hive-jdbc-shim-1.4-SNAPSHOT.jar\" };\n  }\n\n  @Override\n  public int getDefaultDatabasePort() {\n    return DEFAULT_PORT;\n  }\n\n  @Override public boolean initialize( String classname ) {\n    return true;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/ImpalaDatabaseMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.pentaho.big.data.api.jdbc.impl.DriverLocatorImpl;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.database.DatabaseInterface;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.plugins.DatabaseMetaPlugin;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.Map;\n\n@DatabaseMetaPlugin( type = \"IMPALA\", typeDescription = \"Impala\" )\npublic class ImpalaDatabaseMeta extends Hive2DatabaseMeta implements DatabaseInterface {\n\n  public static final String AUTH_NO_SASL = \";auth=noSasl\";\n  protected static final String JAR_FILE = \"hive-jdbc-cdh4.2.0-release-pentaho.jar\";\n  protected static final String DRIVER_CLASS_NAME = \"org.apache.hive.jdbc.HiveDriver\";\n  protected static final int DEFAULT_PORT = 21050;\n\n  private static final Logger logChannel = LogManager.getLogger( ImpalaDatabaseMeta.class );\n\n  @VisibleForTesting\n  ImpalaDatabaseMeta( DriverLocator driverLocator, NamedClusterService namedClusterService,\n             
                MetastoreLocator metastoreLocator ) {\n    super( driverLocator, namedClusterService,  metastoreLocator );\n  }\n\n  public ImpalaDatabaseMeta() {\n    this( DriverLocatorImpl.getInstance(), NamedClusterManager.getInstance() );\n  }\n\n  // OSGi constructor\n  public ImpalaDatabaseMeta( DriverLocator driverLocator, NamedClusterService namedClusterService ) {\n    super( driverLocator, namedClusterService );\n  }\n\n  @Override\n  public int[] getAccessTypeList() {\n    return new int[] { DatabaseMeta.TYPE_ACCESS_NATIVE };\n  }\n\n  @Override\n  public String getDriverClass() {\n\n    //  !!!  We will probably have to change this if we are providing our own driver,\n    //  i.e., before our code is committed to the Hadoop Hive project.\n    return DRIVER_CLASS_NAME;\n  }\n\n  /**\n   * This method assumes that Hive has no concept of primary and technical keys and auto increment columns. We are\n   * ignoring the tk, pk and useAutoinc parameters.\n   */\n  @Override\n  public String getFieldDefinition( ValueMetaInterface v, String tk, String pk, boolean useAutoinc,\n                                    boolean addFieldname, boolean addCr ) {\n\n    String retval = \"\";\n\n    String fieldname = v.getName();\n    int length = v.getLength();\n    int precision = v.getPrecision();\n\n    if ( addFieldname ) {\n      retval += fieldname + \" \";\n    }\n\n    int type = v.getType();\n    switch ( type ) {\n\n      case ValueMetaInterface.TYPE_BOOLEAN:\n        retval += \"BOOLEAN\";\n        break;\n\n      // Hive does not support DATE until 0.12 - check Impala version against Hive\n      case ValueMetaInterface.TYPE_DATE:\n      case ValueMetaInterface.TYPE_TIMESTAMP:\n        if ( isDriverVersion( 0, 8 ) ) {\n          retval += \"TIMESTAMP\";\n        } else {\n          throw new IllegalArgumentException( \"Timestamp types not supported in this version of Impala\" );\n        }\n        break;\n\n      case ValueMetaInterface.TYPE_STRING:\n        
retval += \"STRING\";\n        break;\n\n      case ValueMetaInterface.TYPE_NUMBER:\n      case ValueMetaInterface.TYPE_INTEGER:\n      case ValueMetaInterface.TYPE_BIGNUMBER:\n        // Integer values...\n        if ( precision == 0 ) {\n          if ( length > 9 ) {\n            if ( length < 19 ) {\n              // can hold signed values between -9223372036854775808 and 9223372036854775807\n              // 18 significant digits\n              retval += \"BIGINT\";\n            } else {\n              retval += \"FLOAT\";\n            }\n          } else {\n            retval += \"INT\";\n          }\n        } else {\n          // Floating point values...\n          if ( length > 15 ) {\n            retval += \"FLOAT\";\n          } else {\n            // A double-precision floating-point number is accurate to approximately 15 decimal places.\n            // http://mysql.mirrors-r-us.net/doc/refman/5.1/en/numeric-type-overview.html\n            retval += \"DOUBLE\";\n          }\n        }\n\n        break;\n    }\n\n    return retval;\n  }\n\n  @Override\n  public String getURL( String hostname, String port, String databaseName ) {\n    StringBuilder urlBuffer = new StringBuilder();\n    if ( Const.isEmpty( port ) ) {\n      port = Integer.toString( getDefaultDatabasePort() );\n    }\n    String principal = getAttributes().getProperty( \"principal\" );\n    String extraPrincipal =\n      getAttributes().getProperty( ATTRIBUTE_PREFIX_EXTRA_OPTION + getPluginId() + \".principal\" );\n    urlBuffer.append( \"jdbc:hive2://\" ).append( hostname ).append( \":\" ).append( port ).append( \"/\" )\n      .append( databaseName );\n    if ( principal == null && extraPrincipal == null ) {\n      urlBuffer.append( AUTH_NO_SASL );\n    }\n    urlBuffer.append( \";impala_db=true\" );\n    return urlBuffer.toString();\n  }\n\n  @Override\n  public String[] getUsedLibraries() {\n\n    return new String[] { JAR_FILE };\n  }\n\n  @Override\n  public int getDefaultDatabasePort() 
{\n    return DEFAULT_PORT;\n  }\n\n  @Override public List<String> getNamedClusterList() {\n    try {\n      return namedClusterService.listNames( metastoreLocator.getMetastore() );\n    } catch ( MetaStoreException e ) {\n      logChannel.error( e.getMessage(), e );\n      return Collections.emptyList();\n    }\n  }\n\n  @Override\n  public void putOptionalOptions( Map<String, String> extraOptions ) {\n    if ( getNamedCluster() != null && getNamedCluster().trim().length() > 0 ) {\n      extraOptions.put( getPluginId() + \".pentahoNamedCluster\", getNamedCluster() );\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/ImpalaSimbaDatabaseDialect.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport com.google.common.base.Joiner;\nimport com.google.common.collect.ImmutableMap;\nimport org.pentaho.database.model.DatabaseAccessType;\nimport org.pentaho.database.model.DatabaseType;\nimport org.pentaho.database.model.IDatabaseType;\n\n/**\n * User: Dzmitry Stsiapanau Date: 8/28/2015 Time: 10:23\n */\npublic class ImpalaSimbaDatabaseDialect extends Hive2SimbaDatabaseDialect {\n  public static final String DB_TYPE_NAME_SHORT = \"IMPALASIMBA\";\n\n  public ImpalaSimbaDatabaseDialect() {\n    super();\n  }\n\n  /**\n   * UID for serialization\n   */\n  private static final long serialVersionUID = -8456961348836455937L;\n\n  protected static final int DEFAULT_PORT = 21050;\n\n  protected static final String JDBC_URL_TEMPLATE = \"jdbc:impala://%s:%s/%s;AuthMech=%d%s\";\n\n  private static final IDatabaseType DBTYPE =\n    new DatabaseType( \"Cloudera Impala\", DB_TYPE_NAME_SHORT,\n      DatabaseAccessType.getList( DatabaseAccessType.NATIVE,\n        DatabaseAccessType.JNDI ), DEFAULT_PORT,\n      \"http://go.cloudera.com/odbc-driver-hive-impala.html\",\n        \"\",\n        ImmutableMap.<String, String>builder().put( Joiner.on( \".\" ).join( DB_TYPE_NAME_SHORT, SOCKET_TIMEOUT_OPTION ),\n            DEFAULT_SOCKET_TIMEOUT ).build()\n    );\n\n  public IDatabaseType getDatabaseType() {\n    return DBTYPE;\n  }\n\n  @Override\n  public String getNativeDriver() {\n    return \"org.apache.hive.jdbc.ImpalaSimbaDriver\";\n  }\n\n  @Override\n  public String getNativeJdbcPre() {\n    return 
\"jdbc:impala://\";\n  }\n\n  @Override\n  public int getDefaultDatabasePort() {\n    return DEFAULT_PORT;\n  }\n\n  @Override\n  public String[] getUsedLibraries() {\n    return new String[] { \"ImpalaJDBC41.jar\" };\n  }\n\n\n  @Override public boolean initialize( String classname ) {\n    return true;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/ImpalaSimbaDatabaseMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.pentaho.big.data.api.jdbc.impl.DriverLocatorImpl;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.plugins.DatabaseMetaPlugin;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\n\nimport java.util.HashMap;\nimport java.util.Map;\n\n@DatabaseMetaPlugin( type = \"IMPALASIMBA\", typeDescription = \"Cloudera Impala\" )\npublic class ImpalaSimbaDatabaseMeta extends BaseSimbaDatabaseMeta {\n\n  protected static final String JAR_FILE = \"ImpalaJDBC41.jar\";\n  protected static final String JDBC_URL_PREFIX = \"jdbc:impala://\";\n  protected static final String DRIVER_CLASS_NAME = \"com.cloudera.impala.jdbc41.Driver\";\n  protected static final int DEFAULT_PORT = 21050;\n  protected static final String SOCKET_TIMEOUT_OPTION = \"SocketTimeout\";\n\n  public ImpalaSimbaDatabaseMeta() {\n    this( DriverLocatorImpl.getInstance(), NamedClusterManager.getInstance() );\n  }\n  public ImpalaSimbaDatabaseMeta( DriverLocator driverLocator, NamedClusterService namedClusterService ) {\n    super( driverLocator, namedClusterService );\n  }\n\n  @Override protected String getJdbcPrefix() {\n    return JDBC_URL_PREFIX;\n  }\n\n  @Override\n  public String getDriverClass() {\n    return DRIVER_CLASS_NAME;\n  }\n\n  @Override\n  public String[] getUsedLibraries() {\n    return new String[] { JAR_FILE };\n  }\n\n  @Override\n  public int 
getDefaultDatabasePort() {\n    return DEFAULT_PORT;\n  }\n\n  @Override\n  public Map<String, String> getDefaultOptions() {\n    HashMap<String, String> options = new HashMap<>();\n    options.put( String.format( \"%s.%s\", getPluginId(), SOCKET_TIMEOUT_OPTION ), \"10\" );\n\n    return options;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/SimbaUrl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.pentaho.di.core.database.DatabaseMeta;\n\nimport static com.google.common.base.Strings.isNullOrEmpty;\n\npublic class SimbaUrl {\n\n  @VisibleForTesting static final String KRB_HOST_FQDN = \"KrbHostFQDN\";\n  @VisibleForTesting static final String KRB_SERVICE_NAME = \"KrbServiceName\";\n  @VisibleForTesting static final String URL_IS_CONFIGURED_THROUGH_JNDI = \"Url is configured through JNDI\";\n\n  final String jdbcPrefix;\n  private String username;\n  private String password;\n  private boolean isKerberos;\n  private int accessType;\n  private int defaultPort;\n  private String port;\n  private String hostname;\n  private String databaseName;\n\n  private String jdbcUrlTemplate;\n  private static final String DEFAULT_DB = \"default\";\n\n  public SimbaUrl( Builder builder ) {\n    this.jdbcPrefix = builder.jdbcPrefix;\n    this.username = builder.username;\n    this.password = builder.password;\n    this.isKerberos = builder.isKerberos;\n    this.accessType = builder.accessType;\n    this.defaultPort = builder.defaultPort;\n    this.port = builder.port;\n    this.hostname = builder.hostname;\n    this.databaseName = builder.databaseName;\n    this.jdbcUrlTemplate = jdbcPrefix + \"%s:%d/%s;AuthMech=%d%s\";\n  }\n\n  public String getURL() {\n    Integer portNumber;\n    if ( isNullOrEmpty( port ) ) {\n      portNumber = defaultPort;\n    } else {\n      portNumber = Integer.valueOf( port );\n    }\n    if ( isNullOrEmpty( 
databaseName ) ) {\n      databaseName = DEFAULT_DB;\n    }\n    switch ( accessType ) {\n      case DatabaseMeta.TYPE_ACCESS_JNDI: {\n        return URL_IS_CONFIGURED_THROUGH_JNDI;\n      }\n      case DatabaseMeta.TYPE_ACCESS_NATIVE:\n      default: {\n        Integer authMethod = 0;\n        StringBuilder additional = new StringBuilder();\n        String userName = username;\n        String password = this.password;\n        if ( isKerberos ) {\n          authMethod = 1;\n        } else if ( !isNullOrEmpty( userName ) ) {\n          additional.append( \";UID=\" );\n          additional.append( userName );\n          if ( !isNullOrEmpty( password ) ) {\n            authMethod = 3;\n            additional.append( \";PWD=\" );\n            additional.append( password );\n          } else {\n            authMethod = 2;\n          }\n        }\n        return String.format( jdbcUrlTemplate, hostname, portNumber, databaseName, authMethod, additional );\n      }\n    }\n  }\n\n  public static final class Builder {\n    private String jdbcPrefix;\n    private int accessType;\n    private String databaseName;\n    private int defaultPort;\n    private String hostname;\n    private boolean isKerberos;\n    private String password;\n    private String port;\n    private String username;\n\n    private Builder() {\n    }\n\n    public static Builder create() {\n      return new Builder();\n    }\n\n    public Builder withAccessType( int accessType ) {\n      this.accessType = accessType;\n      return this;\n    }\n\n    public Builder withDatabaseName( String databaseName ) {\n      this.databaseName = databaseName;\n      return this;\n    }\n\n    public Builder withDefaultPort( int defaultPort ) {\n      this.defaultPort = defaultPort;\n      return this;\n    }\n\n    public Builder withHostname( String hostname ) {\n      this.hostname = hostname;\n      return this;\n    }\n\n    public Builder withIsKerberos( boolean isKerberos ) {\n      this.isKerberos = 
isKerberos;\n      return this;\n    }\n\n    public Builder withJdbcPrefix( String jdbcPrefix ) {\n      this.jdbcPrefix = jdbcPrefix;\n      return this;\n    }\n\n    public Builder withPassword( String password ) {\n      this.password = password;\n      return this;\n    }\n\n    public Builder withPort( String port ) {\n      this.port = port;\n      return this;\n    }\n\n    public Builder withUsername( String username ) {\n      this.username = username;\n      return this;\n    }\n\n    public SimbaUrl build() {\n      SimbaUrl simbaUrl =\n        new SimbaUrl( this );\n      return simbaUrl;\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/SparkSimbaDatabaseDialect.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport com.google.common.base.Joiner;\nimport com.google.common.collect.ImmutableMap;\nimport org.pentaho.database.model.DatabaseAccessType;\nimport org.pentaho.database.model.DatabaseType;\nimport org.pentaho.database.model.IDatabaseType;\n\npublic class SparkSimbaDatabaseDialect extends Hive2SimbaDatabaseDialect {\n  public static final String DB_TYPE_NAME_SHORT = \"SPARKSIMBA\";\n\n  public SparkSimbaDatabaseDialect() {\n    super();\n  }\n\n  private static final long serialVersionUID = 5665821298486490578L;\n\n  @VisibleForTesting static final IDatabaseType DBTYPE =\n    new DatabaseType( \"SparkSQL\", DB_TYPE_NAME_SHORT,\n      DatabaseAccessType.getList( DatabaseAccessType.NATIVE,\n        DatabaseAccessType.JNDI ), SparkSimbaDatabaseMeta.DEFAULT_PORT,\n      \"http://www.simba.com/drivers/spark-jdbc-odbc/\",\n        \"\",\n        ImmutableMap.<String, String>builder().put( Joiner.on( \".\" ).join( DB_TYPE_NAME_SHORT, SOCKET_TIMEOUT_OPTION ),\n            DEFAULT_SOCKET_TIMEOUT ).build()\n    );\n\n  public IDatabaseType getDatabaseType() {\n    return DBTYPE;\n  }\n\n\n  @Override\n  public String getNativeDriver() {\n    return SparkSimbaDatabaseMeta.DRIVER_CLASS_NAME;\n  }\n\n  @Override\n  public String getNativeJdbcPre() {\n    return SparkSimbaDatabaseMeta.JDBC_URL_PREFIX;\n  }\n\n  @Override\n  public int getDefaultDatabasePort() {\n    return DBTYPE.getDefaultDatabasePort();\n  }\n\n  @Override\n  public String[] 
getUsedLibraries() {\n    return new String[] { SparkSimbaDatabaseMeta.JAR_FILE };\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/main/java/org/pentaho/big/data/kettle/plugins/hive/SparkSimbaDatabaseMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.pentaho.big.data.api.jdbc.impl.DriverLocatorImpl;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.plugins.DatabaseMetaPlugin;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\n\nimport java.util.HashMap;\nimport java.util.Map;\n\n@DatabaseMetaPlugin( type = \"SPARKSIMBA\", typeDescription = \"SparkSQL\" )\npublic class SparkSimbaDatabaseMeta extends BaseSimbaDatabaseMeta {\n\n  @VisibleForTesting static final String JDBC_URL_PREFIX = \"jdbc:spark://\";\n  @VisibleForTesting static final String DRIVER_CLASS_NAME = \"org.apache.hive.jdbc.SparkSqlSimbaDriver\";\n  @VisibleForTesting static final String JAR_FILE = \"SparkJDBC41.jar\";\n  @VisibleForTesting static final int DEFAULT_PORT = 10015;\n  @VisibleForTesting static final String SOCKET_TIMEOUT_OPTION = \"SocketTimeout\";\n  private final String LIMIT_1 = \" LIMIT 1\";\n\n  public SparkSimbaDatabaseMeta() {\n    this( DriverLocatorImpl.getInstance(), NamedClusterManager.getInstance() );\n  }\n  public SparkSimbaDatabaseMeta( DriverLocator driverLocator, NamedClusterService namedClusterService ) {\n    super( driverLocator, namedClusterService );\n  }\n\n  @Override public int[] getAccessTypeList() {\n    return new int[] { DatabaseMeta.TYPE_ACCESS_NATIVE, 
DatabaseMeta.TYPE_ACCESS_JNDI };\n  }\n\n  @Override protected String getJdbcPrefix() {\n    return JDBC_URL_PREFIX;\n  }\n\n  @Override\n  public String getDriverClass() {\n    return DRIVER_CLASS_NAME;\n  }\n\n  @Override\n  public String getSQLQueryFields( String tableName ) {\n    return \"SELECT * FROM \" + tableName + LIMIT_1;\n  }\n\n  @Override\n  public String getStartQuote() {\n    return \"`\";\n  }\n\n  @Override\n  public String getEndQuote() {\n    return \"`\";\n  }\n\n  @Override\n  public String getSQLTableExists( String tablename ) {\n    return \"SELECT 1 FROM \" + tablename + LIMIT_1;\n  }\n\n  @Override\n  public String getTruncateTableStatement( String tableName ) {\n    return \"TRUNCATE TABLE \" + tableName;\n  }\n\n  @Override\n  public String getSQLColumnExists( String columnname, String tablename ) {\n    return \"SELECT \" + columnname + \" FROM \" + tablename + LIMIT_1;\n  }\n\n  @Override\n  public String getLimitClause( int nrRows ) {\n    return \" LIMIT \" + nrRows;\n  }\n\n  @Override\n  public String getSelectCountStatement( String tableName ) {\n    return SELECT_COUNT_STATEMENT + \" \" + tableName;\n  }\n\n  @Override\n  public String getDropColumnStatement( String tablename, ValueMetaInterface v, String tk, boolean use_autoinc,\n                                        String pk, boolean semicolon ) {\n    return \"\";\n  }\n\n  @Override\n  public String getAddColumnStatement( String tablename, ValueMetaInterface v, String tk, boolean useAutoinc,\n                                       String pk, boolean semicolon ) {\n    return \"\";\n  }\n\n  @Override\n  public String getModifyColumnStatement( String tablename, ValueMetaInterface v, String tk, boolean useAutoinc,\n                                          String pk, boolean semicolon ) {\n    return \"\";\n  }\n\n  @Override\n  public String[] getUsedLibraries() {\n    return new String[] { JAR_FILE };\n  }\n\n  @Override\n  public int getDefaultDatabasePort() {\n    return 
DEFAULT_PORT;\n  }\n\n  @Override public Map<String, String> getDefaultOptions() {\n    HashMap<String, String> options = new HashMap<>();\n    options.put( String.format( \"%s.%s\", getPluginId(), SOCKET_TIMEOUT_OPTION ), \"10\" );\n\n    return options;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/apache/hadoop/hive/jdbc/HiveDriverTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.apache.hadoop.hive.jdbc;\n\nimport org.junit.Test;\nimport org.pentaho.big.data.kettle.plugins.hive.DummyDriver;\n\nimport static org.junit.Assert.assertTrue;\n\n/**\n * Created by bryan on 4/14/16.\n */\npublic class HiveDriverTest {\n  @Test\n  public void testSubclass() {\n    assertTrue( DummyDriver.class.isInstance( new HiveDriver() ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/apache/hive/jdbc/HiveDriverTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.apache.hive.jdbc;\n\nimport org.junit.Test;\nimport org.pentaho.big.data.kettle.plugins.hive.DummyDriver;\n\nimport static org.junit.Assert.assertTrue;\n\n/**\n * Created by bryan on 4/14/16.\n */\npublic class HiveDriverTest {\n  @Test\n  public void testIsInstance() {\n    assertTrue( DummyDriver.class.isInstance( new HiveDriver() ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/apache/hive/jdbc/HiveSimbaDriverTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.apache.hive.jdbc;\n\nimport org.junit.Test;\nimport org.pentaho.big.data.kettle.plugins.hive.DummyDriver;\n\nimport static org.junit.Assert.assertTrue;\n\n/**\n * Created by bryan on 4/14/16.\n */\npublic class HiveSimbaDriverTest {\n  @Test\n  public void testIsInstance() {\n    assertTrue( DummyDriver.class.isInstance( new HiveSimbaDriver() ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/apache/hive/jdbc/ImpalaDriverTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.apache.hive.jdbc;\n\nimport org.junit.Test;\nimport org.pentaho.big.data.kettle.plugins.hive.DummyDriver;\n\nimport static org.junit.Assert.assertTrue;\n\n/**\n * Created by bryan on 4/14/16.\n */\npublic class ImpalaDriverTest {\n  @Test\n  public void testIsInstance() {\n    assertTrue( DummyDriver.class.isInstance( new ImpalaDriver() ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/apache/hive/jdbc/ImpalaSimbaDriverTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.apache.hive.jdbc;\n\nimport org.junit.Test;\nimport org.pentaho.big.data.kettle.plugins.hive.DummyDriver;\n\nimport static org.junit.Assert.assertTrue;\n\n/**\n * Created by bryan on 4/14/16.\n */\npublic class ImpalaSimbaDriverTest {\n  @Test\n  public void testIsInstance() {\n    assertTrue( DummyDriver.class.isInstance( new ImpalaSimbaDriver() ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/pentaho/big/data/kettle/plugins/hive/BaseSimbaDatabaseMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mock;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleDatabaseException;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaBigNumber;\nimport org.pentaho.di.core.row.value.ValueMetaBoolean;\nimport org.pentaho.di.core.row.value.ValueMetaDate;\nimport org.pentaho.di.core.row.value.ValueMetaInteger;\nimport org.pentaho.di.core.row.value.ValueMetaInternetAddress;\nimport org.pentaho.di.core.row.value.ValueMetaNumber;\nimport org.pentaho.di.core.row.value.ValueMetaString;\nimport org.pentaho.di.core.row.value.ValueMetaTimestamp;\n\nimport java.net.MalformedURLException;\nimport java.net.URL;\nimport java.sql.Driver;\n\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.mockito.Mockito.when;\nimport static org.pentaho.big.data.kettle.plugins.hive.SimbaUrl.KRB_HOST_FQDN;\nimport static org.pentaho.big.data.kettle.plugins.hive.SimbaUrl.KRB_SERVICE_NAME;\n\n@RunWith( MockitoJUnitRunner.Silent.class )\npublic class BaseSimbaDatabaseMetaTest {\n  private static final String LOCALHOST 
= \"localhost\";\n  private static final String PORT = \"10000\";\n  private static final String DEFAULT = \"default\";\n  @Mock private DriverLocator driverLocator;\n  @Mock private Driver driver;\n  private BaseSimbaDatabaseMeta baseSimbaDatabaseMeta;\n\n  private String driverClassname = \"driverClassname\";\n  private String jdbcPrefix = \"jdbc:prefix://\";\n\n  @BeforeClass\n  public static void initLogs() {\n    KettleLogStore.init();\n  }\n\n  @Before\n  public void setup() throws Throwable {\n    baseSimbaDatabaseMeta = new BaseSimbaDatabaseMeta( driverLocator, null, null ) {\n      @Override protected String getJdbcPrefix() {\n        return jdbcPrefix;\n      }\n\n      @Override public String getDriverClass() {\n        return driverClassname;\n      }\n    };\n    String baseSimbaDatabaseMetaURL = baseSimbaDatabaseMeta.getURL( LOCALHOST, PORT, DEFAULT );\n    when( driverLocator.getDriver( baseSimbaDatabaseMetaURL ) ).thenReturn( driver );\n  }\n\n  @Test\n  public void testVersionConstructor() throws Throwable {\n    int majorVersion = 22;\n    int minorVersion = 33;\n    when( driver.getMajorVersion() ).thenReturn( majorVersion );\n    when( driver.getMinorVersion() ).thenReturn( minorVersion );\n    assertTrue( baseSimbaDatabaseMeta.isDriverVersion( majorVersion, minorVersion ) );\n    assertFalse( baseSimbaDatabaseMeta.isDriverVersion( majorVersion, minorVersion + 1 ) );\n    assertFalse( baseSimbaDatabaseMeta.isDriverVersion( majorVersion + 1, minorVersion ) );\n  }\n\n  @Test\n  public void testGetAccessTypeList() {\n    assertArrayEquals(\n      new int[] { DatabaseMeta.TYPE_ACCESS_NATIVE, DatabaseMeta.TYPE_ACCESS_JNDI },\n      baseSimbaDatabaseMeta.getAccessTypeList() );\n  }\n\n\n  @Test\n  public void testGetDriverClassOther() {\n    assertEquals( driverClassname, baseSimbaDatabaseMeta.getDriverClass() );\n  }\n\n  @Test\n  public void testGetUrlDefaults() throws KettleDatabaseException, MalformedURLException {\n    String testHost = 
\"testHost\";\n    String urlString = baseSimbaDatabaseMeta.getURL( testHost, \"\", \"\" );\n    assertTrue( urlString.startsWith( jdbcPrefix ) );\n    URL url = new URL( \"http://\" + urlString.substring( ImpalaSimbaDatabaseMeta.JDBC_URL_PREFIX.length() ) );\n    assertEquals( testHost, url.getHost() );\n    assertEquals( baseSimbaDatabaseMeta.getDefaultDatabasePort(), url.getPort() );\n    assertEquals( \"/default;AuthMech=0\", url.getPath() );\n  }\n\n  @Test\n  public void testGetUrlJndi() throws KettleDatabaseException {\n    baseSimbaDatabaseMeta.setAccessType( DatabaseMeta.TYPE_ACCESS_JNDI );\n    assertEquals( Hive2SimbaDatabaseMeta.URL_IS_CONFIGURED_THROUGH_JNDI, baseSimbaDatabaseMeta.getURL( \"\", \"\", \"\" ) );\n  }\n\n  @Test\n  public void testGetUrlKerb() throws Throwable {\n    String testHost = \"testHost\";\n    String testPort = \"1111\";\n    String testDb = \"testDb\";\n    // Regular properties\n    baseSimbaDatabaseMeta.getAttributes().put( KRB_HOST_FQDN, \"fqdn\" );\n    baseSimbaDatabaseMeta.getAttributes().put( KRB_SERVICE_NAME, \"service\" );\n    String urlString = baseSimbaDatabaseMeta.getURL( testHost, testPort, testDb );\n    assertTrue( urlString.startsWith( jdbcPrefix ) );\n    URL url = new URL( \"http://\" + urlString.substring( ImpalaSimbaDatabaseMeta.JDBC_URL_PREFIX.length() ) );\n    assertEquals( testHost, url.getHost() );\n    assertEquals( Integer.valueOf( testPort ).intValue(), url.getPort() );\n    assertEquals( \"/\" + testDb + \";AuthMech=1\", url.getPath() );\n\n    // Extra properties\n    baseSimbaDatabaseMeta = new ImpalaSimbaDatabaseMeta( driverLocator, null );\n    baseSimbaDatabaseMeta.getAttributes().put(\n      Hive2SimbaDatabaseMeta.ATTRIBUTE_PREFIX_EXTRA_OPTION + baseSimbaDatabaseMeta.getPluginId() + \".\"\n        + KRB_HOST_FQDN, \"fqdn\" );\n    baseSimbaDatabaseMeta.getAttributes()\n      .put( Hive2SimbaDatabaseMeta.ATTRIBUTE_PREFIX_EXTRA_OPTION + baseSimbaDatabaseMeta.getPluginId() + \".\"\n        + 
KRB_SERVICE_NAME, \"service\" );\n    urlString = baseSimbaDatabaseMeta.getURL( testHost, testPort, testDb );\n    assertTrue( urlString.startsWith( ImpalaSimbaDatabaseMeta.JDBC_URL_PREFIX ) );\n    url = new URL( \"http://\" + urlString.substring( ImpalaSimbaDatabaseMeta.JDBC_URL_PREFIX.length() ) );\n    assertEquals( testHost, url.getHost() );\n    assertEquals( Integer.valueOf( testPort ).intValue(), url.getPort() );\n    assertEquals( \"/\" + testDb + \";AuthMech=1\", url.getPath() );\n  }\n\n  @Test\n  public void testGetUrlUsername() throws KettleDatabaseException, MalformedURLException {\n    String testUsername = \"testUsername\";\n    baseSimbaDatabaseMeta.setUsername( testUsername );\n\n    String testHost = \"testHost\";\n    String testPort = \"1111\";\n    String testDb = \"testDb\";\n    String urlString = baseSimbaDatabaseMeta.getURL( testHost, testPort, testDb );\n    assertTrue( urlString.startsWith( jdbcPrefix ) );\n    URL url = new URL( \"http://\" + urlString.substring( ImpalaSimbaDatabaseMeta.JDBC_URL_PREFIX.length() ) );\n    assertEquals( testHost, url.getHost() );\n    assertEquals( Integer.valueOf( testPort ).intValue(), url.getPort() );\n    assertEquals( \"/\" + testDb + \";AuthMech=2;UID=\" + testUsername, url.getPath() );\n  }\n\n  @Test\n  public void testGetUrlPassword() throws KettleDatabaseException, MalformedURLException {\n    String testUsername = \"testUsername\";\n    String testPassword = \"testPassword\";\n    baseSimbaDatabaseMeta.setUsername( testUsername );\n    baseSimbaDatabaseMeta.setPassword( testPassword );\n\n    String testHost = \"testHost\";\n    String testPort = \"1111\";\n    String testDb = \"testDb\";\n    String urlString = baseSimbaDatabaseMeta.getURL( testHost, testPort, testDb );\n    assertTrue( urlString.startsWith( jdbcPrefix ) );\n    URL url = new URL( \"http://\" + urlString.substring( ImpalaSimbaDatabaseMeta.JDBC_URL_PREFIX.length() ) );\n    assertEquals( testHost, url.getHost() );\n    
assertEquals( Integer.valueOf( testPort ).intValue(), url.getPort() );\n    assertEquals(\n      \"/\" + testDb + \";AuthMech=3;UID=\" + testUsername + \";PWD=\" + testPassword,\n      url.getPath() );\n  }\n\n  @Test\n  public void testGetFieldDefinitionBoolean() {\n    assertGetFieldDefinition( new ValueMetaBoolean(), \"boolName\", \"BOOLEAN\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionDate() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 12 );\n    assertGetFieldDefinition( new ValueMetaDate(), \"dateName\", \"DATE\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionTimestamp() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 8 );\n    assertGetFieldDefinition( new ValueMetaTimestamp(), \"timestampName\", \"TIMESTAMP\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionStringVarchar() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 12 );\n    assertGetFieldDefinition( new ValueMetaString(), \"stringName\", \"VARCHAR\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionNumber() {\n    String numberName = \"numberName\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaNumber();\n    valueMetaInterface.setName( numberName );\n    valueMetaInterface.setPrecision( 0 );\n    valueMetaInterface.setLength( 9 );\n    assertGetFieldDefinition( valueMetaInterface, \"INT\" );\n\n    valueMetaInterface.setLength( 18 );\n    assertGetFieldDefinition( valueMetaInterface, \"BIGINT\" );\n\n    valueMetaInterface.setLength( 19 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setPrecision( 10 );\n    valueMetaInterface.setLength( 16 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setLength( 15 );\n    assertGetFieldDefinition( valueMetaInterface, \"DOUBLE\" );\n  }\n\n  @Test\n  
public void testGetFieldDefinitionInteger() {\n    String integerName = \"integerName\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaInteger();\n    valueMetaInterface.setName( integerName );\n    valueMetaInterface.setPrecision( 0 );\n    valueMetaInterface.setLength( 9 );\n    assertGetFieldDefinition( valueMetaInterface, \"INT\" );\n\n    valueMetaInterface.setLength( 18 );\n    assertGetFieldDefinition( valueMetaInterface, \"BIGINT\" );\n\n    valueMetaInterface.setLength( 19 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionBigNumber() {\n    String bigNumberName = \"bigNumberName\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaBigNumber();\n    valueMetaInterface.setName( bigNumberName );\n    valueMetaInterface.setPrecision( 0 );\n    valueMetaInterface.setLength( 9 );\n    assertGetFieldDefinition( valueMetaInterface, \"INT\" );\n\n    valueMetaInterface.setLength( 18 );\n    assertGetFieldDefinition( valueMetaInterface, \"BIGINT\" );\n\n    valueMetaInterface.setLength( 19 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setPrecision( 10 );\n    valueMetaInterface.setLength( 16 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setLength( 15 );\n    assertGetFieldDefinition( valueMetaInterface, \"DOUBLE\" );\n  }\n\n  @Test\n  public void testGetFieldDefinition() {\n    assertGetFieldDefinition( new ValueMetaInternetAddress(), \"internetAddressName\", \"\" );\n  }\n\n  private void assertGetFieldDefinition( ValueMetaInterface valueMetaInterface, String name, String expectedType ) {\n    valueMetaInterface = valueMetaInterface.clone();\n    valueMetaInterface.setName( name );\n    assertGetFieldDefinition( valueMetaInterface, expectedType );\n  }\n\n  private void assertGetFieldDefinition( ValueMetaInterface valueMetaInterface, String expectedType ) {\n    assertEquals( 
expectedType,\n      baseSimbaDatabaseMeta.getFieldDefinition( valueMetaInterface, null, null, false, false,\n        false ) );\n    assertEquals( valueMetaInterface.getName() + \" \" + expectedType,\n      baseSimbaDatabaseMeta.getFieldDefinition( valueMetaInterface, null, null, false, true, false ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/pentaho/big/data/kettle/plugins/hive/Hive2DatabaseDialectTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport junit.framework.Assert;\nimport org.junit.Test;\nimport org.pentaho.database.model.DatabaseAccessType;\nimport org.pentaho.database.model.DatabaseConnection;\nimport org.pentaho.database.model.IDatabaseType;\n\npublic class Hive2DatabaseDialectTest {\n\n  private Hive2DatabaseDialect dialect;\n\n  public Hive2DatabaseDialectTest() {\n    this.dialect = new Hive2DatabaseDialect();\n  }\n\n  @Test\n  public void testGetNativeDriver() {\n    Assert.assertEquals( dialect.getNativeDriver(), \"org.apache.hive.jdbc.HiveDriver\" );\n  }\n\n  @Test\n  public void testGetURL() throws Exception {\n    DatabaseConnection conn = new DatabaseConnection();\n    conn.setAccessType( DatabaseAccessType.NATIVE );\n    Assert.assertEquals( dialect.getURL( conn ), \"jdbc:hive2://null:null/null\" );\n  }\n\n  @Test\n  public void testGetUsedLibraries() {\n    Assert.assertEquals( dialect.getUsedLibraries()[0], \"pentaho-hadoop-hive-jdbc-shim-1.4-SNAPSHOT.jar\" );\n  }\n\n  @Test\n  public void testGetNativeJdbcPre() {\n    Assert.assertEquals( dialect.getNativeJdbcPre(), \"jdbc:hive2://\" );\n  }\n\n  @Test\n  public void testGetDatabaseType() {\n    IDatabaseType dbType = dialect.getDatabaseType();\n    Assert.assertEquals( dbType.getName(), \"Hadoop Hive 2\" );\n  }\n\n  @Test\n  public void testSupportsSchemas() {\n    Assert.assertFalse( dialect.supportsSchemas() );\n  }\n\n  @Test\n  public void testGetDefaultDatabasePort() {\n    Assert.assertEquals( dialect.getDefaultDatabasePort(), 10000 );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/pentaho/big/data/kettle/plugins/hive/Hive2DatabaseMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.ArgumentCaptor;\nimport org.mockito.Mock;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.di.core.exception.KettleDatabaseException;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaBigNumber;\nimport org.pentaho.di.core.row.value.ValueMetaBoolean;\nimport org.pentaho.di.core.row.value.ValueMetaDate;\nimport org.pentaho.di.core.row.value.ValueMetaInteger;\nimport org.pentaho.di.core.row.value.ValueMetaInternetAddress;\nimport org.pentaho.di.core.row.value.ValueMetaNumber;\nimport org.pentaho.di.core.row.value.ValueMetaString;\nimport org.pentaho.di.core.row.value.ValueMetaTimestamp;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\n\nimport java.net.MalformedURLException;\nimport java.net.URL;\nimport java.sql.Driver;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.Mockito.verify;\nimport static 
org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 4/14/16.\n */\n@RunWith( MockitoJUnitRunner.Silent.class )\npublic class Hive2DatabaseMetaTest {\n  public static final String LOCALHOST = \"localhost\";\n  public static final String PORT = \"10000\";\n  public static final String DEFAULT = \"default\";\n  @Mock DriverLocator driverLocator;\n  @Mock Driver driver;\n  @Mock NamedClusterService namedClusterService;\n  @Mock MetastoreLocator metastoreLocator;\n  @Mock IMetaStore iMetaStore;\n  Hive2DatabaseMeta hive2DatabaseMeta;\n  private String hive2DatabaseMetaURL;\n  private List<String> namedClusterList = Arrays.asList( new String[]{ \"cluster1\", \"cluster2\" } );\n  ArgumentCaptor<IMetaStore> iMetaStoreCaptor = ArgumentCaptor.forClass( IMetaStore.class );\n  private static String CLUSTER = \"cluster1\";\n  private static String PLUGIN_ID = \"hive2\";\n\n  @Before\n  public void setup() throws Throwable {\n    hive2DatabaseMeta = new Hive2DatabaseMeta( driverLocator, namedClusterService, metastoreLocator );\n    hive2DatabaseMetaURL = hive2DatabaseMeta.getURL( LOCALHOST, PORT, DEFAULT );\n    when( driverLocator.getDriver( hive2DatabaseMetaURL ) ).thenReturn( driver );\n    when( metastoreLocator.getMetastore() ).thenReturn( iMetaStore );\n    when( namedClusterService.listNames( any() ) ).thenReturn( namedClusterList );\n  }\n\n  @Test\n  public void testGetAccessTypeList() {\n    assertArrayEquals( Hive2DatabaseMeta.ACCESS_TYPE_LIST, hive2DatabaseMeta.getAccessTypeList() );\n  }\n\n  @Test\n  public void testGetUsedLibraries() {\n    assertArrayEquals( new String[] { Hive2DatabaseMeta.JAR_FILE }, hive2DatabaseMeta.getUsedLibraries() );\n  }\n\n  @Test\n  public void testGetDriverClass() {\n    assertEquals( Hive2DatabaseMeta.DRIVER_CLASS_NAME, hive2DatabaseMeta.getDriverClass() );\n  }\n\n  @Test\n  public void testGetAddColumnStatement() {\n    String testTable = \"testTable\";\n    String booleanCol = \"booleanCol\";\n    ValueMetaInterface 
valueMetaInterface = new ValueMetaBoolean();\n    valueMetaInterface.setName( booleanCol );\n    String addColumnStatement =\n      hive2DatabaseMeta.getAddColumnStatement( testTable, valueMetaInterface, null, false, null, false );\n    assertTrue( addColumnStatement.contains( \"BOOLEAN\" ) );\n    assertTrue( addColumnStatement.contains( testTable ) );\n    assertTrue( addColumnStatement.contains( booleanCol ) );\n  }\n\n  @Test\n  public void testGetFieldDefinitionBoolean() {\n    assertGetFieldDefinition( new ValueMetaBoolean(), \"boolName\", \"BOOLEAN\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionDate() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 12 );\n    assertGetFieldDefinition( new ValueMetaDate(), \"dateName\", \"DATE\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionDateUnsupported() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 11 );\n    assertGetFieldDefinition( new ValueMetaDate(), \"dateName\", \"DATE\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionTimestamp() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 8 );\n    assertGetFieldDefinition( new ValueMetaTimestamp(), \"timestampName\", \"TIMESTAMP\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionUnsupported() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 7 );\n    assertGetFieldDefinition( new ValueMetaTimestamp(), \"timestampName\", \"TIMESTAMP\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionString() {\n    assertGetFieldDefinition( new ValueMetaString(), \"stringName\", \"STRING\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionNumber() {\n    String numberName = \"numberName\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaNumber();\n    valueMetaInterface.setName( numberName );\n    
valueMetaInterface.setPrecision( 0 );\n    valueMetaInterface.setLength( 9 );\n    assertGetFieldDefinition( valueMetaInterface, \"INT\" );\n\n    valueMetaInterface.setLength( 18 );\n    assertGetFieldDefinition( valueMetaInterface, \"BIGINT\" );\n\n    valueMetaInterface.setLength( 19 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setPrecision( 10 );\n    valueMetaInterface.setLength( 16 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setLength( 15 );\n    assertGetFieldDefinition( valueMetaInterface, \"DOUBLE\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionInteger() {\n    String integerName = \"integerName\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaInteger();\n    valueMetaInterface.setName( integerName );\n    valueMetaInterface.setPrecision( 0 );\n    valueMetaInterface.setLength( 9 );\n    assertGetFieldDefinition( valueMetaInterface, \"INT\" );\n\n    valueMetaInterface.setLength( 18 );\n    assertGetFieldDefinition( valueMetaInterface, \"BIGINT\" );\n\n    valueMetaInterface.setLength( 19 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionBigNumber() {\n    String bigNumberName = \"bigNumberName\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaBigNumber();\n    valueMetaInterface.setName( bigNumberName );\n    valueMetaInterface.setPrecision( 0 );\n    valueMetaInterface.setLength( 9 );\n    assertGetFieldDefinition( valueMetaInterface, \"INT\" );\n\n    valueMetaInterface.setLength( 18 );\n    assertGetFieldDefinition( valueMetaInterface, \"BIGINT\" );\n\n    valueMetaInterface.setLength( 19 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setPrecision( 10 );\n    valueMetaInterface.setLength( 16 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setLength( 15 );\n    
assertGetFieldDefinition( valueMetaInterface, \"DOUBLE\" );\n  }\n\n  @Test\n  public void testGetFieldDefinition() {\n    assertGetFieldDefinition( new ValueMetaInternetAddress(), \"internetAddressName\", \"\" );\n  }\n\n  @Test\n  public void testGetModifyColumnStatement() {\n    String testTable = \"testTable\";\n    String booleanCol = \"booleanCol\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaBoolean();\n    valueMetaInterface.setName( booleanCol );\n    String addColumnStatement = hive2DatabaseMeta.getModifyColumnStatement( testTable, valueMetaInterface, null, false,\n      null, false );\n    assertTrue( addColumnStatement.contains( \"BOOLEAN\" ) );\n    assertTrue( addColumnStatement.contains( testTable ) );\n    assertTrue( addColumnStatement.contains( booleanCol ) );\n  }\n\n  @Test\n  public void testGetURL() throws KettleDatabaseException, MalformedURLException {\n    String testHostname = \"testHostname\";\n    int port = 9429;\n    String testDbName = \"testDbName\";\n    String urlString = hive2DatabaseMeta.getURL( testHostname, \"\" + port, testDbName );\n    assertTrue( urlString.startsWith( Hive2DatabaseMeta.URL_PREFIX ) );\n    // Use known prefix\n    urlString = \"http://\" + urlString.substring( Hive2DatabaseMeta.URL_PREFIX.length() );\n    URL url = new URL( urlString );\n    assertEquals( testHostname, url.getHost() );\n    assertEquals( port, url.getPort() );\n    assertEquals( \"/\" + testDbName, url.getPath() );\n  }\n\n  @Test\n  public void testGetSelectCountStatement() {\n    String testTable = \"testTable\";\n    assertEquals( Hive2DatabaseMeta.SELECT_COUNT_1_FROM + testTable,\n      hive2DatabaseMeta.getSelectCountStatement( testTable ) );\n  }\n\n  @Test\n  public void testGenerateColumnAlias5AndPrior() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 5 );\n    String suggestedName = \"suggestedName\";\n    int columnIndex = 12;\n    assertEquals( 
suggestedName, hive2DatabaseMeta.generateColumnAlias( columnIndex, suggestedName ) );\n  }\n\n  @Test\n  public void testGenerateColumnAlias6AndLater() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 6 );\n    String suggestedName = \"suggestedName\";\n    int columnIndex = 12;\n    assertEquals( suggestedName, hive2DatabaseMeta.generateColumnAlias( columnIndex, suggestedName ) );\n  }\n\n  @Test\n  public void testIsDriverVersionNull() {\n    assertTrue( hive2DatabaseMeta.isDriverVersion( -1, -1 ) );\n  }\n\n  @Test\n  public void testIsDriverVersionMajorGreater() {\n    when( driver.getMajorVersion() ).thenReturn( 6 );\n    when( driver.getMinorVersion() ).thenReturn( 0 );\n    assertTrue( hive2DatabaseMeta.isDriverVersion( 5, 5 ) );\n  }\n\n  @Test\n  public void testIsDriverVersionMajorSameMinorEqual() {\n    when( driver.getMajorVersion() ).thenReturn( 5 );\n    when( driver.getMinorVersion() ).thenReturn( 5 );\n    assertTrue( hive2DatabaseMeta.isDriverVersion( 5, 5 ) );\n  }\n\n  @Test\n  public void testIsDriverVersionMajorSameMinorLess() {\n    when( driver.getMajorVersion() ).thenReturn( 5 );\n    when( driver.getMinorVersion() ).thenReturn( 4 );\n    assertFalse( hive2DatabaseMeta.isDriverVersion( 5, 5 ) );\n  }\n\n  @Test\n  public void testIsDriverVersionMajorLess() {\n    when( driver.getMajorVersion() ).thenReturn( 4 );\n    when( driver.getMinorVersion() ).thenReturn( 6 );\n    assertFalse( hive2DatabaseMeta.isDriverVersion( 5, 5 ) );\n  }\n\n  @Test\n  public void testGetStartQuote() {\n    assertEquals( 0, hive2DatabaseMeta.getStartQuote().length() );\n  }\n\n  @Test\n  public void testGetEndQuote() {\n    assertEquals( 0, hive2DatabaseMeta.getEndQuote().length() );\n  }\n\n  @Test\n  public void testGetTableTypesReturnsNull() {\n    assertNull( hive2DatabaseMeta.getTableTypes() );\n  }\n\n  @Test\n  public void testGetViewTypes() {\n    assertArrayEquals( new String[] { 
Hive2DatabaseMeta.VIEW, Hive2DatabaseMeta.VIRTUAL_VIEW },\n      hive2DatabaseMeta.getViewTypes() );\n  }\n\n  @Test\n  public void testGetTruncateTableStatement10OrPrior() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 10 );\n    String testTableName = \"testTableName\";\n    assertEquals( Hive2DatabaseMeta.TRUNCATE_TABLE + testTableName,\n      hive2DatabaseMeta.getTruncateTableStatement( testTableName ) );\n  }\n\n  @Test\n  public void testGetTruncateTableStatement11AndLater() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 11 );\n    String testTableName = \"testTableName\";\n    assertEquals( Hive2DatabaseMeta.TRUNCATE_TABLE + testTableName,\n      hive2DatabaseMeta.getTruncateTableStatement( testTableName ) );\n  }\n\n  @Test\n  public void testSupportsSetCharacterStream() {\n    assertFalse( hive2DatabaseMeta.supportsSetCharacterStream() );\n  }\n\n  @Test\n  public void testSupportsBatchUpdates() {\n    assertFalse( hive2DatabaseMeta.supportsBatchUpdates() );\n  }\n\n  @Test\n  public void testSupportsTimeStampToDateConversion() {\n    assertFalse( hive2DatabaseMeta.supportsTimeStampToDateConversion() );\n  }\n\n  @Test\n  public void testGetNamedClusterList() throws Exception {\n    assertEquals( namedClusterList, hive2DatabaseMeta.getNamedClusterList() );\n    verify( namedClusterService ).listNames( iMetaStoreCaptor.capture() );\n  }\n\n  @Test\n  public void testPutOptionalOptions() {\n    hive2DatabaseMeta.setNamedCluster( CLUSTER );\n    hive2DatabaseMeta.setPluginId( PLUGIN_ID );\n    Map<String, String> extraOptions = new HashMap<String, String>();\n    hive2DatabaseMeta.putOptionalOptions( extraOptions );\n    String value = extraOptions.get( PLUGIN_ID + \".pentahoNamedCluster\" );\n    assertEquals( CLUSTER, value );\n  }\n\n  private void assertGetFieldDefinition( ValueMetaInterface valueMetaInterface, String name, String 
expectedType ) {\n    valueMetaInterface = valueMetaInterface.clone();\n    valueMetaInterface.setName( name );\n    assertGetFieldDefinition( valueMetaInterface, expectedType );\n  }\n\n  private void assertGetFieldDefinition( ValueMetaInterface valueMetaInterface, String expectedType ) {\n    assertEquals( expectedType, hive2DatabaseMeta.getFieldDefinition( valueMetaInterface, null, null, false, false,\n      false ) );\n    assertEquals( valueMetaInterface.getName() + \" \" + expectedType,\n      hive2DatabaseMeta.getFieldDefinition( valueMetaInterface, null, null, false, true, false ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/pentaho/big/data/kettle/plugins/hive/Hive2SimbaDatabaseDialectTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport junit.framework.Assert;\nimport org.junit.Test;\nimport org.pentaho.database.DatabaseDialectException;\nimport org.pentaho.database.model.DatabaseAccessType;\nimport org.pentaho.database.model.DatabaseConnection;\nimport org.pentaho.database.model.IDatabaseType;\n\nimport static org.hamcrest.CoreMatchers.is;\nimport static org.junit.Assert.*;\n\npublic class Hive2SimbaDatabaseDialectTest {\n\n  private Hive2SimbaDatabaseDialect dialect;\n\n  public Hive2SimbaDatabaseDialectTest() {\n    this.dialect = new Hive2SimbaDatabaseDialect();\n  }\n\n  @Test\n  public void testGetNativeDriver() {\n    Assert.assertEquals( dialect.getNativeDriver(), \"org.apache.hive.jdbc.HiveSimbaDriver\" );\n  }\n\n  @Test\n  public void testGetURLNative() throws Exception {\n    DatabaseConnection conn = new DatabaseConnection();\n    conn.setAccessType( DatabaseAccessType.NATIVE );\n    conn.setUsername( \"joe\" );\n    assertThat( dialect.getURL( conn ), is( \"jdbc:hive2://null:10000/default;AuthMech=2;UID=joe\" ) );\n  }\n\n  @Test\n  public void testGetURLJndi() throws DatabaseDialectException {\n    DatabaseConnection conn = new DatabaseConnection();\n    conn.setAccessType( DatabaseAccessType.JNDI );\n    assertThat( dialect.getURL( conn ),\n      is( SimbaUrl.URL_IS_CONFIGURED_THROUGH_JNDI ) );\n  }\n\n  @Test\n  public void testGetUsedLibraries() {\n    assertEquals( dialect.getUsedLibraries()[0], \"HiveJDBC41.jar\" );\n  }\n\n  @Test\n  public void testGetNativeJdbcPre() {\n    Assert.assertEquals( 
\"jdbc:hive2://\", dialect.getNativeJdbcPre() );\n  }\n\n  @Test\n  public void testGetDatabaseType() {\n    IDatabaseType dbType = dialect.getDatabaseType();\n    assertThat( dbType.getName(), is( \"Hadoop Hive 2 (Simba)\" ) );\n  }\n\n  @Test\n  public void testGetReservedWords() {\n    assertFalse( dialect.getReservedWords().length > 0 );\n  }\n\n  @Test\n  public void testSupportsBitmapIndex() {\n    assertTrue( dialect.supportsBitmapIndex() );\n  }\n\n  @Test\n  public void testGetTruncateTableStatement() {\n    String tableName = \"table1\";\n    assertEquals( \"TRUNCATE TABLE \" + tableName, dialect.getTruncateTableStatement( tableName ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/pentaho/big/data/kettle/plugins/hive/Hive2SimbaDatabaseMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.InjectMocks;\nimport org.mockito.Mock;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\n\nimport java.sql.Driver;\nimport java.util.Arrays;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.when;\n\n@RunWith( MockitoJUnitRunner.Silent.class )\npublic class Hive2SimbaDatabaseMetaTest {\n  public static final String LOCALHOST = \"localhost\";\n  public static final String PORT = \"10000\";\n  public static final String DEFAULT = \"default\";\n  @Mock DriverLocator driverLocator;\n  @Mock Driver driver;\n  @InjectMocks Hive2SimbaDatabaseMeta hive2SimbaDatabaseMeta;\n  private String hive2SimbaDatabaseMetaURL;\n\n  @BeforeClass\n  public static void initLogs() {\n    KettleLogStore.init();\n  }\n\n  @Before\n  public void setup() throws Throwable {\n    hive2SimbaDatabaseMetaURL = hive2SimbaDatabaseMeta.getURL( LOCALHOST, PORT, DEFAULT );\n    when( driverLocator.getDriver( hive2SimbaDatabaseMetaURL ) ).thenReturn( driver );\n  }\n\n  @Test\n  public void testGetDriverClassOther() {\n    assertEquals( Hive2SimbaDatabaseMeta.DRIVER_CLASS_NAME, hive2SimbaDatabaseMeta.getDriverClass() );\n  }\n\n  @Test\n  public void testGetJdbcPrefix() {\n    
assertEquals( Hive2SimbaDatabaseMeta.JDBC_URL_PREFIX,\n      hive2SimbaDatabaseMeta.getJdbcPrefix() );\n  }\n\n  @Test\n  public void testGetUsedLibraries() {\n    assertTrue( Arrays.equals(\n      hive2SimbaDatabaseMeta.getUsedLibraries(),\n      new String[] { hive2SimbaDatabaseMeta.JAR_FILE } ) );\n  }\n\n  @Test\n  public void testGetDefaultDatabasePort() {\n    assertEquals( Hive2SimbaDatabaseMeta.DEFAULT_PORT,\n      hive2SimbaDatabaseMeta.getDefaultDatabasePort() );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/pentaho/big/data/kettle/plugins/hive/HiveDatabaseDialectTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport junit.framework.Assert;\nimport org.junit.Test;\nimport org.pentaho.database.model.DatabaseAccessType;\nimport org.pentaho.database.model.DatabaseConnection;\nimport org.pentaho.database.model.IDatabaseType;\n\npublic class HiveDatabaseDialectTest {\n\n  private HiveDatabaseDialect dialect;\n\n  public HiveDatabaseDialectTest() {\n    this.dialect = new HiveDatabaseDialect();\n  }\n\n  @Test\n  public void testGetNativeDriver() {\n    Assert.assertEquals( dialect.getNativeDriver(), \"org.apache.hadoop.hive.jdbc.HiveDriver\" );\n  }\n\n  @Test\n  public void testGetURL() throws Exception {\n    DatabaseConnection conn = new DatabaseConnection();\n    conn.setAccessType( DatabaseAccessType.NATIVE );\n    Assert.assertEquals( dialect.getURL( conn ), \"jdbc:hive://null:null/null\" );\n  }\n\n  @Test\n  public void testGetUsedLibraries() {\n    Assert.assertEquals( dialect.getUsedLibraries()[0], \"pentaho-hadoop-hive-jdbc-shim-1.4-SNAPSHOT.jar\" );\n  }\n\n  @Test\n  public void testGetNativeJdbcPre() {\n    Assert.assertEquals( dialect.getNativeJdbcPre(), \"jdbc:hive://\" );\n  }\n\n  @Test\n  public void testGetDatabaseType() {\n    IDatabaseType dbType = dialect.getDatabaseType();\n    Assert.assertEquals( dbType.getName(), \"Hadoop Hive (deprecated)\" );\n  }\n\n  @Test\n  public void testSupportsSchemas() {\n    Assert.assertFalse( dialect.supportsSchemas() );\n  }\n\n  @Test\n  public void testGetDefaultDatabasePort() {\n    Assert.assertEquals( dialect.getDefaultDatabasePort(), 10000 );\n  
}\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/pentaho/big/data/kettle/plugins/hive/HiveDatabaseMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mock;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\nimport org.pentaho.di.core.exception.KettleDatabaseException;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaBigNumber;\nimport org.pentaho.di.core.row.value.ValueMetaBoolean;\nimport org.pentaho.di.core.row.value.ValueMetaDate;\nimport org.pentaho.di.core.row.value.ValueMetaInteger;\nimport org.pentaho.di.core.row.value.ValueMetaInternetAddress;\nimport org.pentaho.di.core.row.value.ValueMetaNumber;\nimport org.pentaho.di.core.row.value.ValueMetaString;\nimport org.pentaho.di.core.row.value.ValueMetaTimestamp;\n\nimport java.net.MalformedURLException;\nimport java.net.URL;\nimport java.sql.Driver;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.when;\n\n@RunWith( MockitoJUnitRunner.class )\npublic class HiveDatabaseMetaTest {\n  public static final String LOCALHOST = \"localhost\";\n  public static final String PORT = \"10000\";\n  public static final String DEFAULT = \"default\";\n  @Mock DriverLocator driverLocator;\n  @Mock Driver driver;\n  private HiveDatabaseMeta hiveDatabaseMeta;\n  private String 
hiveDatabaseMetaURL;\n\n  @Before\n  public void setup() throws Throwable {\n    hiveDatabaseMeta = new HiveDatabaseMeta( driverLocator );\n    hiveDatabaseMetaURL = hiveDatabaseMeta.getURL( LOCALHOST, PORT, DEFAULT );\n    when( driverLocator.getDriver( hiveDatabaseMetaURL ) ).thenReturn( driver );\n  }\n\n  @Test\n  public void testColumnAlias_060_And_Later() throws Throwable {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 6 );\n\n    String alias = hiveDatabaseMeta.generateColumnAlias( 0, \"alias\" );\n    assertEquals( \"alias\", alias );\n\n    alias = hiveDatabaseMeta.generateColumnAlias( 1, \"alias1\" );\n    assertEquals( \"alias1\", alias );\n\n    alias = hiveDatabaseMeta.generateColumnAlias( 2, \"alias2\" );\n    assertEquals( \"alias2\", alias );\n  }\n\n  @Test\n  public void testColumnAlias_050() throws Throwable {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 5 );\n\n    String alias = hiveDatabaseMeta.generateColumnAlias( 0, \"alias\" );\n    assertEquals( \"_col0\", alias );\n\n    alias = hiveDatabaseMeta.generateColumnAlias( 1, \"alias1\" );\n    assertEquals( \"_col1\", alias );\n\n    alias = hiveDatabaseMeta.generateColumnAlias( 2, \"alias2\" );\n    assertEquals( \"_col2\", alias );\n  }\n\n  @Test\n  public void testGetAddColumnStatement() {\n    String testTable = \"testTable\";\n    String booleanCol = \"booleanCol\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaBoolean();\n    valueMetaInterface.setName( booleanCol );\n    String addColumnStatement =\n      hiveDatabaseMeta.getAddColumnStatement( testTable, valueMetaInterface, null, false, null, false );\n    assertTrue( addColumnStatement.contains( \"BOOLEAN\" ) );\n    assertTrue( addColumnStatement.contains( testTable ) );\n    assertTrue( addColumnStatement.contains( booleanCol ) );\n  }\n\n  @Test\n  public void testGetDriverClass() {\n    assertEquals( 
HiveDatabaseMeta.DRIVER_CLASS_NAME, hiveDatabaseMeta.getDriverClass() );\n  }\n\n  @Test\n  public void testGetFieldDefinitionBoolean() {\n    assertGetFieldDefinition( new ValueMetaBoolean(), \"boolName\", \"BOOLEAN\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionDate() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 12 );\n    assertGetFieldDefinition( new ValueMetaDate(), \"dateName\", \"DATE\" );\n  }\n\n  @Test( expected = IllegalArgumentException.class )\n  public void testGetFieldDefinitionDateUnsupported() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 11 );\n    assertGetFieldDefinition( new ValueMetaDate(), \"dateName\", \"DATE\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionTimestamp() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 8 );\n    assertGetFieldDefinition( new ValueMetaTimestamp(), \"timestampName\", \"TIMESTAMP\" );\n  }\n\n  @Test( expected = IllegalArgumentException.class )\n  public void testGetFieldDefinitionUnsupported() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 7 );\n    assertGetFieldDefinition( new ValueMetaTimestamp(), \"timestampName\", \"TIMESTAMP\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionString() {\n    assertGetFieldDefinition( new ValueMetaString(), \"stringName\", \"STRING\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionNumber() {\n    String numberName = \"numberName\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaNumber();\n    valueMetaInterface.setName( numberName );\n    valueMetaInterface.setPrecision( 0 );\n    valueMetaInterface.setLength( 9 );\n    assertGetFieldDefinition( valueMetaInterface, \"INT\" );\n\n    valueMetaInterface.setLength( 18 );\n    assertGetFieldDefinition( valueMetaInterface, \"BIGINT\" );\n\n    
valueMetaInterface.setLength( 19 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setPrecision( 10 );\n    valueMetaInterface.setLength( 16 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setLength( 15 );\n    assertGetFieldDefinition( valueMetaInterface, \"DOUBLE\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionInteger() {\n    String integerName = \"integerName\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaInteger();\n    valueMetaInterface.setName( integerName );\n    valueMetaInterface.setPrecision( 0 );\n    valueMetaInterface.setLength( 9 );\n    assertGetFieldDefinition( valueMetaInterface, \"INT\" );\n\n    valueMetaInterface.setLength( 18 );\n    assertGetFieldDefinition( valueMetaInterface, \"BIGINT\" );\n\n    valueMetaInterface.setLength( 19 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionBigNumber() {\n    String bigNumberName = \"bigNumberName\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaBigNumber();\n    valueMetaInterface.setName( bigNumberName );\n    valueMetaInterface.setPrecision( 0 );\n    valueMetaInterface.setLength( 9 );\n    assertGetFieldDefinition( valueMetaInterface, \"INT\" );\n\n    valueMetaInterface.setLength( 18 );\n    assertGetFieldDefinition( valueMetaInterface, \"BIGINT\" );\n\n    valueMetaInterface.setLength( 19 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setPrecision( 10 );\n    valueMetaInterface.setLength( 16 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setLength( 15 );\n    assertGetFieldDefinition( valueMetaInterface, \"DOUBLE\" );\n  }\n\n  @Test\n  public void testGetFieldDefinition() {\n    assertGetFieldDefinition( new ValueMetaInternetAddress(), \"internetAddressName\", \"\" );\n  }\n\n  @Test\n  public void 
testGetModifyColumnStatement() {\n    String testTable = \"testTable\";\n    String booleanCol = \"booleanCol\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaBoolean();\n    valueMetaInterface.setName( booleanCol );\n    String addColumnStatement = hiveDatabaseMeta.getModifyColumnStatement( testTable, valueMetaInterface, null, false,\n      null, false );\n    assertTrue( addColumnStatement.contains( \"BOOLEAN\" ) );\n    assertTrue( addColumnStatement.contains( testTable ) );\n    assertTrue( addColumnStatement.contains( booleanCol ) );\n  }\n\n  @Test\n  public void testGetURL() throws KettleDatabaseException, MalformedURLException {\n    String testHostname = \"testHostname\";\n    int port = 9429;\n    String testDbName = \"testDbName\";\n    String urlString = hiveDatabaseMeta.getURL( testHostname, \"\" + port, testDbName );\n    assertTrue( urlString.startsWith( HiveDatabaseMeta.URL_PREFIX ) );\n    // Use known prefix\n    urlString = \"http://\" + urlString.substring( HiveDatabaseMeta.URL_PREFIX.length() );\n    URL url = new URL( urlString );\n    assertEquals( testHostname, url.getHost() );\n    assertEquals( port, url.getPort() );\n    assertEquals( \"/\" + testDbName, url.getPath() );\n  }\n\n  @Test\n  public void testGetURLEmptyPort() throws KettleDatabaseException, MalformedURLException {\n    String testHostname = \"testHostname\";\n    String testDbName = \"testDbName\";\n    String urlString = hiveDatabaseMeta.getURL( testHostname, \"\", testDbName );\n    assertTrue( urlString.startsWith( HiveDatabaseMeta.URL_PREFIX ) );\n    // Use known prefix\n    urlString = \"http://\" + urlString.substring( HiveDatabaseMeta.URL_PREFIX.length() );\n    URL url = new URL( urlString );\n    assertEquals( testHostname, url.getHost() );\n    assertEquals( hiveDatabaseMeta.getDefaultDatabasePort(), url.getPort() );\n    assertEquals( \"/\" + testDbName, url.getPath() );\n  }\n\n  @Test\n  public void testGetUsedLibraries() {\n    assertArrayEquals( 
new String[] { HiveDatabaseMeta.JAR_FILE }, hiveDatabaseMeta.getUsedLibraries() );\n  }\n\n  @Test\n  public void testGetSelectCountStatement() {\n    String tableName = \"tableName\";\n    assertEquals( HiveDatabaseMeta.SELECT_COUNT_1_FROM + tableName,\n      hiveDatabaseMeta.getSelectCountStatement( tableName ) );\n  }\n\n  @Test\n  public void testIsDriverVersionNull() {\n    assertTrue( hiveDatabaseMeta.isDriverVersion( -1, -1 ) );\n  }\n\n  @Test\n  public void testIsDriverVersionMajorGreater() {\n    when( driver.getMajorVersion() ).thenReturn( 6 );\n    when( driver.getMinorVersion() ).thenReturn( 0 );\n    assertTrue( hiveDatabaseMeta.isDriverVersion( 5, 5 ) );\n  }\n\n  @Test\n  public void testIsDriverVersionMajorSameMinorEqual() {\n    when( driver.getMajorVersion() ).thenReturn( 5 );\n    when( driver.getMinorVersion() ).thenReturn( 5 );\n    assertTrue( hiveDatabaseMeta.isDriverVersion( 5, 5 ) );\n  }\n\n  @Test\n  public void testIsDriverVersionMajorSameMinorLess() {\n    when( driver.getMajorVersion() ).thenReturn( 5 );\n    when( driver.getMinorVersion() ).thenReturn( 4 );\n    assertFalse( hiveDatabaseMeta.isDriverVersion( 5, 5 ) );\n  }\n\n  @Test\n  public void testIsDriverVersionMajorLess() {\n    when( driver.getMajorVersion() ).thenReturn( 4 );\n    when( driver.getMinorVersion() ).thenReturn( 6 );\n    assertFalse( hiveDatabaseMeta.isDriverVersion( 5, 5 ) );\n  }\n\n  @Test\n  public void testGetStartQuote() {\n    assertEquals( 0, hiveDatabaseMeta.getStartQuote().length() );\n  }\n\n  @Test\n  public void testGetEndQuote() {\n    assertEquals( 0, hiveDatabaseMeta.getEndQuote().length() );\n  }\n\n  @Test\n  public void testGetDefaultDatabasePort() {\n    assertEquals( HiveDatabaseMeta.DEFAULT_PORT, hiveDatabaseMeta.getDefaultDatabasePort() );\n  }\n\n  @Test\n  public void testGetTableTypes() {\n    assertNull( hiveDatabaseMeta.getTableTypes() );\n  }\n\n  @Test\n  public void testGetViewTypes() {\n    assertArrayEquals( new String[] { 
HiveDatabaseMeta.VIEW, HiveDatabaseMeta.VIRTUAL_VIEW },\n      hiveDatabaseMeta.getViewTypes() );\n  }\n\n  @Test\n  public void testGetTruncateTableStatement10OrPrior() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 10 );\n    assertNull( hiveDatabaseMeta.getTruncateTableStatement( \"testTableName\" ) );\n  }\n\n  @Test\n  public void testGetTruncateTableStatement11AndLater() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 11 );\n    String testTableName = \"testTableName\";\n    assertEquals( HiveDatabaseMeta.TRUNCATE_TABLE + testTableName,\n      hiveDatabaseMeta.getTruncateTableStatement( testTableName ) );\n  }\n\n  @Test\n  public void testSupportsSetCharacterStream() {\n    assertFalse( hiveDatabaseMeta.supportsSetCharacterStream() );\n  }\n\n  @Test\n  public void testSupportsBatchUpdates() {\n    assertFalse( hiveDatabaseMeta.supportsBatchUpdates() );\n  }\n\n  @Test\n  public void testSupportsTimeStampToDateConversion() {\n    assertFalse( hiveDatabaseMeta.supportsTimeStampToDateConversion() );\n  }\n\n  private void assertGetFieldDefinition( ValueMetaInterface valueMetaInterface, String name, String expectedType ) {\n    valueMetaInterface = valueMetaInterface.clone();\n    valueMetaInterface.setName( name );\n    assertGetFieldDefinition( valueMetaInterface, expectedType );\n  }\n\n  private void assertGetFieldDefinition( ValueMetaInterface valueMetaInterface, String expectedType ) {\n    assertEquals( expectedType, hiveDatabaseMeta.getFieldDefinition( valueMetaInterface, null, null, false, false,\n      false ) );\n    assertEquals( valueMetaInterface.getName() + \" \" + expectedType,\n      hiveDatabaseMeta.getFieldDefinition( valueMetaInterface, null, null, false, true, false ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/pentaho/big/data/kettle/plugins/hive/ImpalaDatabaseDialectTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.junit.Test;\nimport org.pentaho.database.model.DatabaseConnection;\n\nimport static org.junit.Assert.assertEquals;\n\n/**\n * User: Dzmitry Stsiapanau Date: 10/4/14 Time: 10:55 PM\n */\npublic class ImpalaDatabaseDialectTest {\n\n  @Test\n  public void testGetURL() throws Exception {\n    ImpalaDatabaseDialect impala = new ImpalaDatabaseDialect();\n    DatabaseConnection dbconn = new DatabaseConnection();\n    String url = impala.getURL( dbconn );\n    assertEquals( \"noauth url\", \"jdbc:hive2://null:null/null;impala_db=true;auth=noSasl\", url );\n    dbconn.addExtraOption( impala.getDatabaseType().getShortName(), \"principal\", \"someValue\" );\n    url = impala.getURL( dbconn );\n    assertEquals( \"principal url\", \"jdbc:hive2://null:null/null;impala_db=true\", url );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/pentaho/big/data/kettle/plugins/hive/ImpalaDatabaseMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.ArgumentCaptor;\nimport org.mockito.Mock;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleDatabaseException;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.*;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.locator.api.MetastoreLocator;\n\nimport java.net.MalformedURLException;\nimport java.net.URL;\nimport java.sql.Driver;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\nimport static org.junit.Assert.*;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 10/20/15.\n */\n@RunWith( MockitoJUnitRunner.class )\npublic class ImpalaDatabaseMetaTest {\n  public static final String LOCALHOST = \"localhost\";\n  public static final String PORT = \"10000\";\n  public static final String DEFAULT = \"default\";\n  @Mock DriverLocator driverLocator;\n  @Mock Driver driver;\n  @Mock NamedClusterService namedClusterService;\n  @Mock MetastoreLocator metastoreLocator;\n  @Mock IMetaStore iMetaStore;\n  private ImpalaDatabaseMeta impalaDatabaseMeta;\n  private String 
impalaDatabaseMetaURL;\n  private List<String> namedClusterList = Arrays.asList( new String[]{ \"cluster1\", \"cluster2\" } );\n  ArgumentCaptor<IMetaStore> iMetaStoreCaptor = ArgumentCaptor.forClass( IMetaStore.class );\n  private static String CLUSTER = \"cluster1\";\n  private static String PLUGIN_ID = \"impala\";\n\n  @Before\n  public void setup() throws Throwable {\n    impalaDatabaseMeta = new ImpalaDatabaseMeta( driverLocator, namedClusterService, metastoreLocator );\n    impalaDatabaseMetaURL = impalaDatabaseMeta.getURL( LOCALHOST, PORT, DEFAULT );\n    when( driverLocator.getDriver( impalaDatabaseMetaURL ) ).thenReturn( driver );\n    when( metastoreLocator.getMetastore() ).thenReturn( iMetaStore );\n    when( namedClusterService.listNames( any() ) ).thenReturn( namedClusterList );\n  }\n\n  @Test\n  public void testVersionConstructor() throws Throwable {\n    int majorVersion = 22;\n    int minorVersion = 33;\n    when( driver.getMajorVersion() ).thenReturn( majorVersion );\n    when( driver.getMinorVersion() ).thenReturn( minorVersion );\n    assertTrue( impalaDatabaseMeta.isDriverVersion( majorVersion, minorVersion ) );\n    assertFalse( impalaDatabaseMeta.isDriverVersion( majorVersion, minorVersion + 1 ) );\n    assertFalse( impalaDatabaseMeta.isDriverVersion( majorVersion + 1, minorVersion ) );\n  }\n\n  @Test\n  public void testGetAccessTypeList() {\n    assertArrayEquals( new int[] { DatabaseMeta.TYPE_ACCESS_NATIVE }, impalaDatabaseMeta.getAccessTypeList() );\n  }\n\n  @Test\n  public void testGetDriverClass() {\n    assertEquals( ImpalaDatabaseMeta.DRIVER_CLASS_NAME, impalaDatabaseMeta.getDriverClass() );\n  }\n\n  @Test\n  public void testGetFieldDefinitionBoolean() {\n    assertGetFieldDefinition( new ValueMetaBoolean(), \"boolName\", \"BOOLEAN\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionDate() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 8 );\n    
assertGetFieldDefinition( new ValueMetaDate(), \"dateName\", \"TIMESTAMP\" );\n  }\n\n  @Test( expected = IllegalArgumentException.class )\n  public void testGetFieldDefinitionDateUnsupported() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 7 );\n    assertGetFieldDefinition( new ValueMetaDate(), \"dateName\", \"TIMESTAMP\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionTimestamp() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 8 );\n    assertGetFieldDefinition( new ValueMetaTimestamp(), \"timestampName\", \"TIMESTAMP\" );\n  }\n\n  @Test( expected = IllegalArgumentException.class )\n  public void testGetFieldDefinitionUnsupported() {\n    when( driver.getMajorVersion() ).thenReturn( 0 );\n    when( driver.getMinorVersion() ).thenReturn( 7 );\n    assertGetFieldDefinition( new ValueMetaTimestamp(), \"timestampName\", \"TIMESTAMP\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionString() {\n    assertGetFieldDefinition( new ValueMetaString(), \"stringName\", \"STRING\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionNumber() {\n    String numberName = \"numberName\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaNumber();\n    valueMetaInterface.setName( numberName );\n    valueMetaInterface.setPrecision( 0 );\n    valueMetaInterface.setLength( 9 );\n    assertGetFieldDefinition( valueMetaInterface, \"INT\" );\n\n    valueMetaInterface.setLength( 18 );\n    assertGetFieldDefinition( valueMetaInterface, \"BIGINT\" );\n\n    valueMetaInterface.setLength( 19 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setPrecision( 10 );\n    valueMetaInterface.setLength( 16 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setLength( 15 );\n    assertGetFieldDefinition( valueMetaInterface, \"DOUBLE\" );\n  }\n\n  @Test\n  public void 
testGetFieldDefinitionInteger() {\n    String integerName = \"integerName\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaInteger();\n    valueMetaInterface.setName( integerName );\n    valueMetaInterface.setPrecision( 0 );\n    valueMetaInterface.setLength( 9 );\n    assertGetFieldDefinition( valueMetaInterface, \"INT\" );\n\n    valueMetaInterface.setLength( 18 );\n    assertGetFieldDefinition( valueMetaInterface, \"BIGINT\" );\n\n    valueMetaInterface.setLength( 19 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n  }\n\n  @Test\n  public void testGetFieldDefinitionBigNumber() {\n    String bigNumberName = \"bigNumberName\";\n    ValueMetaInterface valueMetaInterface = new ValueMetaBigNumber();\n    valueMetaInterface.setName( bigNumberName );\n    valueMetaInterface.setPrecision( 0 );\n    valueMetaInterface.setLength( 9 );\n    assertGetFieldDefinition( valueMetaInterface, \"INT\" );\n\n    valueMetaInterface.setLength( 18 );\n    assertGetFieldDefinition( valueMetaInterface, \"BIGINT\" );\n\n    valueMetaInterface.setLength( 19 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setPrecision( 10 );\n    valueMetaInterface.setLength( 16 );\n    assertGetFieldDefinition( valueMetaInterface, \"FLOAT\" );\n\n    valueMetaInterface.setLength( 15 );\n    assertGetFieldDefinition( valueMetaInterface, \"DOUBLE\" );\n  }\n\n  @Test\n  public void testGetFieldDefinition() {\n    assertGetFieldDefinition( new ValueMetaInternetAddress(), \"internetAddressName\", \"\" );\n  }\n\n  @Test\n  public void testGetURL() throws KettleDatabaseException, MalformedURLException {\n    String testHostname = \"testHostname\";\n    int port = 9429;\n    String testDbName = \"testDbName\";\n    String urlString = impalaDatabaseMeta.getURL( testHostname, \"\" + port, testDbName );\n    assertTrue( urlString.startsWith( ImpalaDatabaseMeta.URL_PREFIX ) );\n    // Use known prefix\n    urlString = \"http://\" + 
urlString.substring( ImpalaDatabaseMeta.URL_PREFIX.length() );\n    URL url = new URL( urlString );\n    assertEquals( testHostname, url.getHost() );\n    assertEquals( port, url.getPort() );\n    assertEquals( \"/\" + testDbName + ImpalaDatabaseMeta.AUTH_NO_SASL + \";impala_db=true\", url.getPath() );\n  }\n\n  @Test\n  public void testGetURLPrincipal() throws KettleDatabaseException, MalformedURLException {\n    String testHostname = \"testHostname\";\n    int port = 9429;\n    String testDbName = \"testDbName\";\n    impalaDatabaseMeta.getAttributes().put( \"principal\", \"testP\" );\n    String urlString = impalaDatabaseMeta.getURL( testHostname, \"\" + port, testDbName );\n    assertTrue( urlString.startsWith( ImpalaDatabaseMeta.URL_PREFIX ) );\n    // Use known prefix\n    urlString = \"http://\" + urlString.substring( ImpalaDatabaseMeta.URL_PREFIX.length() );\n    URL url = new URL( urlString );\n    assertEquals( testHostname, url.getHost() );\n    assertEquals( port, url.getPort() );\n    assertEquals( \"/\" + testDbName + \";impala_db=true\", url.getPath() );\n\n    impalaDatabaseMeta.getAttributes().remove( \"principal\" );\n    impalaDatabaseMeta.getAttributes()\n      .put( ImpalaDatabaseMeta.ATTRIBUTE_PREFIX_EXTRA_OPTION + impalaDatabaseMeta.getPluginId() + \".principal\",\n        \"testP\" );\n    urlString = impalaDatabaseMeta.getURL( testHostname, \"\" + port, testDbName );\n    assertTrue( urlString.startsWith( ImpalaDatabaseMeta.URL_PREFIX ) );\n    // Use known prefix\n    urlString = \"http://\" + urlString.substring( ImpalaDatabaseMeta.URL_PREFIX.length() );\n    url = new URL( urlString );\n    assertEquals( testHostname, url.getHost() );\n    assertEquals( port, url.getPort() );\n    assertEquals( \"/\" + testDbName + \";impala_db=true\", url.getPath() );\n  }\n\n  @Test\n  public void testGetURLEmptyPort() throws KettleDatabaseException, MalformedURLException {\n    String testHostname = \"testHostname\";\n    String testDbName = 
\"testDbName\";\n    String urlString = impalaDatabaseMeta.getURL( testHostname, \"\", testDbName );\n    assertTrue( urlString.startsWith( ImpalaDatabaseMeta.URL_PREFIX ) );\n    // Use known prefix\n    urlString = \"http://\" + urlString.substring( ImpalaDatabaseMeta.URL_PREFIX.length() );\n    URL url = new URL( urlString );\n    assertEquals( testHostname, url.getHost() );\n    assertEquals( impalaDatabaseMeta.getDefaultDatabasePort(), url.getPort() );\n    assertEquals( \"/\" + testDbName + ImpalaDatabaseMeta.AUTH_NO_SASL + \";impala_db=true\", url.getPath() );\n  }\n\n  @Test\n  public void testGetUsedLibraries() {\n    assertArrayEquals( new String[] { ImpalaDatabaseMeta.JAR_FILE }, impalaDatabaseMeta.getUsedLibraries() );\n  }\n\n  @Test\n  public void testGetDefaultDatabasePort() {\n    assertEquals( ImpalaDatabaseMeta.DEFAULT_PORT, impalaDatabaseMeta.getDefaultDatabasePort() );\n  }\n\n  private void assertGetFieldDefinition( ValueMetaInterface valueMetaInterface, String name, String expectedType ) {\n    valueMetaInterface = valueMetaInterface.clone();\n    valueMetaInterface.setName( name );\n    assertGetFieldDefinition( valueMetaInterface, expectedType );\n  }\n\n  private void assertGetFieldDefinition( ValueMetaInterface valueMetaInterface, String expectedType ) {\n    assertEquals( expectedType, impalaDatabaseMeta.getFieldDefinition( valueMetaInterface, null, null, false, false,\n      false ) );\n    assertEquals( valueMetaInterface.getName() + \" \" + expectedType,\n      impalaDatabaseMeta.getFieldDefinition( valueMetaInterface, null, null, false, true, false ) );\n  }\n\n  @Test\n  public void testGetNamedClusterList() throws Exception {\n    assertEquals( namedClusterList, impalaDatabaseMeta.getNamedClusterList() );\n    verify( namedClusterService ).listNames( iMetaStoreCaptor.capture() );\n  }\n\n  @Test\n  public void testPutOptionalOptions() {\n    impalaDatabaseMeta.setNamedCluster( CLUSTER );\n    impalaDatabaseMeta.setPluginId( 
PLUGIN_ID );\n    Map<String, String> extraOptions = new HashMap<String, String>();\n    impalaDatabaseMeta.putOptionalOptions( extraOptions );\n    String value = extraOptions.get( PLUGIN_ID + \".pentahoNamedCluster\" );\n    assertEquals( CLUSTER, value );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/pentaho/big/data/kettle/plugins/hive/ImpalaSimbaDatabaseDialectTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport com.google.common.base.Joiner;\nimport java.util.Map;\nimport org.hamcrest.collection.IsMapContaining;\nimport org.hamcrest.collection.IsMapWithSize;\nimport org.junit.Test;\nimport org.pentaho.database.DatabaseDialectException;\nimport org.pentaho.database.model.DatabaseAccessType;\nimport org.pentaho.database.model.DatabaseConnection;\n\nimport static org.hamcrest.CoreMatchers.is;\nimport static org.junit.Assert.*;\n\npublic class ImpalaSimbaDatabaseDialectTest {\n  private ImpalaSimbaDatabaseDialect dialect = new ImpalaSimbaDatabaseDialect();\n\n  @Test\n  public void testGetUrlNative() throws DatabaseDialectException {\n    DatabaseConnection conn = new DatabaseConnection();\n    conn.setAccessType( DatabaseAccessType.NATIVE );\n    conn.setUsername( \"jack\" );\n    conn.setHostname( \"hostname\" );\n    assertThat( dialect.getURL( conn ), is( \"jdbc:impala://hostname:21050/default;AuthMech=2;UID=jack\" ) );\n  }\n\n  @Test\n  public void testDefaultSocketTimeout() {\n    Map<String, String> options = dialect.getDatabaseType().getDefaultOptions();\n    assertThat( options, IsMapWithSize.aMapWithSize( 1 ) );\n    assertThat( options, IsMapContaining.hasEntry( Joiner.on( \".\" ).join( ImpalaSimbaDatabaseDialect.DB_TYPE_NAME_SHORT,\n        Hive2SimbaDatabaseDialect.SOCKET_TIMEOUT_OPTION ), Hive2SimbaDatabaseDialect.DEFAULT_SOCKET_TIMEOUT ) );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/pentaho/big/data/kettle/plugins/hive/ImpalaSimbaDatabaseMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport java.sql.Driver;\nimport java.util.Arrays;\nimport java.util.Map;\n\nimport org.hamcrest.collection.IsMapContaining;\nimport org.hamcrest.collection.IsMapWithSize;\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.InjectMocks;\nimport org.mockito.Mock;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\nimport org.pentaho.di.core.database.DatabaseMeta;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertThat;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 10/21/15.\n */\n@RunWith( MockitoJUnitRunner.Silent.class )\npublic class ImpalaSimbaDatabaseMetaTest {\n  public static final String LOCALHOST = \"localhost\";\n  public static final String PORT = \"10000\";\n  public static final String DEFAULT = \"default\";\n  @Mock DriverLocator driverLocator;\n  @Mock Driver driver;\n  @InjectMocks ImpalaSimbaDatabaseMeta impalaSimbaDatabaseMeta;\n  private String impalaSimbaDatabaseMetaURL;\n\n  @BeforeClass\n  public static void initLogs() {\n    KettleLogStore.init();\n  }\n\n  @Before\n  public void setup() throws Throwable {\n    impalaSimbaDatabaseMetaURL = impalaSimbaDatabaseMeta.getURL( LOCALHOST, PORT, DEFAULT );\n    when( driverLocator.getDriver( 
impalaSimbaDatabaseMetaURL ) ).thenReturn( driver );\n  }\n\n  @Test\n  public void testGetAccessTypeList() {\n    assertArrayEquals(\n      new int[] { DatabaseMeta.TYPE_ACCESS_NATIVE, DatabaseMeta.TYPE_ACCESS_JNDI },\n      impalaSimbaDatabaseMeta.getAccessTypeList() );\n  }\n\n  @Test\n  public void testGetDriverClassOther() {\n    assertEquals( ImpalaSimbaDatabaseMeta.DRIVER_CLASS_NAME, impalaSimbaDatabaseMeta.getDriverClass() );\n  }\n\n  @Test\n  public void testGetJdbcPrefix() {\n    assertEquals( ImpalaSimbaDatabaseMeta.JDBC_URL_PREFIX,\n      impalaSimbaDatabaseMeta.getJdbcPrefix() );\n  }\n\n  @Test\n  public void testGetUsedLibraries() {\n    assertTrue( Arrays.equals(\n      impalaSimbaDatabaseMeta.getUsedLibraries(),\n      new String[] { impalaSimbaDatabaseMeta.JAR_FILE } ) );\n  }\n\n  @Test\n  public void testGetDefaultDatabasePort() {\n    assertEquals( ImpalaSimbaDatabaseMeta.DEFAULT_PORT,\n      impalaSimbaDatabaseMeta.getDefaultDatabasePort() );\n  }\n\n  @Test\n  public void testGetDefaultOptions() {\n    Map<String, String> options = impalaSimbaDatabaseMeta.getDefaultOptions();\n    assertThat( options, IsMapWithSize.aMapWithSize( 1 ) );\n    assertThat( options, IsMapContaining.hasEntry( impalaSimbaDatabaseMeta.getPluginId() + \".\"\n      + SparkSimbaDatabaseMeta.SOCKET_TIMEOUT_OPTION, \"10\" ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/pentaho/big/data/kettle/plugins/hive/SimbaUrlTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport org.junit.Test;\n\nimport static org.hamcrest.CoreMatchers.containsString;\nimport static org.hamcrest.CoreMatchers.is;\nimport static org.junit.Assert.*;\nimport static org.pentaho.di.core.database.DatabaseMeta.TYPE_ACCESS_JNDI;\nimport static org.pentaho.di.core.database.DatabaseMeta.TYPE_ACCESS_NATIVE;\n\npublic class SimbaUrlTest {\n\n  SimbaUrl.Builder builder = SimbaUrl.Builder.create();\n\n  @Test\n  public void testWithDefaultPort() {\n    assertThat(\n      builder\n        .withPort( \"\" )\n        .withDefaultPort( 101010 )\n        .withJdbcPrefix( \"foo:bar://\" )\n        .withHostname( \"localhost\" )\n        .build().getURL(),\n      containsString( \"foo:bar://localhost:101010/default\" ) );\n  }\n\n  @Test\n  public void testWithSetPort() {\n    assertThat(\n      builder\n        .withPort( \"\" )\n        .withDefaultPort( 101010 )\n        .withPort( \"202020\" )\n        .withJdbcPrefix( \"foo:bar://\" )\n        .withHostname( \"localhost\" )\n        .build().getURL(),\n      containsString( \"foo:bar://localhost:202020/default\" ) );\n  }\n\n  @Test\n  public void testWithDatabaseName() {\n    assertThat(\n      builder\n        .withPort( \"\" )\n        .withDefaultPort( 101010 )\n        .withPort( \"202020\" )\n        .withDatabaseName( \"mydatabase\" )\n        .withJdbcPrefix( \"foo:bar://\" )\n        .withHostname( \"localhost\" )\n        .build().getURL(),\n      containsString( \"foo:bar://localhost:202020/mydatabase\" ) );\n  }\n\n  @Test\n  public void 
testJndi() {\n    assertThat(\n      builder\n        .withAccessType( TYPE_ACCESS_JNDI )\n        .build().getURL(),\n      is( SimbaUrl.URL_IS_CONFIGURED_THROUGH_JNDI ) );\n  }\n\n  @Test\n  public void testAuthMech0() {\n    assertThat(\n      builder\n        .withAccessType( TYPE_ACCESS_NATIVE )\n        .withIsKerberos( false )\n        .withPort( \"202020\" )\n        .withDatabaseName( \"mydatabase\" )\n        .withJdbcPrefix( \"foo:bar://\" )\n        .withHostname( \"localhost\" )\n        .build().getURL(),\n      is( \"foo:bar://localhost:202020/mydatabase;AuthMech=0\" ) );\n  }\n\n  @Test\n  public void testAuthMech1() {\n    assertThat(\n      builder\n        .withAccessType( TYPE_ACCESS_NATIVE )\n        .withIsKerberos( false )\n        .withPort( \"202020\" )\n        .withIsKerberos( true )\n        .withDatabaseName( \"mydatabase\" )\n        .withJdbcPrefix( \"foo:bar://\" )\n        .withHostname( \"localhost\" )\n        .build().getURL(),\n      is( \"foo:bar://localhost:202020/mydatabase;AuthMech=1\" ) );\n  }\n\n  @Test\n  public void testAuthMech2() {\n    assertThat(\n      builder\n        .withAccessType( TYPE_ACCESS_NATIVE )\n        .withIsKerberos( false )\n        .withPort( \"202020\" )\n        .withUsername( \"user\" )\n        .withDatabaseName( \"mydatabase\" )\n        .withJdbcPrefix( \"foo:bar://\" )\n        .withHostname( \"localhost\" )\n        .build().getURL(),\n      is( \"foo:bar://localhost:202020/mydatabase;AuthMech=2;UID=user\" ) );\n  }\n\n  @Test\n  public void testAuthMech3() {\n    assertThat(\n      builder\n        .withAccessType( TYPE_ACCESS_NATIVE )\n        .withIsKerberos( false )\n        .withPort( \"202020\" )\n        .withUsername( \"user\" )\n        .withPassword( \"password\" )\n        .withDatabaseName( \"mydatabase\" )\n        .withJdbcPrefix( \"foo:bar://\" )\n        .withHostname( \"localhost\" )\n        .build().getURL(),\n      is( 
\"foo:bar://localhost:202020/mydatabase;AuthMech=3;UID=user;PWD=password\" ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/pentaho/big/data/kettle/plugins/hive/SparkSimbaDatabaseDialectTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport com.google.common.base.Joiner;\nimport java.util.Map;\nimport org.hamcrest.collection.IsMapContaining;\nimport org.hamcrest.collection.IsMapWithSize;\nimport org.junit.Test;\n\nimport static org.hamcrest.CoreMatchers.is;\nimport static org.junit.Assert.assertThat;\n\npublic class SparkSimbaDatabaseDialectTest {\n\n  SparkSimbaDatabaseDialect dialect = new SparkSimbaDatabaseDialect();\n\n  @Test\n  public void getDatabaseType() throws Exception {\n    assertThat( dialect.getDatabaseType(), is( SparkSimbaDatabaseDialect.DBTYPE ) );\n  }\n\n  @Test\n  public void getNativeDriver() throws Exception {\n    assertThat( dialect.getNativeDriver(), is( SparkSimbaDatabaseMeta.DRIVER_CLASS_NAME ) );\n  }\n\n  @Test\n  public void getNativeJdbcPre() throws Exception {\n    assertThat( dialect.getNativeJdbcPre(), is( \"jdbc:spark://\" ) );\n  }\n\n  @Test\n  public void getDefaultDatabasePort() throws Exception {\n    assertThat( dialect.getDefaultDatabasePort(), is( 10015 ) );\n  }\n\n  @Test\n  public void getUsedLibraries() throws Exception {\n    assertThat( dialect.getUsedLibraries(), is( new String[] { SparkSimbaDatabaseMeta.JAR_FILE } ) );\n  }\n\n  @Test\n  public void testDefaultSocketTimeout() {\n    Map<String, String> options = dialect.getDatabaseType().getDefaultOptions();\n    assertThat( options, IsMapWithSize.aMapWithSize( 1 ) );\n    assertThat( options, IsMapContaining.hasEntry( Joiner.on( \".\" ).join( SparkSimbaDatabaseDialect.DB_TYPE_NAME_SHORT,\n        
Hive2SimbaDatabaseDialect.SOCKET_TIMEOUT_OPTION ), Hive2SimbaDatabaseDialect.DEFAULT_SOCKET_TIMEOUT ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/core/src/test/java/org/pentaho/big/data/kettle/plugins/hive/SparkSimbaDatabaseMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.hive;\n\nimport java.sql.Driver;\nimport java.util.Arrays;\nimport java.util.Map;\nimport org.hamcrest.collection.IsMapContaining;\nimport org.hamcrest.collection.IsMapWithSize;\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Rule;\nimport org.junit.Test;\nimport org.junit.rules.ExpectedException;\nimport org.junit.runner.RunWith;\nimport org.mockito.InjectMocks;\nimport org.mockito.Mock;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.hadoop.shim.api.jdbc.DriverLocator;\nimport org.pentaho.di.core.database.DatabaseMeta;\n\nimport static org.hamcrest.core.Is.is;\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertThat;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\n@RunWith( MockitoJUnitRunner.Silent.class )\npublic class SparkSimbaDatabaseMetaTest {\n  public static final String LOCALHOST = \"localhost\";\n  public static final String PORT = \"10000\";\n  public static final String DEFAULT = \"default\";\n  @Mock DriverLocator driverLocator;\n  @Mock Driver driver;\n  @InjectMocks private SparkSimbaDatabaseMeta sparkSimbaDatabaseMeta;\n  @Rule public final ExpectedException exception = ExpectedException.none();\n\n  private String sparkSimbaDatabaseMetaURL;\n  private static final String DB_NAME = \"dbName\";\n\n  @BeforeClass\n  public static void 
initLogs() {\n    KettleLogStore.init();\n  }\n\n  @Before\n  public void setup() throws Throwable {\n    sparkSimbaDatabaseMetaURL = sparkSimbaDatabaseMeta.getURL( LOCALHOST, PORT, DEFAULT );\n    when( driverLocator.getDriver( sparkSimbaDatabaseMetaURL ) ).thenReturn( driver );\n    sparkSimbaDatabaseMeta.setDatabaseName( DB_NAME );\n  }\n\n  @Test\n  public void testGetAccessTypeList() {\n    assertArrayEquals(\n      new int[] { DatabaseMeta.TYPE_ACCESS_NATIVE, DatabaseMeta.TYPE_ACCESS_JNDI },\n      sparkSimbaDatabaseMeta.getAccessTypeList() );\n  }\n\n  @Test\n  public void testGetJdbcPrefix() {\n    assertEquals( SparkSimbaDatabaseMeta.JDBC_URL_PREFIX,\n      sparkSimbaDatabaseMeta.getJdbcPrefix() );\n  }\n\n  @Test\n  public void testGetUsedLibraries() {\n    assertTrue( Arrays.equals(\n      sparkSimbaDatabaseMeta.getUsedLibraries(),\n      new String[] { sparkSimbaDatabaseMeta.JAR_FILE } ) );\n  }\n\n  @Test\n  public void testGetDefaultDatabasePort() {\n    assertEquals( SparkSimbaDatabaseMeta.DEFAULT_PORT,\n      sparkSimbaDatabaseMeta.getDefaultDatabasePort() );\n  }\n\n  @Test\n  public void testQuoting() {\n    assertEquals( \"`\", sparkSimbaDatabaseMeta.getStartQuote() );\n    assertEquals( \"`\", sparkSimbaDatabaseMeta.getEndQuote() );\n  }\n\n  @Test\n  public void testGeneratedSQLContainsSchemaReferenceWhenTableUnqualified() {\n    verifyExpectedSql( null, \"foo\" );\n  }\n\n  @Test\n  public void testGeneratedSQLContainsSchemaReferenceWhenTableQualified() {\n    verifyExpectedSql( DB_NAME, \"foo\" );\n  }\n\n  private void verifyExpectedSql( String schemaName, String tableName ) {\n    String expectedTableName = schemaName == null ? 
tableName\n      : schemaName + \".\" + tableName;\n    assertThat( sparkSimbaDatabaseMeta.getSQLTableExists( expectedTableName ),\n      is( \"SELECT 1 FROM \" + expectedTableName + \" LIMIT 1\" ) );\n    assertThat( sparkSimbaDatabaseMeta.getTruncateTableStatement( expectedTableName ),\n      is( \"TRUNCATE TABLE \" + expectedTableName ) );\n    assertThat( sparkSimbaDatabaseMeta.getSQLColumnExists( \"column\", expectedTableName ),\n      is( \"SELECT column FROM \" + expectedTableName  + \" LIMIT 1\" ) );\n    assertThat( sparkSimbaDatabaseMeta.getSQLQueryFields( expectedTableName ),\n      is( \"SELECT * FROM \" + expectedTableName + \" LIMIT 1\" ) );\n    assertThat( sparkSimbaDatabaseMeta.getSelectCountStatement( expectedTableName ),\n      is( SparkSimbaDatabaseMeta.SELECT_COUNT_STATEMENT + \" \" + expectedTableName ) );\n  }\n\n\n  @Test\n  public void testUnsupportedDrop() {\n    assertThat(\n      sparkSimbaDatabaseMeta.getDropColumnStatement( \"tab\", null, \"tk\", false, \"pk\", false ),\n      is( \"\" ) );\n  }\n\n  @Test\n  public void testUnsupportedAddCol() {\n    assertThat(\n      sparkSimbaDatabaseMeta.getAddColumnStatement( \"tab\", null, \"tk\", false, \"pk\", false ),\n      is( \"\" ) );\n  }\n\n  @Test\n  public void testUnsupportedModCol() {\n    assertThat(\n      sparkSimbaDatabaseMeta.getModifyColumnStatement( \"tab\", null, \"tk\", false, \"pk\", false ),\n      is( \"\" ) );\n  }\n\n  @Test\n  public void testGetDriverClass() {\n    assertThat( sparkSimbaDatabaseMeta.getDriverClass(),\n      is( SparkSimbaDatabaseMeta.DRIVER_CLASS_NAME ) );\n  }\n\n  @Test\n  public void testGetDefaultOptions() {\n    SparkSimbaDatabaseMeta meta = mock( SparkSimbaDatabaseMeta.class );\n    when( meta.getPluginId() ).thenReturn( \"SPARKSIMBA\" );\n    when( meta.getDefaultOptions() ).thenCallRealMethod();\n\n    Map<String, String> options = meta.getDefaultOptions();\n    assertThat( options, IsMapWithSize.aMapWithSize( 1 ) );\n    assertThat( options, 
IsMapContaining.hasEntry( meta.getPluginId() + \".\"\n      + SparkSimbaDatabaseMeta.SOCKET_TIMEOUT_OPTION, \"10\" ) );\n  }\n\n  @Test\n  public void testLimit() {\n    assertThat(\n      sparkSimbaDatabaseMeta.getLimitClause( 100 ), is( \" LIMIT 100\" ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/hive/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pentaho-big-data-kettle-plugins-hive</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n\n  <modules>\n    <module>assemblies</module>\n    <module>core</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/mapreduce/assemblies/plugin/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>mapreduce-assemblies</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pdi-mapreduce-plugin</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Mapreduce Plugin Distribution</name>\n\n  <properties>\n    <resources.directory>${project.basedir}/src/main/resources</resources.directory>\n    <assembly.dir>${project.build.directory}/assembly</assembly.dir>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-mapreduce-core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/mapreduce/assemblies/plugin/src/assembly/assembly.xml",
    "content": "<assembly xmlns=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3\"\n          xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n          xsi:schemaLocation=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd\">\n  <id>zip</id>\n  <formats>\n    <format>zip</format>\n  </formats>\n\n  <baseDirectory></baseDirectory>\n\n  <fileSets>\n    <fileSet>\n      <directory>${resources.directory}</directory>\n      <outputDirectory>.</outputDirectory>\n      <filtered>true</filtered>\n    </fileSet>\n\n    <!-- the staging dir -->\n    <fileSet>\n      <directory>${assembly.dir}</directory>\n      <outputDirectory>.</outputDirectory>\n    </fileSet>\n  </fileSets>\n\n  <dependencySets>\n    <dependencySet>\n      <outputDirectory>.</outputDirectory>\n      <includes>\n        <include>pentaho:pdi-mapreduce-core:jar</include>\n      </includes>\n      <useProjectArtifact>false</useProjectArtifact>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <outputDirectory>.</outputDirectory>\n      <useTransitiveDependencies>false</useTransitiveDependencies>\n      <useProjectArtifact>false</useProjectArtifact>\n      <includes>\n        <include>pentaho:pdi-mapreduce-core:jar</include>\n      </includes>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <useProjectArtifact>false</useProjectArtifact>\n      <outputDirectory>lib</outputDirectory>\n      <excludes>\n        <exclude>pentaho:pdi-mapreduce-core:*</exclude>\n      </excludes>\n    </dependencySet>\n  </dependencySets>\n</assembly>"
  },
  {
    "path": "kettle-plugins/mapreduce/assemblies/plugin/src/main/resources/version.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<version branch='TRUNK'>${project.version}</version>"
  },
  {
    "path": "kettle-plugins/mapreduce/assemblies/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-mapreduce</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>mapreduce-assemblies</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Mapreduce Plugin Assemblies</name>\n\n  <modules>\n    <module>plugin</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-mapreduce</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pdi-mapreduce-core</artifactId>\n  <name>PDI Mapreduce Core</name>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n    <easymock.version>3.0</easymock.version>\n  </properties>\n\n  <!-- VERIFY THESE IMPORTS THAT WERE IN THE BUILD SECTION WHEN THE PLUGIN WAS OSGI. ARE THEY NEEDED?\n   <Import-Package>org.eclipse.swt*;resolution:=optional,org.pentaho.di.ui.xul*;resolution:=optional,org.pentaho.ui.xul*;resolution:=optional,org.pentaho.di.osgi,org.pentaho.di.core.plugins,org.pentaho.hadoop.shim.api.cluster,*</Import-Package>\n   -->\n  <build>\n    <resources>\n      <resource>\n        <directory>src/main/resources</directory>\n        <filtering>false</filtering>\n      </resource>\n      <resource>\n        <directory>src/main/resources-filtered</directory>\n        <filtering>true</filtering>\n      </resource>\n    </resources>\n  </build>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n\n      <scope>provided</scope>\n\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-common-ui</artifactId>\n      <version>${project.version}</version>\n\n      <scope>provided</scope>\n\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    
</dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-ui-swt</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.dom4j</groupId>\n      <artifactId>dom4j</artifactId>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy-core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy</artifactId>\n      <version>${project.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.easymock</groupId>\n      <artifactId>easymock</artifactId>\n      <version>${easymock.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    
<dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>pentaho-hadoop-shims-common-services-api</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-cluster</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>stax</groupId>\n      <artifactId>stax</artifactId>\n      <version>1.2.0</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/DialogClassUtil.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce;\n\n/**\n * Created by bryan on 1/12/16.\n */\npublic class DialogClassUtil {\n  private static final String PKG_NAME = DialogClassUtil.class.getPackage().getName();\n  private static final String UI_PKG_NAME = PKG_NAME + \".ui\";\n\n  public static String getDialogClassName( Class<?> clazz ) {\n    String className = clazz.getCanonicalName().replace( PKG_NAME, UI_PKG_NAME );\n    if ( className.endsWith( \"Meta\" ) ) {\n      className = className.substring( 0, className.length() - 4 );\n    }\n\n    className = className + \"Dialog\";\n    return className;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/entry/NamedClusterLoadSaveUtil.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.entry;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.w3c.dom.Node;\n\n/**\n * Created by bryan on 1/6/16.\n */\npublic class NamedClusterLoadSaveUtil {\n  public static final String CLUSTER_NAME = \"cluster_name\";\n  public static final String HDFS_HOSTNAME = \"hdfs_hostname\";\n  public static final String HDFS_PORT = \"hdfs_port\";\n  public static final String JOB_TRACKER_HOSTNAME = \"job_tracker_hostname\";\n  public static final String JOB_TRACKER_PORT = \"job_tracker_port\";\n\n  public void saveNamedClusterRep( NamedCluster namedCluster, NamedClusterService namedClusterService, Repository rep,\n      IMetaStore metaStore, ObjectId id_job, ObjectId objectId, LogChannelInterface logChannelInterface )\n        throws KettleException {\n    if ( namedCluster != null ) {\n      String namedClusterName = namedCluster.getName();\n      if ( !Const.isEmpty( namedClusterName ) ) {\n        rep.saveJobEntryAttribute( id_job, objectId, CLUSTER_NAME, namedClusterName ); // $NON-NLS-1$\n 
     }\n      try {\n        if ( !StringUtils.isEmpty( namedClusterName ) && namedClusterService.contains( namedClusterName, metaStore ) ) {\n          // pull config from NamedCluster\n          namedCluster = namedClusterService.read( namedClusterName, metaStore );\n        }\n      } catch ( MetaStoreException e ) {\n        logChannelInterface.logDebug( e.getMessage(), e );\n      }\n      rep.saveJobEntryAttribute( id_job, objectId, HDFS_HOSTNAME, namedCluster.getHdfsHost() ); // $NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, objectId, HDFS_PORT, namedCluster.getHdfsPort() ); // $NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, objectId, JOB_TRACKER_HOSTNAME, namedCluster.getJobTrackerHost() ); // $NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, objectId, JOB_TRACKER_PORT, namedCluster.getJobTrackerPort() ); // $NON-NLS-1$\n    }\n  }\n\n  public void getXmlNamedCluster( NamedCluster namedCluster, NamedClusterService namedClusterService, IMetaStore metaStore,\n      LogChannelInterface logChannelInterface, StringBuilder retval ) {\n    if ( namedCluster != null ) {\n      String namedClusterName = namedCluster.getName();\n      if ( !Const.isEmpty( namedClusterName ) ) {\n        retval.append( \"      \" ).append( XMLHandler.addTagValue( CLUSTER_NAME, namedClusterName ) ); // $NON-NLS-1$\n                                                                                                      // //$NON-NLS-2$\n      }\n      try {\n        if ( metaStore != null && !StringUtils.isEmpty( namedClusterName ) && namedClusterService\n          .contains( namedClusterName, metaStore ) ) {\n          // pull config from NamedCluster\n          namedCluster = namedClusterService.read( namedClusterName, metaStore );\n        }\n      } catch ( MetaStoreException e ) {\n        logChannelInterface.logDebug( e.getMessage(), e );\n      }\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( HDFS_HOSTNAME, namedCluster.getHdfsHost() ) ); // 
$NON-NLS-1$\n                                                                                                               // //$NON-NLS-2$\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( HDFS_PORT, namedCluster.getHdfsPort() ) ); // $NON-NLS-1$\n                                                                                                           // //$NON-NLS-2$\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( JOB_TRACKER_HOSTNAME, namedCluster\n          .getJobTrackerHost() ) ); // $NON-NLS-1$ //$NON-NLS-2$\n      retval.append( \"      \" ).append( XMLHandler.addTagValue( JOB_TRACKER_PORT, namedCluster.getJobTrackerPort() ) ); // $NON-NLS-1$\n                                                                                                                        // //$NON-NLS-2$\n    }\n  }\n\n  public NamedCluster loadClusterConfig( NamedClusterService namedClusterService, ObjectId id_jobentry, Repository rep,\n                                         IMetaStore metaStore, Node entrynode,\n                                         LogChannelInterface logChannelInterface ) {\n    boolean configLoaded = false;\n    try {\n      String clusterName = null;\n      // attempt to load from named cluster\n      if ( entrynode != null ) {\n        clusterName = XMLHandler.getTagValue( entrynode, CLUSTER_NAME ); //$NON-NLS-1$\n      } else if ( rep != null ) {\n        clusterName = rep.getJobEntryAttributeString( id_jobentry, CLUSTER_NAME ); //$NON-NLS-1$ //$NON-NLS-2$\n      }\n      // load from system first, then fall back to copy stored with job (AbstractMeta)\n      NamedCluster nc = null;\n      if ( !StringUtils.isEmpty( clusterName ) && namedClusterService.contains( clusterName, metaStore ) ) {\n        // pull config from NamedCluster\n        nc = namedClusterService.read( clusterName, metaStore );\n      }\n      if ( nc != null ) {\n        return nc;\n      }\n    } catch ( Throwable t ) {\n      
logChannelInterface.logDebug( t.getMessage(), t );\n    }\n\n    NamedCluster namedCluster = namedClusterService.getClusterTemplate();\n    if ( entrynode != null ) {\n      // load default values for cluster & legacy fallback\n      namedCluster.setHdfsHost( XMLHandler.getTagValue( entrynode, HDFS_HOSTNAME ) ); //$NON-NLS-1$\n      namedCluster.setHdfsPort( XMLHandler.getTagValue( entrynode, HDFS_PORT ) ); //$NON-NLS-1$\n      namedCluster.setJobTrackerHost( XMLHandler.getTagValue( entrynode, JOB_TRACKER_HOSTNAME ) ); //$NON-NLS-1$\n      namedCluster.setJobTrackerPort( XMLHandler.getTagValue( entrynode, JOB_TRACKER_PORT ) ); //$NON-NLS-1$\n    } else if ( rep != null ) {\n      // load default values for cluster & legacy fallback\n      try {\n        namedCluster.setHdfsHost( rep.getJobEntryAttributeString( id_jobentry, HDFS_HOSTNAME ) );\n        namedCluster.setHdfsPort( rep.getJobEntryAttributeString( id_jobentry, HDFS_PORT ) ); //$NON-NLS-1$\n        namedCluster\n          .setJobTrackerHost( rep.getJobEntryAttributeString( id_jobentry, JOB_TRACKER_HOSTNAME ) ); //$NON-NLS-1$\n        namedCluster\n          .setJobTrackerPort( rep.getJobEntryAttributeString( id_jobentry, JOB_TRACKER_PORT ) ); //$NON-NLS-1$\n      } catch ( KettleException ke ) {\n        logChannelInterface.logError( ke.getMessage(), ke );\n      }\n    }\n    return namedCluster;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/entry/UserDefinedItem.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.entry;\n\nimport org.pentaho.ui.xul.XulEventSource;\n\nimport java.beans.PropertyChangeListener;\n\npublic class UserDefinedItem implements XulEventSource {\n  private String name;\n  private String value;\n\n  public UserDefinedItem() {\n  }\n\n  public String getName() {\n    return name;\n  }\n\n  public void setName( String name ) {\n    this.name = name;\n  }\n\n  public String getValue() {\n    return value;\n  }\n\n  public void setValue( String value ) {\n    this.value = value;\n  }\n\n  public void addPropertyChangeListener( PropertyChangeListener listener ) {\n  }\n\n  public void removePropertyChangeListener( PropertyChangeListener listener ) {\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/entry/hadoop/JobEntryHadoopJobExecutor.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.entry.hadoop;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.DialogClassUtil;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.NamedClusterLoadSaveUtil;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.UserDefinedItem;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.pmr.JobEntryHadoopTransJobExecutor;\nimport org.pentaho.di.cluster.SlaveServer;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.Result;\nimport org.pentaho.di.core.annotations.JobEntry;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.entry.JobEntryBase;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.mapreduce.MapReduceJobAdvanced;\nimport org.pentaho.hadoop.shim.api.mapreduce.MapReduceJobBuilder;\nimport 
org.pentaho.hadoop.shim.api.mapreduce.MapReduceJobSimple;\nimport org.pentaho.hadoop.shim.api.mapreduce.MapReduceService;\nimport org.pentaho.hadoop.shim.api.mapreduce.TaskCompletionEvent;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.runtime.test.action.impl.RuntimeTestActionServiceImpl;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\nimport org.w3c.dom.Node;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.lang.reflect.InvocationTargetException;\nimport java.lang.reflect.Method;\nimport java.net.MalformedURLException;\nimport java.net.URL;\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.concurrent.TimeUnit;\n\n@JobEntry( id = \"HadoopJobExecutorPlugin\", image = \"HDE.svg\", name = \"HadoopJobExecutorPlugin.Name\",\n  description = \"HadoopJobExecutorPlugin.Description\",\n  categoryDescription = \"i18n:org.pentaho.di.job:JobCategory.Category.BigData\",\n  i18nPackageName = \"org.pentaho.big.data.kettle.plugins.mapreduce\",\n  documentationUrl = \"https://pentaho-community.atlassian.net/wiki/display/EAI/Hadoop+Job+Executor\" )\npublic class JobEntryHadoopJobExecutor extends JobEntryBase implements Cloneable, JobEntryInterface {\n  private static final String DEFAULT_LOGGING_INTERVAL = \"60\";\n  public static final String CLUSTER_NAME = \"cluster_name\";\n  public static final String HDFS_HOSTNAME = \"hdfs_hostname\";\n  public static final String HDFS_PORT = \"hdfs_port\";\n  public static final String JOB_TRACKER_HOSTNAME = \"job_tracker_hostname\";\n  public static final String JOB_TRACKER_PORT = \"job_tracker_port\";\n  private static Class<?> PKG = JobEntryHadoopJobExecutor.class; // for i18n purposes, needed by Translator2!!\n  public static final String DIALOG_NAME = DialogClassUtil.getDialogClassName( PKG );\n  // $NON-NLS-1$\n  private final NamedClusterService 
namedClusterService;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n  private final NamedClusterLoadSaveUtil namedClusterLoadSaveUtil = new NamedClusterLoadSaveUtil();\n  private String hadoopJobName;\n  private String jarUrl = \"\";\n  private String driverClass = \"\";\n  private boolean isSimple = true;\n  private String cmdLineArgs;\n  private String outputKeyClass;\n  private String outputValueClass;\n  private String mapperClass;\n  private String combinerClass;\n  private String reducerClass;\n  private String inputFormatClass;\n  private String outputFormatClass;\n  private NamedCluster namedCluster;\n  private String inputPath;\n  private String outputPath;\n  private boolean blocking;\n  private String loggingInterval = DEFAULT_LOGGING_INTERVAL; // 60 seconds default\n  private boolean simpleBlocking;\n  private String simpleLoggingInterval = loggingInterval;\n  private String numMapTasks = \"1\";\n  private String numReduceTasks = \"1\";\n  private List<UserDefinedItem> userDefined = new ArrayList<UserDefinedItem>();\n\n  public JobEntryHadoopJobExecutor( NamedClusterService namedClusterService,\n                                    RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester,\n                                    NamedClusterServiceLocator namedClusterServiceLocator ) {\n    this.namedClusterService = namedClusterService;\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.runtimeTester = runtimeTester;\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n  }\n\n  public JobEntryHadoopJobExecutor() {\n    this.namedClusterService = NamedClusterManager.getInstance();\n    this.runtimeTester = RuntimeTesterImpl.getInstance();\n    this.runtimeTestActionService = RuntimeTestActionServiceImpl.getInstance();\n    this.namedClusterServiceLocator = 
BigDataServicesHelper.getNamedClusterServiceLocator();\n  }\n\n\n\n  public NamedClusterService getNamedClusterService() {\n    return namedClusterService;\n  }\n\n  public RuntimeTestActionService getRuntimeTestActionService() {\n    return runtimeTestActionService;\n  }\n\n  public RuntimeTester getRuntimeTester() {\n    return runtimeTester;\n  }\n\n  public NamedClusterServiceLocator getNamedClusterServiceLocator() {\n    return namedClusterServiceLocator;\n  }\n\n  public String getHadoopJobName() {\n    return hadoopJobName;\n  }\n\n  public void setHadoopJobName( String hadoopJobName ) {\n    this.hadoopJobName = hadoopJobName;\n  }\n\n  public String getJarUrl() {\n    return jarUrl;\n  }\n\n  public void setJarUrl( String jarUrl ) {\n    this.jarUrl = jarUrl;\n  }\n\n  public String getDriverClass() {\n    return driverClass;\n  }\n\n  public void setDriverClass( String driverClass ) {\n    this.driverClass = driverClass;\n  }\n\n  public boolean isSimple() {\n    return isSimple;\n  }\n\n  public void setSimple( boolean isSimple ) {\n    this.isSimple = isSimple;\n  }\n\n  public String getCmdLineArgs() {\n    return cmdLineArgs;\n  }\n\n  public void setCmdLineArgs( String cmdLineArgs ) {\n    this.cmdLineArgs = cmdLineArgs;\n  }\n\n  public String getOutputKeyClass() {\n    return outputKeyClass;\n  }\n\n  public void setOutputKeyClass( String outputKeyClass ) {\n    this.outputKeyClass = outputKeyClass;\n  }\n\n  public String getOutputValueClass() {\n    return outputValueClass;\n  }\n\n  public void setOutputValueClass( String outputValueClass ) {\n    this.outputValueClass = outputValueClass;\n  }\n\n  public String getMapperClass() {\n    return mapperClass;\n  }\n\n  public void setMapperClass( String mapperClass ) {\n    this.mapperClass = mapperClass;\n  }\n\n  public String getCombinerClass() {\n    return combinerClass;\n  }\n\n  public void setCombinerClass( String combinerClass ) {\n    this.combinerClass = combinerClass;\n  }\n\n  public 
String getReducerClass() {\n    return reducerClass;\n  }\n\n  public void setReducerClass( String reducerClass ) {\n    this.reducerClass = reducerClass;\n  }\n\n  public String getInputFormatClass() {\n    return inputFormatClass;\n  }\n\n  public void setInputFormatClass( String inputFormatClass ) {\n    this.inputFormatClass = inputFormatClass;\n  }\n\n  public String getOutputFormatClass() {\n    return outputFormatClass;\n  }\n\n  public void setOutputFormatClass( String outputFormatClass ) {\n    this.outputFormatClass = outputFormatClass;\n  }\n\n  public NamedCluster getNamedCluster() {\n    return namedCluster;\n  }\n\n  public void setNamedCluster( NamedCluster namedCluster ) {\n    this.namedCluster = namedCluster;\n  }\n\n  public String getInputPath() {\n    return inputPath;\n  }\n\n  public void setInputPath( String inputPath ) {\n    this.inputPath = inputPath;\n  }\n\n  public String getOutputPath() {\n    return outputPath;\n  }\n\n  public void setOutputPath( String outputPath ) {\n    this.outputPath = outputPath;\n  }\n\n  public boolean isBlocking() {\n    return blocking;\n  }\n\n  public void setBlocking( boolean blocking ) {\n    this.blocking = blocking;\n  }\n\n  public String getLoggingInterval() {\n    return loggingInterval == null ? 
DEFAULT_LOGGING_INTERVAL : loggingInterval;\n  }\n\n  public void setLoggingInterval( String loggingInterval ) {\n    this.loggingInterval = loggingInterval;\n  }\n\n  public List<UserDefinedItem> getUserDefined() {\n    return userDefined;\n  }\n\n  public void setUserDefined( List<UserDefinedItem> userDefined ) {\n    this.userDefined = userDefined;\n  }\n\n  public String getNumMapTasks() {\n    return numMapTasks;\n  }\n\n  public void setNumMapTasks( String numMapTasks ) {\n    this.numMapTasks = numMapTasks;\n  }\n\n  public String getNumReduceTasks() {\n    return numReduceTasks;\n  }\n\n  public void setNumReduceTasks( String numReduceTasks ) {\n    this.numReduceTasks = numReduceTasks;\n  }\n\n  public Result execute( final Result result, int arg1 ) throws KettleException {\n    result.setNrErrors( 0 );\n\n    log.setLogLevel( parentJob.getLogLevel() );\n\n    try {\n      URL resolvedJarUrl = resolveJarUrl( jarUrl );\n      if ( log.isDetailed() ) {\n        logDetailed( BaseMessages.getString( PKG, \"JobEntryHadoopJobExecutor.ResolvedJar\", resolvedJarUrl\n          .toExternalForm() ) );\n      }\n\n      MapReduceService mapReduceService = namedClusterServiceLocator.getService( namedCluster, MapReduceService.class );\n      if ( isSimple ) {\n        String simpleLoggingIntervalS = environmentSubstitute( getSimpleLoggingInterval() );\n        int simpleLogInt = 60;\n        try {\n          simpleLogInt = Integer.parseInt( simpleLoggingIntervalS, 10 );\n        } catch ( NumberFormatException e ) {\n          logError( BaseMessages.getString( PKG, \"ErrorParsingLogInterval\", simpleLoggingIntervalS, simpleLogInt ) );\n        }\n\n        MapReduceJobSimple mapReduceJobSimple =\n          mapReduceService.executeSimple( resolvedJarUrl, environmentSubstitute( driverClass ),\n            environmentSubstitute( cmdLineArgs ) );\n\n        String mainClass = mapReduceJobSimple.getMainClass();\n        if ( log.isDetailed() ) {\n          logDetailed( 
BaseMessages.getString(\n            PKG, \"JobEntryHadoopJobExecutor.UsingDriverClass\", mainClass == null ? \"null\" : mainClass ) );\n          logDetailed( BaseMessages.getString( PKG, \"JobEntryHadoopJobExecutor.SimpleMode\" ) );\n        }\n        if ( simpleBlocking ) {\n          boolean done = false;\n          do {\n            done =\n              mapReduceJobSimple.waitOnCompletion( simpleLogInt, TimeUnit.SECONDS, new MapReduceService.Stoppable() {\n                @Override public boolean isStopped() {\n                  return parentJob.isStopped();\n                }\n              } );\n            logDetailed( BaseMessages\n              .getString( JobEntryHadoopJobExecutor.class, \"JobEntryHadoopJobExecutor.Blocking\", mainClass ) );\n          } while ( !parentJob.isStopped() && !done );\n          if ( !done ) {\n            mapReduceJobSimple.killJob();\n          }\n          if ( !mapReduceJobSimple.isSuccessful() ) {\n            result.setStopped( true );\n            result.setNrErrors( 1 );\n            result.setResult( false );\n            log.logError(\n              BaseMessages.getString( PKG, \"JobEntryHadoopJobExecutor.FailedToExecuteClass\", mainClass,\n                mapReduceJobSimple.getStatus() ) );\n          }\n        }\n      } else {\n        if ( log.isDetailed() ) {\n          logDetailed( BaseMessages.getString( PKG, \"JobEntryHadoopJobExecutor.AdvancedMode\" ) );\n        }\n        MapReduceJobBuilder jobBuilder = mapReduceService.createJobBuilder( log, variables );\n\n        jobBuilder.setResolvedJarUrl( resolvedJarUrl );\n        jobBuilder.setJarUrl( environmentSubstitute( jarUrl ) );\n        jobBuilder.setHadoopJobName( environmentSubstitute( hadoopJobName ) );\n\n        jobBuilder.setOutputKeyClass( environmentSubstitute( outputKeyClass ) );\n        jobBuilder.setOutputValueClass( environmentSubstitute( outputValueClass ) );\n\n        if ( mapperClass != null ) {\n          jobBuilder.setMapperClass( 
environmentSubstitute( mapperClass ) );\n        }\n        if ( combinerClass != null ) {\n          jobBuilder.setCombinerClass( environmentSubstitute( combinerClass ) );\n        }\n        if ( reducerClass != null ) {\n          jobBuilder.setReducerClass( environmentSubstitute( reducerClass ) );\n        }\n\n        if ( inputFormatClass != null ) {\n          jobBuilder.setInputFormatClass( environmentSubstitute( inputFormatClass ) );\n        }\n        if ( outputFormatClass != null ) {\n          jobBuilder.setOutputFormatClass( environmentSubstitute( outputFormatClass ) );\n        }\n\n        jobBuilder.setInputPaths( JobEntryHadoopTransJobExecutor.splitInputPaths( inputPath, variables ) );\n        jobBuilder.setOutputPath( environmentSubstitute( outputPath ) );\n\n        // process user defined values\n        for ( UserDefinedItem item : userDefined ) {\n          if ( item.getName() != null && !\"\".equals( item.getName() ) && item.getValue() != null\n            && !\"\".equals( item.getValue() ) ) {\n            String nameS = environmentSubstitute( item.getName() );\n            String valueS = environmentSubstitute( item.getValue() );\n            jobBuilder.set( nameS, valueS );\n          }\n        }\n\n        String numMapTasksS = environmentSubstitute( numMapTasks );\n        String numReduceTasksS = environmentSubstitute( numReduceTasks );\n        int numM = 1;\n        try {\n          numM = Integer.parseInt( numMapTasksS );\n        } catch ( NumberFormatException e ) {\n          logError( \"Can't parse number of map tasks '\" + numMapTasksS + \"'. Setting number of map tasks to 1\" );\n        }\n        int numR = 1;\n        try {\n          numR = Integer.parseInt( numReduceTasksS );\n        } catch ( NumberFormatException e ) {\n          logError( \"Can't parse number of reduce tasks '\" + numReduceTasksS + \"'. Setting number of reduce tasks to 1\" );
\n        }\n\n        jobBuilder.setNumMapTasks( numM );\n        jobBuilder.setNumReduceTasks( numR );\n\n        MapReduceJobAdvanced mapReduceJobAdvanced = jobBuilder.submit();\n\n        String loggingIntervalS = environmentSubstitute( getLoggingInterval() );\n        int logIntv = 60;\n        try {\n          logIntv = Integer.parseInt( loggingIntervalS );\n        } catch ( NumberFormatException e ) {\n          logError( BaseMessages.getString( PKG, \"ErrorParsingLogInterval\", loggingIntervalS, logIntv ) );\n        }\n        if ( blocking ) {\n          try {\n            int taskCompletionEventIndex = 0;\n            while ( !mapReduceJobAdvanced\n              .waitOnCompletion( logIntv >= 1 ? logIntv : 60, TimeUnit.SECONDS, new MapReduceService.Stoppable() {\n                @Override public boolean isStopped() {\n                  return parentJob.isStopped();\n                }\n              } ) ) {\n              if ( logIntv >= 1 ) {\n                printJobStatus( mapReduceJobAdvanced );\n                taskCompletionEventIndex = logTaskMessages( mapReduceJobAdvanced, taskCompletionEventIndex );\n              }\n            }\n\n            if ( parentJob.isStopped() && !mapReduceJobAdvanced.isComplete() ) {\n              // We must stop the job running on Hadoop\n              mapReduceJobAdvanced.killJob();\n              // Indicate this job entry did not complete\n              result.setResult( false );\n            }\n\n            printJobStatus( mapReduceJobAdvanced );\n            // Log any messages we may have missed while polling\n            logTaskMessages( mapReduceJobAdvanced, taskCompletionEventIndex );\n          } catch ( InterruptedException ie ) {\n            logError( ie.getMessage(), ie );\n          }\n\n          // Entry is successful if the MR job is successful overall\n          result.setResult( mapReduceJobAdvanced.isSuccessful() );\n        }\n\n      }\n    } catch ( 
Throwable t ) {\n      t.printStackTrace();\n      result.setStopped( true );\n      result.setNrErrors( 1 );\n      result.setResult( false );\n      logError( t.getMessage(), t );\n    }\n\n    return result;\n  }\n\n  @VisibleForTesting\n  URL resolveJarUrl( final String jarUrl ) throws MalformedURLException {\n    return resolveJarUrl( jarUrl, this );\n  }\n\n  public static URL resolveJarUrl( final String jarUrl, VariableSpace variableSpace ) throws MalformedURLException {\n    String jarUrlS = variableSpace.environmentSubstitute( jarUrl );\n    if ( jarUrlS.indexOf( \"://\" ) == -1 ) {\n      // default to file://\n      File jarFile = new File( jarUrlS );\n      return jarFile.toURI().toURL();\n    } else {\n      return new URL( jarUrlS );\n    }\n  }\n\n  /**\n   * Log messages indicating completion (success/failure) of component tasks for the provided running job.\n   *\n   * @param runningJob Running job to poll for completion events\n   * @param startIndex Start at this event index to poll from\n   * @return Total events consumed\n   * @throws IOException Error fetching events\n   */\n  private int logTaskMessages( MapReduceJobAdvanced runningJob, int startIndex ) throws IOException {\n    TaskCompletionEvent[] tcEvents = runningJob.getTaskCompletionEvents( startIndex );\n    for ( int i = 0; i < tcEvents.length; i++ ) {\n      String[] diags = runningJob.getTaskDiagnostics( tcEvents[ i ].getTaskAttemptId() );\n      StringBuilder diagsOutput = new StringBuilder();\n\n      if ( diags != null && diags.length > 0 ) {\n        diagsOutput.append( Const.CR );\n        for ( String s : diags ) {\n          diagsOutput.append( s );\n          diagsOutput.append( Const.CR );\n        }\n      }\n\n      switch ( tcEvents[ i ].getTaskStatus() ) {\n        case KILLED:\n          logError( BaseMessages\n            .getString(\n              PKG,\n              \"JobEntryHadoopJobExecutor.TaskDetails\", TaskCompletionEvent.Status.KILLED,\n              
tcEvents[ i ].getTaskAttemptId(), tcEvents[ i ].getTaskAttemptId(), tcEvents[ i ].getEventId(),\n              diagsOutput ) ); //$NON-NLS-1$\n\n          break;\n        case FAILED:\n          logError( BaseMessages\n            .getString(\n              PKG,\n              \"JobEntryHadoopJobExecutor.TaskDetails\", TaskCompletionEvent.Status.FAILED,\n              tcEvents[ i ].getTaskAttemptId(), tcEvents[ i ].getTaskAttemptId(), tcEvents[ i ].getEventId(),\n              diagsOutput ) ); //$NON-NLS-1$\n\n          break;\n        case SUCCEEDED:\n          logDetailed( BaseMessages\n            .getString(\n              PKG,\n              \"JobEntryHadoopJobExecutor.TaskDetails\", TaskCompletionEvent.Status.SUCCEEDED,\n              tcEvents[ i ].getTaskAttemptId(), tcEvents[ i ].getTaskAttemptId(), tcEvents[ i ].getEventId(),\n              diagsOutput ) ); //$NON-NLS-1$\n\n          break;\n      }\n    }\n    // Advance the polling index past the events just consumed\n    return startIndex + tcEvents.length;\n  }\n\n  /**\n   * Execute the main method of the provided class with the current command line arguments.\n   *\n   * @param clazz Class with main method to execute\n   * @throws NoSuchMethodException\n   * @throws IllegalAccessException\n   * @throws InvocationTargetException\n   */\n  protected void executeMainMethod( Class<?> clazz ) throws NoSuchMethodException, IllegalAccessException,\n    InvocationTargetException {\n    final ClassLoader cl = Thread.currentThread().getContextClassLoader();\n    try {\n      Thread.currentThread().setContextClassLoader( clazz.getClassLoader() );\n      Method mainMethod = clazz.getMethod( \"main\", new Class[] { String[].class } );\n      String commandLineArgs = environmentSubstitute( cmdLineArgs );\n      Object[] args = ( commandLineArgs != null ) ? 
new Object[] { commandLineArgs.split( \" \" ) } : new Object[] { new String[ 0 ] }; // main( String[] ) always expects one String[] argument\n      mainMethod.invoke( null, args );\n    } finally {\n      Thread.currentThread().setContextClassLoader( cl );\n    }\n  }\n\n  public void printJobStatus( MapReduceJobAdvanced runningJob ) throws IOException {\n    if ( log.isBasic() ) {\n      double setupPercent = runningJob.getSetupProgress() * 100f;\n      double mapPercent = runningJob.getMapProgress() * 100f;\n      double reducePercent = runningJob.getReduceProgress() * 100f;\n      logBasic( BaseMessages.getString( PKG, \"JobEntryHadoopJobExecutor.RunningPercent\", setupPercent, mapPercent,\n        reducePercent ) );\n    }\n  }\n\n  @Override\n  public void loadXML( Node entrynode, List<DatabaseMeta> databases, List<SlaveServer> slaveServers, Repository rep,\n                       IMetaStore metaStore )\n    throws KettleXMLException {\n    super.loadXML( entrynode, databases, slaveServers );\n    hadoopJobName = XMLHandler.getTagValue( entrynode, \"hadoop_job_name\" );\n\n    isSimple = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( entrynode, \"simple\" ) );\n    jarUrl = XMLHandler.getTagValue( entrynode, \"jar_url\" );\n    driverClass = XMLHandler.getTagValue( entrynode, \"driver_class\" );\n    cmdLineArgs = XMLHandler.getTagValue( entrynode, \"command_line_args\" );\n    simpleBlocking = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( entrynode, \"simple_blocking\" ) );\n    blocking = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( entrynode, \"blocking\" ) );\n    simpleLoggingInterval = XMLHandler.getTagValue( entrynode, \"simple_logging_interval\" );\n    loggingInterval = XMLHandler.getTagValue( entrynode, \"logging_interval\" );\n\n    mapperClass = XMLHandler.getTagValue( entrynode, \"mapper_class\" );\n    combinerClass = XMLHandler.getTagValue( entrynode, \"combiner_class\" );\n    reducerClass = XMLHandler.getTagValue( entrynode, \"reducer_class\" );\n    inputPath = XMLHandler.getTagValue( entrynode, 
\"input_path\" );\n    inputFormatClass = XMLHandler.getTagValue( entrynode, \"input_format_class\" );\n    outputPath = XMLHandler.getTagValue( entrynode, \"output_path\" );\n    outputKeyClass = XMLHandler.getTagValue( entrynode, \"output_key_class\" );\n    outputValueClass = XMLHandler.getTagValue( entrynode, \"output_value_class\" );\n    outputFormatClass = XMLHandler.getTagValue( entrynode, \"output_format_class\" );\n\n    namedCluster = namedClusterLoadSaveUtil.loadClusterConfig( namedClusterService, null, rep, metaStore, entrynode, log );\n\n    setRepository( rep );\n\n    // numMapTasks = Integer.parseInt(XMLHandler.getTagValue(entrynode, \"num_map_tasks\"));\n    numMapTasks = XMLHandler.getTagValue( entrynode, \"num_map_tasks\" );\n    // numReduceTasks = Integer.parseInt(XMLHandler.getTagValue(entrynode, \"num_reduce_tasks\"));\n    numReduceTasks = XMLHandler.getTagValue( entrynode, \"num_reduce_tasks\" );\n\n    // How many user defined elements?\n    userDefined = new ArrayList<UserDefinedItem>();\n    Node userDefinedList = XMLHandler.getSubNode( entrynode, \"user_defined_list\" );\n    int nrUserDefined = XMLHandler.countNodes( userDefinedList, \"user_defined\" );\n    for ( int i = 0; i < nrUserDefined; i++ ) {\n      Node userDefinedNode = XMLHandler.getSubNodeByNr( userDefinedList, \"user_defined\", i );\n      String name = XMLHandler.getTagValue( userDefinedNode, \"name\" );\n      String value = XMLHandler.getTagValue( userDefinedNode, \"value\" );\n      UserDefinedItem item = new UserDefinedItem();\n      item.setName( name );\n      item.setValue( value );\n      userDefined.add( item );\n    }\n  }\n\n  @Override public String getXML() {\n    StringBuilder retval = new StringBuilder( 1024 );\n    retval.append( super.getXML() );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"hadoop_job_name\", hadoopJobName ) );\n\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"simple\", isSimple ) );\n    
retval.append( \"      \" ).append( XMLHandler.addTagValue( \"jar_url\", jarUrl ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"driver_class\", driverClass ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"command_line_args\", cmdLineArgs ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"simple_blocking\", simpleBlocking ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"blocking\", blocking ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"logging_interval\", loggingInterval ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"simple_logging_interval\", simpleLoggingInterval ) );\n\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"mapper_class\", mapperClass ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"combiner_class\", combinerClass ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"reducer_class\", reducerClass ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"input_path\", inputPath ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"input_format_class\", inputFormatClass ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"output_path\", outputPath ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"output_key_class\", outputKeyClass ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"output_value_class\", outputValueClass ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"output_format_class\", outputFormatClass ) );\n\n    namedClusterLoadSaveUtil.getXmlNamedCluster( namedCluster, namedClusterService, metaStore, log, retval );\n\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"num_map_tasks\", numMapTasks ) );\n    
retval.append( \"      \" ).append( XMLHandler.addTagValue( \"num_reduce_tasks\", numReduceTasks ) );\n\n    retval.append( \"      <user_defined_list>\" ).append( Const.CR );\n    if ( userDefined != null ) {\n      for ( UserDefinedItem item : userDefined ) {\n        if ( item.getName() != null && !\"\".equals( item.getName() ) && item.getValue() != null\n          && !\"\".equals( item.getValue() ) ) {\n          retval.append( \"        <user_defined>\" ).append( Const.CR );\n          retval.append( \"          \" ).append( XMLHandler.addTagValue( \"name\", item.getName() ) );\n          retval.append( \"          \" ).append( XMLHandler.addTagValue( \"value\", item.getValue() ) );\n          retval.append( \"        </user_defined>\" ).append( Const.CR );\n        }\n      }\n    }\n    retval.append( \"      </user_defined_list>\" ).append( Const.CR );\n    return retval.toString();\n  }\n\n  @Override\n  public void loadRep( Repository rep, IMetaStore metaStore, ObjectId id_jobentry, List<DatabaseMeta> databases,\n                       List<SlaveServer> slaveServers ) throws KettleException {\n    if ( rep != null ) {\n      super.loadRep( rep, metaStore, id_jobentry, databases, slaveServers );\n\n      setHadoopJobName( rep.getJobEntryAttributeString( id_jobentry, \"hadoop_job_name\" ) );\n\n      setSimple( rep.getJobEntryAttributeBoolean( id_jobentry, \"simple\" ) );\n\n      setJarUrl( rep.getJobEntryAttributeString( id_jobentry, \"jar_url\" ) );\n      setDriverClass( rep.getJobEntryAttributeString( id_jobentry, \"driver_class\" ) );\n      setCmdLineArgs( rep.getJobEntryAttributeString( id_jobentry, \"command_line_args\" ) );\n      setSimpleBlocking( rep.getJobEntryAttributeBoolean( id_jobentry, \"simple_blocking\" ) );\n      setBlocking( rep.getJobEntryAttributeBoolean( id_jobentry, \"blocking\" ) );\n      setSimpleLoggingInterval( rep.getJobEntryAttributeString( id_jobentry, \"simple_logging_interval\" ) );\n      setLoggingInterval( 
rep.getJobEntryAttributeString( id_jobentry, \"logging_interval\" ) );\n\n      setMapperClass( rep.getJobEntryAttributeString( id_jobentry, \"mapper_class\" ) );\n      setCombinerClass( rep.getJobEntryAttributeString( id_jobentry, \"combiner_class\" ) );\n      setReducerClass( rep.getJobEntryAttributeString( id_jobentry, \"reducer_class\" ) );\n      setInputPath( rep.getJobEntryAttributeString( id_jobentry, \"input_path\" ) );\n      setInputFormatClass( rep.getJobEntryAttributeString( id_jobentry, \"input_format_class\" ) );\n      setOutputPath( rep.getJobEntryAttributeString( id_jobentry, \"output_path\" ) );\n      setOutputKeyClass( rep.getJobEntryAttributeString( id_jobentry, \"output_key_class\" ) );\n      setOutputValueClass( rep.getJobEntryAttributeString( id_jobentry, \"output_value_class\" ) );\n      setOutputFormatClass( rep.getJobEntryAttributeString( id_jobentry, \"output_format_class\" ) );\n\n      namedCluster = namedClusterLoadSaveUtil.loadClusterConfig( namedClusterService, id_jobentry, rep, metaStore, null, log );\n\n      setRepository( rep );\n\n      // setNumMapTasks(new Long(rep.getJobEntryAttributeInteger(id_jobentry, \"num_map_tasks\")).intValue());\n      setNumMapTasks( rep.getJobEntryAttributeString( id_jobentry, \"num_map_tasks\" ) );\n      // setNumReduceTasks(new Long(rep.getJobEntryAttributeInteger(id_jobentry, \"num_reduce_tasks\")).intValue());\n      setNumReduceTasks( rep.getJobEntryAttributeString( id_jobentry, \"num_reduce_tasks\" ) );\n\n      int argnr = rep.countNrJobEntryAttributes( id_jobentry, \"user_defined_name\" ); //$NON-NLS-1$\n      if ( argnr > 0 ) {\n        userDefined = new ArrayList<UserDefinedItem>();\n\n        UserDefinedItem item = null;\n        for ( int i = 0; i < argnr; i++ ) {\n          item = new UserDefinedItem();\n          item.setName( rep.getJobEntryAttributeString( id_jobentry, i, \"user_defined_name\" ) ); //$NON-NLS-1$\n          item.setValue( rep.getJobEntryAttributeString( 
id_jobentry, i, \"user_defined_value\" ) ); //$NON-NLS-1$\n          userDefined.add( item );\n        }\n      }\n    } else {\n      throw new KettleException( \"Unable to load from the repository. The repository is null.\" ); //$NON-NLS-1$\n    }\n  }\n\n  @Override public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_job ) throws KettleException {\n    if ( rep != null ) {\n      super.saveRep( rep, id_job );\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"hadoop_job_name\", hadoopJobName ); //$NON-NLS-1$\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"simple\", isSimple ); //$NON-NLS-1$\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"jar_url\", jarUrl ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"driver_class\", driverClass ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"command_line_args\", cmdLineArgs ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"simple_blocking\", simpleBlocking ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"blocking\", blocking ); //$NON-NLS-1$\n      rep\n        .saveJobEntryAttribute( id_job, getObjectId(), \"simple_logging_interval\", simpleLoggingInterval ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"logging_interval\", loggingInterval ); //$NON-NLS-1$\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"mapper_class\", mapperClass ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"combiner_class\", combinerClass ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"reducer_class\", reducerClass ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"input_path\", inputPath ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), 
\"input_format_class\", inputFormatClass ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"output_path\", outputPath ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"output_key_class\", outputKeyClass ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"output_value_class\", outputValueClass ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"output_format_class\", outputFormatClass ); //$NON-NLS-1$\n\n      namedClusterLoadSaveUtil\n        .saveNamedClusterRep( namedCluster, namedClusterService, rep, metaStore, id_job, getObjectId(), log );\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"num_map_tasks\", numMapTasks ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"num_reduce_tasks\", numReduceTasks ); //$NON-NLS-1$\n\n      if ( userDefined != null ) {\n        for ( int i = 0; i < userDefined.size(); i++ ) {\n          UserDefinedItem item = userDefined.get( i );\n          if ( item.getName() != null\n            && !\"\".equals( item.getName() ) && item.getValue() != null && !\"\"\n            .equals( item.getValue() ) ) { //$NON-NLS-1$ //$NON-NLS-2$\n            rep.saveJobEntryAttribute( id_job, getObjectId(), i, \"user_defined_name\", item.getName() ); //$NON-NLS-1$\n            rep.saveJobEntryAttribute( id_job, getObjectId(), i, \"user_defined_value\", item.getValue() ); //$NON-NLS-1$\n          }\n        }\n      }\n\n    } else {\n      throw new KettleException( \"Unable to save to a repository. The repository is null.\" ); //$NON-NLS-1$\n    }\n  }\n\n  @Override\n  public boolean evaluates() {\n    return true;\n  }\n\n  @Override\n  public boolean isUnconditional() {\n    return true;\n  }\n\n  public String getSimpleLoggingInterval() {\n    return simpleLoggingInterval == null ? 
DEFAULT_LOGGING_INTERVAL : simpleLoggingInterval;\n  }\n\n  public void setSimpleLoggingInterval( String simpleLoggingInterval ) {\n    this.simpleLoggingInterval = simpleLoggingInterval;\n  }\n\n  public boolean isSimpleBlocking() {\n    return simpleBlocking;\n  }\n\n  public void setSimpleBlocking( boolean simpleBlocking ) {\n    this.simpleBlocking = simpleBlocking;\n  }\n\n  @Override public String getDialogClassName() {\n    return  DIALOG_NAME;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/entry/pmr/JobEntryHadoopTransJobExecutor.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.entry.pmr;\n\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.DialogClassUtil;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.NamedClusterLoadSaveUtil;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.UserDefinedItem;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.step.exit.HadoopExitMeta;\nimport org.pentaho.di.cluster.SlaveServer;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.ObjectLocationSpecificationMethod;\nimport org.pentaho.di.core.Result;\nimport org.pentaho.di.core.annotations.JobEntry;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.plugins.JobEntryPluginType;\nimport org.pentaho.di.core.plugins.PluginInterface;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.util.CurrentDirectoryResolver;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.entry.JobEntryBase;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.HasRepositoryDirectories;\nimport 
org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.repository.RepositoryDirectory;\nimport org.pentaho.di.repository.RepositoryDirectoryInterface;\nimport org.pentaho.di.repository.StringObjectId;\nimport org.pentaho.di.resource.ResourceDefinition;\nimport org.pentaho.di.resource.ResourceNamingInterface;\nimport org.pentaho.di.trans.TransConfiguration;\nimport org.pentaho.di.trans.TransExecutionConfiguration;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.TransMeta.TransformationType;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.mapreduce.MapReduceJobAdvanced;\nimport org.pentaho.hadoop.shim.api.mapreduce.MapReduceJobBuilder;\nimport org.pentaho.hadoop.shim.api.mapreduce.MapReduceService;\nimport org.pentaho.hadoop.shim.api.mapreduce.PentahoMapReduceJobBuilder;\nimport org.pentaho.hadoop.shim.api.mapreduce.TaskCompletionEvent;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.runtime.test.action.impl.RuntimeTestActionServiceImpl;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\nimport org.w3c.dom.Node;\n\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.regex.Matcher;\nimport java.util.regex.Pattern;\n\n@SuppressWarnings( \"deprecation\" )\n@JobEntry( id = \"HadoopTransJobExecutorPlugin\", image = \"HDT.svg\", name = \"HadoopTransJobExecutorPlugin.Name\",\n  description = \"HadoopTransJobExecutorPlugin.Description\",\n  categoryDescription = \"i18n:org.pentaho.di.job:JobCategory.Category.BigData\",\n  i18nPackageName = 
\"org.pentaho.di.job.entries.hadooptransjobexecutor\" )\npublic class JobEntryHadoopTransJobExecutor extends JobEntryBase implements Cloneable, JobEntryInterface,\n  HasRepositoryDirectories {\n  public static final String MAPREDUCE_APPLICATION_CLASSPATH = \"mapreduce.application.classpath\";\n  public static final String DEFAULT_MAPREDUCE_APPLICATION_CLASSPATH =\n    \"$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*\";\n  public static final String PENTAHO_MAPREDUCE_PROPERTY_USE_DISTRIBUTED_CACHE = \"pmr.use.distributed.cache\";\n  // $NON-NLS-1$\n  public static final String PENTAHO_MAPREDUCE_PROPERTY_PMR_LIBRARIES_ARCHIVE_FILE = \"pmr.libraries.archive.file\";\n  public static final String PENTAHO_MAPREDUCE_PROPERTY_KETTLE_HDFS_INSTALL_DIR = \"pmr.kettle.dfs.install.dir\";\n  public static final String PENTAHO_MAPREDUCE_PROPERTY_KETTLE_INSTALLATION_ID = \"pmr.kettle.installation.id\";\n  public static final String PENTAHO_MAPREDUCE_PROPERTY_ADDITIONAL_PLUGINS = \"pmr.kettle.additional.plugins\";\n  private static Class<?> PKG = JobEntryHadoopTransJobExecutor.class; // for i18n purposes, needed by Translator2!!\n  public static final String DIALOG_NAME = DialogClassUtil.getDialogClassName( PKG );\n  private final NamedClusterService namedClusterService;\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n  private final NamedClusterLoadSaveUtil namedClusterLoadSaveUtil = new NamedClusterLoadSaveUtil();\n  private String hadoopJobName;\n  private String mapRepositoryDir;\n  private String mapRepositoryFile;\n  private ObjectId mapRepositoryReference;\n  private String mapTrans;\n  private String combinerRepositoryDir;\n  private String combinerRepositoryFile;\n  private ObjectId combinerRepositoryReference;\n  private String combinerTrans;\n  private boolean combiningSingleThreaded;\n  private String reduceRepositoryDir;\n  private String reduceRepositoryFile;\n  private ObjectId 
reduceRepositoryReference;\n  private String reduceTrans;\n  private boolean reducingSingleThreaded;\n  private String mapInputStepName;\n  private String mapOutputStepName;\n  private String combinerInputStepName;\n  private String combinerOutputStepName;\n  private String reduceInputStepName;\n  private String reduceOutputStepName;\n  private boolean suppressOutputMapKey;\n  private boolean suppressOutputMapValue;\n  private boolean suppressOutputKey;\n  private boolean suppressOutputValue;\n  private String inputFormatClass;\n  private String outputFormatClass;\n  private NamedCluster namedCluster;\n  private String inputPath;\n  private String outputPath;\n  private boolean cleanOutputPath;\n  private boolean blocking = true;\n  private String loggingInterval = \"60\";\n  private String numMapTasks = \"1\";\n  private String numReduceTasks = \"1\";\n  private static final String KTR_EXT = \".ktr\";\n  private List<UserDefinedItem> userDefined = new ArrayList<UserDefinedItem>();\n  private final RuntimeTester runtimeTester;\n  private final RuntimeTestActionService runtimeTestActionService;\n\n  public JobEntryHadoopTransJobExecutor( NamedClusterService namedClusterService,\n                                         RuntimeTestActionService runtimeTestActionService,\n                                         RuntimeTester runtimeTester,\n                                         NamedClusterServiceLocator namedClusterServiceLocator ) throws Throwable {\n    this.namedClusterService = namedClusterService;\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n    this.runtimeTester = runtimeTester;\n    reducingSingleThreaded = false;\n    combiningSingleThreaded = false;\n  }\n\n  public JobEntryHadoopTransJobExecutor() {\n    this.namedClusterService = NamedClusterManager.getInstance();\n    this.runtimeTester = RuntimeTesterImpl.getInstance();\n    this.runtimeTestActionService = 
RuntimeTestActionServiceImpl.getInstance();\n    this.namedClusterServiceLocator = BigDataServicesHelper.getNamedClusterServiceLocator();\n  }\n\n  protected static final TransMeta loadTransMeta( Bowl bowl, VariableSpace space, Repository rep, String filename,\n                                                ObjectId transformationId, String repositoryDir, String repositoryFile )\n          throws KettleException {\n\n    TransMeta transMeta = null;\n\n    if ( rep == null ) {\n      if ( !Const.isEmpty( filename ) ) {\n        String realFilename = space.environmentSubstitute( filename );\n        transMeta = new TransMeta( bowl, realFilename );\n      }\n    } else {\n      if ( !Const.isEmpty( filename ) ) {\n        transMeta = getTransMetaFromRepo( filename, rep, space );\n      } else if ( transformationId != null ) {\n        transMeta = rep.loadTransformation( transformationId, null );\n      } else if ( !Const.isEmpty( repositoryDir ) && !Const.isEmpty( repositoryFile ) ) {\n        transMeta = getTransMetaFromRepo( repositoryDir, repositoryFile, rep, space );\n      }\n    }\n\n    return transMeta;\n  }\n\n  public static TransMeta getTransMetaFromRepo( String fullPath, Repository rep, VariableSpace space ) throws KettleException {\n    if ( fullPath == null ) {\n      return null;\n    }\n    String trimPath = fullPath.trim();\n\n    if ( trimPath.isEmpty() || trimPath.endsWith( \"/\" ) ) {\n      return null;\n    }\n\n    int index = trimPath.lastIndexOf( '/' );\n\n    if ( index == -1 ) {\n      return null;\n    }\n\n    String filename = trimPath.substring( index + 1 );\n    String repDir = trimPath.substring( 0, index );\n\n    return getTransMetaFromRepo( repDir, filename, rep, space );\n  }\n\n  public static TransMeta getTransMetaFromRepo( String repositoryDir, String repositoryFile, Repository rep, VariableSpace space  ) throws KettleException {\n    if ( space instanceof JobEntryHadoopTransJobExecutor ) {\n      CurrentDirectoryResolver r = 
new CurrentDirectoryResolver();\n      JobEntryHadoopTransJobExecutor jobEntry = (JobEntryHadoopTransJobExecutor) space;\n      space = r.resolveCurrentDirectory( jobEntry.getParentJobMeta().getBowl(), jobEntry,\n        jobEntry.getParentJob().getRepositoryDirectory(), null );\n    }\n    String repositoryDirS = space.environmentSubstitute( repositoryDir );\n    if ( repositoryDirS.isEmpty() ) {\n      repositoryDirS = \"/\";\n    }\n    String repositoryFileS = space.environmentSubstitute( repositoryFile );\n    RepositoryDirectoryInterface repositoryDirectory =\n            rep.loadRepositoryDirectoryTree().findDirectory( repositoryDirS );\n    return  rep.loadTransformation( repositoryFileS, repositoryDirectory, null, true, null );\n  }\n\n  public static String[] splitInputPaths( String inputPath, VariableSpace variableSpace ) {\n    String inputPathS = variableSpace.environmentSubstitute( inputPath );\n\n    // This is a non-elegant way to split the path on commas unless inside curly braces. There should be\n    // a method in Const and/or a fancy regex for this kind of thing. Instead, find the curly-brace groups\n    // and temporarily replace the commas with a non-sensical string. 
Then restore the commas after\n    // splitting the input paths.\n    Matcher m = Pattern.compile( \"[{][^{]*[}]\" ).matcher( inputPathS );\n    StringBuffer sb = new StringBuffer();\n    while ( m.find() ) {\n      m.appendReplacement( sb, m.group().replace( \",\", \"@!@\" ) );\n    }\n    m.appendTail( sb );\n\n    String[] paths = sb.toString().split( \",\" );\n    // Restore the commas that were masked inside curly-brace groups.\n    for ( int i = 0; i < paths.length; i++ ) {\n      paths[i] = paths[i].replace( \"@!@\", \",\" );\n    }\n    return paths;\n  }\n\n  public String getHadoopJobName() {\n    return hadoopJobName;\n  }\n\n  public void setHadoopJobName( String hadoopJobName ) {\n    this.hadoopJobName = hadoopJobName;\n  }\n\n  /**\n   * @return  An array of 3 elements : 0 - specification method for mapper,\n   *                                   1 - specification method for combiner,\n   *                                   2 - specification method for reducer.\n   */\n  @Override\n  public ObjectLocationSpecificationMethod[] getSpecificationMethods() {\n    return new ObjectLocationSpecificationMethod[] {\n      defineSpecificationMethod( mapRepositoryDir, mapRepositoryFile, mapRepositoryReference ),\n      defineSpecificationMethod( combinerRepositoryDir, combinerRepositoryFile, combinerRepositoryReference ),\n      defineSpecificationMethod( reduceRepositoryDir, reduceRepositoryFile, reduceRepositoryReference )\n    };\n  }\n\n  /**\n   * Returns an array of 3 elements : 0 - mapper, 1 - combiner, 2 - reducer directories\n   * @return String array of 3 elements with the mapper, combiner, and reducer repository directories\n   */\n  @Override\n  public String[] getDirectories() {\n    return new String[]{\n      mapRepositoryDir != null ? mapRepositoryDir : mapTrans,\n      combinerRepositoryDir != null ? combinerRepositoryDir : combinerTrans,\n      reduceRepositoryDir != null ? 
reduceRepositoryDir : reduceTrans };\n  }\n\n  /**\n   * Updates repository directories with values from an array of 3 elements :\n   * 0 - mapper, 1 - combiner, 2 - reducer directories\n   * @param directory Array[2] of updated mapper, combiner, reducer directories to set\n   */\n  @Override\n  public void setDirectories( String[] directory ) {\n    if ( mapRepositoryDir != null ) {\n      mapRepositoryDir = directory[0];\n    } else {\n      mapTrans = directory[0];\n    }\n    if ( combinerRepositoryDir != null ) {\n      combinerRepositoryDir = directory[1];\n    } else {\n      combinerTrans = directory[1];\n    }\n    if ( reduceRepositoryDir != null ) {\n      reduceRepositoryDir = directory[2];\n    } else {\n      reduceTrans = directory[2];\n    }\n  }\n\n  public String getMapTrans() {\n    return mapTrans;\n  }\n\n  public void setMapTrans( String mapTrans ) {\n    this.mapTrans = mapTrans;\n  }\n\n  public String getCombinerTrans() {\n    return combinerTrans;\n  }\n\n  public void setCombinerTrans( String combinerTrans ) {\n    this.combinerTrans = combinerTrans;\n  }\n\n  public String getReduceTrans() {\n    return reduceTrans;\n  }\n\n  public void setReduceTrans( String reduceTrans ) {\n    this.reduceTrans = reduceTrans;\n  }\n\n  public String getMapRepositoryDir() {\n    return mapRepositoryDir;\n  }\n\n  public void setMapRepositoryDir( String mapRepositoryDir ) {\n    this.mapRepositoryDir = mapRepositoryDir;\n  }\n\n  public String getMapRepositoryFile() {\n    return mapRepositoryFile;\n  }\n\n  public void setMapRepositoryFile( String mapRepositoryFile ) {\n    this.mapRepositoryFile = mapRepositoryFile;\n  }\n\n  public ObjectId getMapRepositoryReference() {\n    return mapRepositoryReference;\n  }\n\n  public void setMapRepositoryReference( ObjectId mapRepositoryReference ) {\n    this.mapRepositoryReference = mapRepositoryReference;\n  }\n\n  public String getCombinerRepositoryDir() {\n    return combinerRepositoryDir;\n  }\n\n  public 
void setCombinerRepositoryDir( String combinerRepositoryDir ) {\n    this.combinerRepositoryDir = combinerRepositoryDir;\n  }\n\n  public String getCombinerRepositoryFile() {\n    return combinerRepositoryFile;\n  }\n\n  public void setCombinerRepositoryFile( String combinerRepositoryFile ) {\n    this.combinerRepositoryFile = combinerRepositoryFile;\n  }\n\n  public ObjectId getCombinerRepositoryReference() {\n    return combinerRepositoryReference;\n  }\n\n  public void setCombinerRepositoryReference( ObjectId combinerRepositoryReference ) {\n    this.combinerRepositoryReference = combinerRepositoryReference;\n  }\n\n  public String getReduceRepositoryDir() {\n    return reduceRepositoryDir;\n  }\n\n  public void setReduceRepositoryDir( String reduceRepositoryDir ) {\n    this.reduceRepositoryDir = reduceRepositoryDir;\n  }\n\n  public String getReduceRepositoryFile() {\n    return reduceRepositoryFile;\n  }\n\n  public void setReduceRepositoryFile( String reduceRepositoryFile ) {\n    this.reduceRepositoryFile = reduceRepositoryFile;\n  }\n\n  public ObjectId getReduceRepositoryReference() {\n    return reduceRepositoryReference;\n  }\n\n  public void setReduceRepositoryReference( ObjectId reduceRepositoryReference ) {\n    this.reduceRepositoryReference = reduceRepositoryReference;\n  }\n\n  public String getMapInputStepName() {\n    return mapInputStepName;\n  }\n\n  public void setMapInputStepName( String mapInputStepName ) {\n    this.mapInputStepName = mapInputStepName;\n  }\n\n  public String getMapOutputStepName() {\n    return mapOutputStepName;\n  }\n\n  public void setMapOutputStepName( String mapOutputStepName ) {\n    this.mapOutputStepName = mapOutputStepName;\n  }\n\n  public String getCombinerInputStepName() {\n    return combinerInputStepName;\n  }\n\n  public void setCombinerInputStepName( String combinerInputStepName ) {\n    this.combinerInputStepName = combinerInputStepName;\n  }\n\n  public String getCombinerOutputStepName() {\n    return 
combinerOutputStepName;\n  }\n\n  public void setCombinerOutputStepName( String combinerOutputStepName ) {\n    this.combinerOutputStepName = combinerOutputStepName;\n  }\n\n  public String getReduceInputStepName() {\n    return reduceInputStepName;\n  }\n\n  public void setReduceInputStepName( String reduceInputStepName ) {\n    this.reduceInputStepName = reduceInputStepName;\n  }\n\n  public String getReduceOutputStepName() {\n    return reduceOutputStepName;\n  }\n\n  public void setReduceOutputStepName( String reduceOutputStepName ) {\n    this.reduceOutputStepName = reduceOutputStepName;\n  }\n\n  public boolean getSuppressOutputOfMapKey() {\n    return suppressOutputMapKey;\n  }\n\n  public void setSuppressOutputOfMapKey( boolean suppress ) {\n    suppressOutputMapKey = suppress;\n  }\n\n  public boolean getSuppressOutputOfMapValue() {\n    return suppressOutputMapValue;\n  }\n\n  public void setSuppressOutputOfMapValue( boolean suppress ) {\n    suppressOutputMapValue = suppress;\n  }\n\n  public boolean getSuppressOutputOfKey() {\n    return suppressOutputKey;\n  }\n\n  public void setSuppressOutputOfKey( boolean suppress ) {\n    suppressOutputKey = suppress;\n  }\n\n  public boolean getSuppressOutputOfValue() {\n    return suppressOutputValue;\n  }\n\n  public void setSuppressOutputOfValue( boolean suppress ) {\n    suppressOutputValue = suppress;\n  }\n\n  public String getInputFormatClass() {\n    return inputFormatClass;\n  }\n\n  public void setInputFormatClass( String inputFormatClass ) {\n    this.inputFormatClass = inputFormatClass;\n  }\n\n  public String getOutputFormatClass() {\n    return outputFormatClass;\n  }\n\n  public void setOutputFormatClass( String outputFormatClass ) {\n    this.outputFormatClass = outputFormatClass;\n  }\n\n  public String getInputPath() {\n    return inputPath;\n  }\n\n  public void setInputPath( String inputPath ) {\n    this.inputPath = inputPath;\n  }\n\n  public String getOutputPath() {\n    return outputPath;\n 
 }\n\n  public void setOutputPath( String outputPath ) {\n    this.outputPath = outputPath;\n  }\n\n  public boolean isCleanOutputPath() {\n    return cleanOutputPath;\n  }\n\n  public void setCleanOutputPath( boolean cleanOutputPath ) {\n    this.cleanOutputPath = cleanOutputPath;\n  }\n\n  public boolean isBlocking() {\n    return blocking;\n  }\n\n  public void setBlocking( boolean blocking ) {\n    this.blocking = blocking;\n  }\n\n  public String getLoggingInterval() {\n    return loggingInterval;\n  }\n\n  public void setLoggingInterval( String loggingInterval ) {\n    this.loggingInterval = loggingInterval;\n  }\n\n  public List<UserDefinedItem> getUserDefined() {\n    return userDefined;\n  }\n\n  public void setUserDefined( List<UserDefinedItem> userDefined ) {\n    this.userDefined = userDefined;\n  }\n\n  public String getNumMapTasks() {\n    return numMapTasks;\n  }\n\n  public void setNumMapTasks( String numMapTasks ) {\n    this.numMapTasks = numMapTasks;\n  }\n\n  public String getNumReduceTasks() {\n    return numReduceTasks;\n  }\n\n  public void setNumReduceTasks( String numReduceTasks ) {\n    this.numReduceTasks = numReduceTasks;\n  }\n\n  public Result execute( Result result, int arg1 ) throws KettleException {\n\n    result.setNrErrors( 0 );\n\n    try {\n\n      MapReduceService mapReduceService = namedClusterServiceLocator.getService( namedCluster, MapReduceService.class );\n      PentahoMapReduceJobBuilder jobBuilder = mapReduceService.createPentahoMapReduceJobBuilder( log, variables );\n\n      String hadoopJobNameS = environmentSubstitute( hadoopJobName );\n      jobBuilder.setHadoopJobName( hadoopJobNameS );\n\n      // mapper\n      TransExecutionConfiguration transExecConfig = new TransExecutionConfiguration();\n      TransMeta transMeta =\n        loadTransMeta( parentJobMeta.getBowl(), this, rep, mapTrans, mapRepositoryReference, mapRepositoryDir,\n          mapRepositoryFile );\n      TransConfiguration transConfig = new 
TransConfiguration( transMeta, transExecConfig );\n      String mapInputStepNameS = environmentSubstitute( mapInputStepName );\n      String mapOutputStepNameS = environmentSubstitute( mapOutputStepName );\n\n      try {\n        jobBuilder.verifyTransMeta( transMeta, mapInputStepNameS, mapOutputStepNameS );\n      } catch ( Exception ex ) {\n        throw new KettleException( BaseMessages\n          .getString( PKG, \"JobEntryHadoopTransJobExecutor.MapConfiguration.Error\" ), ex );\n      } finally {\n        if ( transMeta != null ) {\n          transMeta.disposeEmbeddedMetastoreProvider();\n        }\n      }\n\n      jobBuilder.setMapperInfo( transConfig.getXML(), mapInputStepNameS, mapOutputStepNameS );\n\n      jobBuilder.set( MapReduceJobBuilder.STRING_COMBINE_SINGLE_THREADED, combiningSingleThreaded ? \"true\" : \"false\" );\n\n      // Pass the single threaded reduction to the configuration...\n      //\n      jobBuilder.set( MapReduceJobBuilder.STRING_REDUCE_SINGLE_THREADED, reducingSingleThreaded ? 
\"true\" : \"false\" );\n\n      if ( getSuppressOutputOfMapKey() ) {\n        jobBuilder.setMapOutputKeyClass( jobBuilder.getHadoopWritableCompatibleClassName( null ) );\n      }\n      if ( getSuppressOutputOfMapValue() ) {\n        jobBuilder.setMapOutputValueClass( jobBuilder.getHadoopWritableCompatibleClassName( null ) );\n      }\n\n      // auto configure the output mapper key and value classes\n      if ( ( !getSuppressOutputOfMapKey() || !getSuppressOutputOfMapValue() ) && transMeta != null ) {\n        StepMeta mapOut = transMeta.findStep( mapOutputStepNameS );\n        if ( mapOut.getStepMetaInterface() instanceof HadoopExitMeta ) {\n          RowMetaInterface prevStepFields = transMeta.getPrevStepFields( mapOut );\n          if ( !getSuppressOutputOfMapKey() ) {\n            String keyName = ( (HadoopExitMeta) mapOut.getStepMetaInterface() ).getOutKeyFieldname();\n            int keyI = prevStepFields.indexOfValue( keyName );\n            ValueMetaInterface keyVM = ( keyI >= 0 ) ? prevStepFields.getValueMeta( keyI ) : null;\n            if ( keyVM == null ) {\n              throw new KettleException( BaseMessages.getString( PKG,\n                \"JobEntryHadoopTransJobExecutor.NoMapOutputKeyDefined.Error\" ) );\n            }\n            String hadoopWritableKey = jobBuilder.getHadoopWritableCompatibleClassName( keyVM );\n            jobBuilder.setMapOutputKeyClass( hadoopWritableKey );\n            logDebug( BaseMessages.getString( PKG, \"JobEntryHadoopTransJobExecutor.Message.MapOutputKeyMessage\",\n              hadoopWritableKey ) );\n          }\n\n          if ( !getSuppressOutputOfMapValue() ) {\n            String valName = ( (HadoopExitMeta) mapOut.getStepMetaInterface() ).getOutValueFieldname();\n            int valI = prevStepFields.indexOfValue( valName );\n            ValueMetaInterface valueVM = ( valI >= 0 ) ? 
prevStepFields.getValueMeta( valI ) : null;\n            if ( valueVM == null ) {\n              throw new KettleException( BaseMessages.getString( PKG,\n                \"JobEntryHadoopTransJobExecutor.NoMapOutputValueDefined.Error\" ) );\n            }\n            String hadoopWritableValue = jobBuilder.getHadoopWritableCompatibleClassName( valueVM );\n            jobBuilder.setMapOutputValueClass( hadoopWritableValue );\n            logDebug( BaseMessages.getString( PKG, \"JobEntryHadoopTransJobExecutor.Message.MapOutputValueMessage\",\n              hadoopWritableValue ) );\n          }\n        }\n      }\n\n      // combiner\n      transMeta =\n        loadTransMeta( parentJobMeta.getBowl(), this, rep, combinerTrans, combinerRepositoryReference,\n          combinerRepositoryDir, combinerRepositoryFile );\n      if ( transMeta != null ) {\n\n        if ( combiningSingleThreaded ) {\n          verifySingleThreadingValidity( transMeta );\n        }\n\n        String combinerInputStepNameS = environmentSubstitute( combinerInputStepName );\n        String combinerOutputStepNameS = environmentSubstitute( combinerOutputStepName );\n        transConfig = new TransConfiguration( transMeta, transExecConfig );\n        jobBuilder.setCombinerInfo( transConfig.getXML(), combinerInputStepNameS, combinerOutputStepNameS );\n        try {\n          jobBuilder.verifyTransMeta( transMeta, combinerInputStepNameS, combinerOutputStepNameS );\n        } catch ( Exception ex ) {\n          throw new KettleException( BaseMessages.getString( PKG,\n            \"JobEntryHadoopTransJobExecutor.CombinerConfiguration.Error\" ), ex );\n        } finally {\n          if ( transMeta != null ) {\n            transMeta.disposeEmbeddedMetastoreProvider();\n          }\n        }\n      }\n\n      // reducer\n      transMeta =\n        loadTransMeta( parentJobMeta.getBowl(), this, rep, reduceTrans, reduceRepositoryReference, reduceRepositoryDir,\n          reduceRepositoryFile );\n\n      if ( 
transMeta != null ) {\n\n        // See if this is a valid single threading reducer\n        //\n        if ( reducingSingleThreaded ) {\n          verifySingleThreadingValidity( transMeta );\n        }\n\n        String reduceInputStepNameS = environmentSubstitute( reduceInputStepName );\n        String reduceOutputStepNameS = environmentSubstitute( reduceOutputStepName );\n        transConfig = new TransConfiguration( transMeta, transExecConfig );\n        jobBuilder.setReducerInfo( transConfig.getXML(), reduceInputStepNameS, reduceOutputStepNameS );\n\n        try {\n          jobBuilder.verifyTransMeta( transMeta, reduceInputStepNameS, reduceOutputStepNameS );\n        } catch ( Exception ex ) {\n          throw new KettleException( BaseMessages.getString( PKG,\n            \"JobEntryHadoopTransJobExecutor.ReducerConfiguration.Error\" ), ex );\n        } finally {\n          if ( transMeta != null ) {\n            transMeta.disposeEmbeddedMetastoreProvider();\n          }\n        }\n\n        if ( getSuppressOutputOfKey() ) {\n          jobBuilder.setOutputKeyClass( jobBuilder.getHadoopWritableCompatibleClassName( null ) );\n        }\n        if ( getSuppressOutputOfValue() ) {\n          jobBuilder.setOutputValueClass( jobBuilder.getHadoopWritableCompatibleClassName( null ) );\n        }\n\n        // auto configure the output reduce key and value classes\n        if ( !getSuppressOutputOfKey() || !getSuppressOutputOfValue() ) {\n          StepMeta reduceOut = transMeta.findStep( reduceOutputStepNameS );\n          if ( reduceOut != null && reduceOut.getStepMetaInterface() instanceof HadoopExitMeta ) {\n            RowMetaInterface prevStepFields = transMeta.getPrevStepFields( reduceOut );\n            String keyName = ( (HadoopExitMeta) reduceOut.getStepMetaInterface() ).getOutKeyFieldname();\n            String valName = ( (HadoopExitMeta) reduceOut.getStepMetaInterface() ).getOutValueFieldname();\n            int keyI = prevStepFields.indexOfValue( keyName );\n            
ValueMetaInterface keyVM = ( keyI >= 0 ) ? prevStepFields.getValueMeta( keyI ) : null;\n            int valI = prevStepFields.indexOfValue( valName );\n            ValueMetaInterface valueVM = ( valI >= 0 ) ? prevStepFields.getValueMeta( valI ) : null;\n\n            if ( !getSuppressOutputOfKey() ) {\n              if ( keyVM == null ) {\n                throw new KettleException( BaseMessages.getString( PKG,\n                  \"JobEntryHadoopTransJobExecutor.NoOutputKeyDefined.Error\" ) );\n              }\n              String hadoopWritableKey = jobBuilder.getHadoopWritableCompatibleClassName( keyVM );\n              jobBuilder.setOutputKeyClass( hadoopWritableKey );\n              logDebug( BaseMessages.getString( PKG, \"JobEntryHadoopTransJobExecutor.Message.OutputKeyMessage\",\n                hadoopWritableKey ) );\n\n            }\n\n            if ( !getSuppressOutputOfValue() ) {\n              if ( valueVM == null ) {\n                throw new KettleException( BaseMessages.getString( PKG,\n                  \"JobEntryHadoopTransJobExecutor.NoOutputValueDefined.Error\" ) );\n              }\n              String hadoopWritableValue = jobBuilder.getHadoopWritableCompatibleClassName( valueVM );\n              jobBuilder.setOutputValueClass( hadoopWritableValue );\n              logDebug( BaseMessages.getString( PKG, \"JobEntryHadoopTransJobExecutor.Message.OutputValueMessage\",\n                hadoopWritableValue ) );\n            }\n          }\n        }\n      }\n\n      jobBuilder.setInputFormatClass( inputFormatClass );\n      jobBuilder.setOutputFormatClass( outputFormatClass );\n\n      jobBuilder.setInputPaths( splitInputPaths( inputPath, variables ) );\n      jobBuilder.setOutputPath( environmentSubstitute( outputPath ) );\n\n      // process user defined values\n      for ( UserDefinedItem item : userDefined ) {\n        if ( item.getName() != null\n          && !\"\".equals( item.getName() ) && item.getValue() != null && !\"\"\n          
.equals( item.getValue() ) ) { //$NON-NLS-1$ //$NON-NLS-2$\n          String nameS = environmentSubstitute( item.getName() );\n          String valueS = environmentSubstitute( item.getValue() );\n          jobBuilder.set( nameS, valueS );\n        }\n      }\n\n      String numMapTasksS = environmentSubstitute( numMapTasks );\n      try {\n        if ( Integer.parseInt( numMapTasksS ) < 0 ) {\n          throw new KettleException(\n            BaseMessages.getString( PKG, \"JobEntryHadoopTransJobExecutor.NumMapTasks.Error\" ) );\n        }\n      } catch ( NumberFormatException e ) {\n        if ( log.isDebug() ) {\n          logError( Const.getStackTracker( e ) );\n        }\n      }\n\n      String numReduceTasksS = environmentSubstitute( numReduceTasks );\n      try {\n        if ( Integer.parseInt( numReduceTasksS ) < 0 ) {\n          throw new KettleException( BaseMessages\n            .getString( PKG, \"JobEntryHadoopTransJobExecutor.NumReduceTasks.Error\" ) );\n        }\n      } catch ( NumberFormatException e ) {\n        if ( log.isDebug() ) {\n          logError( Const.getStackTracker( e ) );\n        }\n      }\n\n      jobBuilder.setNumMapTasks( Const.toInt( numMapTasksS, 1 ) );\n      jobBuilder.setNumReduceTasks( Const.toInt( numReduceTasksS, 1 ) );\n\n      jobBuilder.setLogLevel( getLogLevel() );\n\n      jobBuilder.setCleanOutputPath( isCleanOutputPath() );\n\n      MapReduceJobAdvanced runningJob = jobBuilder.submit();\n\n      String loggingIntervalS = environmentSubstitute( loggingInterval );\n      int logIntv = 60;\n      try {\n        logIntv = Integer.parseInt( loggingIntervalS );\n      } catch ( NumberFormatException e ) {\n        logError( \"Can't parse logging interval '\" + loggingIntervalS + \"'. 
Setting \" + \"logging interval to 60\" );\n      }\n\n      if ( blocking ) {\n        try {\n          int taskCompletionEventIndex = 0;\n          while ( !parentJob.isStopped() && !runningJob.isComplete() ) {\n            if ( logIntv >= 1 ) {\n              printJobStatus( runningJob );\n              taskCompletionEventIndex += logTaskMessages( runningJob, taskCompletionEventIndex );\n              Thread.sleep( logIntv * 1000 );\n            } else {\n              Thread.sleep( 60000 );\n            }\n          }\n\n          if ( parentJob.isStopped() && !runningJob.isComplete() ) {\n            // We must stop the job running on Hadoop\n            runningJob.killJob();\n            // Indicate this job entry did not complete\n            result.setResult( false );\n          }\n\n          printJobStatus( runningJob );\n          // Log any messages we may have missed while polling\n          logTaskMessages( runningJob, taskCompletionEventIndex );\n        } catch ( InterruptedException ie ) {\n          logError( ie.getMessage(), ie );\n        }\n\n        // Entry is successful if the MR job is successful overall\n        result.setResult( runningJob.isSuccessful() );\n      }\n\n    } catch ( Throwable t ) {\n      t.printStackTrace();\n      result.setStopped( true );\n      result.setNrErrors( 1 );\n      result.setResult( false );\n      logError( Const.NVL( t.getMessage(), \"\" ), t );\n    }\n\n    return result;\n  }\n\n  /**\n   * Log messages indicating completion (success/failure) of component tasks for the provided running job.\n   *\n   * @param runningJob\n   *          Running job to poll for completion events\n   * @param startIndex\n   *          Start at this event index to poll from\n   * @return Total events consumed\n   * @throws IOException\n   *           Error fetching events\n   */\n  private int logTaskMessages( MapReduceJobAdvanced runningJob, int startIndex ) throws IOException {\n    TaskCompletionEvent[] tcEvents = 
runningJob.getTaskCompletionEvents( startIndex );\n    for ( int i = 0; i < tcEvents.length; i++ ) {\n      String[] diags = runningJob.getTaskDiagnostics( tcEvents[ i ].getTaskAttemptId() );\n      StringBuilder diagsOutput = new StringBuilder();\n\n      if ( diags != null && diags.length > 0 ) {\n        diagsOutput.append( Const.CR );\n        for ( String s : diags ) {\n          diagsOutput.append( s );\n          diagsOutput.append( Const.CR );\n        }\n      }\n\n      TaskCompletionEvent.Status status = tcEvents[ i ].getTaskStatus();\n      switch ( status ) {\n        case KILLED:\n        case FAILED:\n        case TIPFAILED:\n          logError( BaseMessages\n            .getString(\n              PKG,\n              \"JobEntryHadoopTransJobExecutor.TaskDetails\", status, tcEvents[ i ].getTaskAttemptId(),\n              tcEvents[ i ].getTaskAttemptId(), tcEvents[ i ].getEventId(), diagsOutput ) ); //$NON-NLS-1$\n          break;\n        case SUCCEEDED:\n        case OBSOLETE:\n          logDetailed( BaseMessages\n            .getString(\n              PKG,\n              \"JobEntryHadoopTransJobExecutor.TaskDetails\", status,\n              tcEvents[ i ].getTaskAttemptId(), tcEvents[ i ].getTaskAttemptId(), tcEvents[ i ].getEventId(),\n              diagsOutput ) ); //$NON-NLS-1$\n          break;\n        default:\n          logError( BaseMessages\n            .getString(\n              PKG,\n              \"JobEntryHadoopTransJobExecutor.TaskDetails\", \"UNKNOWN\", tcEvents[ i ].getTaskAttemptId(),\n              tcEvents[ i ].getTaskAttemptId(), tcEvents[ i ].getEventId(), diagsOutput ) ); //$NON-NLS-1$\n      }\n    }\n    return tcEvents.length;\n  }\n\n  /**\n   * @return the plugin interface for this job entry.\n   */\n  public PluginInterface getPluginInterface() {\n    String pluginId = PluginRegistry.getInstance().getPluginId( this );\n    return 
PluginRegistry.getInstance().findPluginWithId( JobEntryPluginType.class, pluginId );\n  }\n\n  private void verifySingleThreadingValidity( TransMeta transMeta ) throws KettleException {\n    for ( StepMeta stepMeta : transMeta.getSteps() ) {\n      TransformationType[] types = stepMeta.getStepMetaInterface().getSupportedTransformationTypes();\n      boolean ok = false;\n      for ( TransformationType type : types ) {\n        if ( type == TransformationType.SingleThreaded ) {\n          ok = true;\n        }\n      }\n      if ( !ok ) {\n        throw new KettleException( \"Step '\" + stepMeta.getName() + \"' of type '\" + stepMeta.getStepID()\n          + \"' is not supported in a Single Threaded transformation engine.\" );\n      }\n    }\n  }\n\n  public void printJobStatus( MapReduceJobAdvanced runningJob ) throws IOException {\n    if ( log.isBasic() ) {\n      double setupPercent = runningJob.getSetupProgress() * 100f;\n      double mapPercent = runningJob.getMapProgress() * 100f;\n      double reducePercent = runningJob.getReduceProgress() * 100f;\n      logBasic( BaseMessages.getString( PKG,\n        \"JobEntryHadoopTransJobExecutor.RunningPercent\", setupPercent, mapPercent, reducePercent ) ); //$NON-NLS-1$\n    }\n  }\n\n  @Override\n  public void loadXML( Node entrynode, List<DatabaseMeta> databases, List<SlaveServer> slaveServers, Repository rep,\n                       IMetaStore metaStore )\n    throws KettleXMLException {\n    super.loadXML( entrynode, databases, slaveServers );\n    hadoopJobName = XMLHandler.getTagValue( entrynode, \"hadoop_job_name\" ); //$NON-NLS-1$\n\n    mapRepositoryDir = XMLHandler.getTagValue( entrynode, \"map_trans_repo_dir\" ); //$NON-NLS-1$\n    mapRepositoryFile = XMLHandler.getTagValue( entrynode, \"map_trans_repo_file\" ); //$NON-NLS-1$\n    String mapTransId = XMLHandler.getTagValue( entrynode, \"map_trans_repo_reference\" ); //$NON-NLS-1$\n    mapRepositoryReference = Const.isEmpty( mapTransId ) ? 
null : new StringObjectId( mapTransId );\n    mapTrans = XMLHandler.getTagValue( entrynode, \"map_trans\" ); //$NON-NLS-1$\n\n    combinerRepositoryDir = XMLHandler.getTagValue( entrynode, \"combiner_trans_repo_dir\" ); //$NON-NLS-1$\n    combinerRepositoryFile = XMLHandler.getTagValue( entrynode, \"combiner_trans_repo_file\" ); //$NON-NLS-1$\n    String combinerTransId = XMLHandler.getTagValue( entrynode, \"combiner_trans_repo_reference\" ); //$NON-NLS-1$\n    combinerRepositoryReference = Const.isEmpty( combinerTransId ) ? null : new StringObjectId( combinerTransId );\n    combinerTrans = XMLHandler.getTagValue( entrynode, \"combiner_trans\" ); //$NON-NLS-1$\n    final String combinerSingleThreaded = XMLHandler.getTagValue( entrynode, \"combiner_single_threaded\" ); //$NON-NLS-1$\n    if ( !Const.isEmpty( combinerSingleThreaded ) ) {\n      setCombiningSingleThreaded( \"Y\".equalsIgnoreCase( combinerSingleThreaded ) ); //$NON-NLS-1$\n    } else {\n      setCombiningSingleThreaded( false );\n    }\n\n    reduceRepositoryDir = XMLHandler.getTagValue( entrynode, \"reduce_trans_repo_dir\" ); //$NON-NLS-1$\n    reduceRepositoryFile = XMLHandler.getTagValue( entrynode, \"reduce_trans_repo_file\" ); //$NON-NLS-1$\n    String reduceTransId = XMLHandler.getTagValue( entrynode, \"reduce_trans_repo_reference\" ); //$NON-NLS-1$\n    reduceRepositoryReference = Const.isEmpty( reduceTransId ) ? 
null : new StringObjectId( reduceTransId );\n    reduceTrans = XMLHandler.getTagValue( entrynode, \"reduce_trans\" ); //$NON-NLS-1$\n    String single = XMLHandler.getTagValue( entrynode, \"reduce_single_threaded\" ); //$NON-NLS-1$\n    if ( Const.isEmpty( single ) ) {\n      setReducingSingleThreaded( false );\n    } else {\n      setReducingSingleThreaded( \"Y\".equalsIgnoreCase( single ) ); //$NON-NLS-1$\n    }\n\n    mapInputStepName = XMLHandler.getTagValue( entrynode, \"map_input_step_name\" ); //$NON-NLS-1$\n    mapOutputStepName = XMLHandler.getTagValue( entrynode, \"map_output_step_name\" ); //$NON-NLS-1$\n    combinerInputStepName = XMLHandler.getTagValue( entrynode, \"combiner_input_step_name\" ); //$NON-NLS-1$\n    combinerOutputStepName = XMLHandler.getTagValue( entrynode, \"combiner_output_step_name\" ); //$NON-NLS-1$\n    reduceInputStepName = XMLHandler.getTagValue( entrynode, \"reduce_input_step_name\" ); //$NON-NLS-1$\n    reduceOutputStepName = XMLHandler.getTagValue( entrynode, \"reduce_output_step_name\" ); //$NON-NLS-1$\n\n    blocking = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( entrynode, \"blocking\" ) ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    loggingInterval = XMLHandler.getTagValue( entrynode, \"logging_interval\" ); //$NON-NLS-1$\n    inputPath = XMLHandler.getTagValue( entrynode, \"input_path\" ); //$NON-NLS-1$\n    inputFormatClass = XMLHandler.getTagValue( entrynode, \"input_format_class\" ); //$NON-NLS-1$\n    outputPath = XMLHandler.getTagValue( entrynode, \"output_path\" ); //$NON-NLS-1$\n\n    final String cleanOutputPath = XMLHandler.getTagValue( entrynode, \"clean_output_path\" );\n    if ( !Const.isEmpty( cleanOutputPath ) ) { //$NON-NLS-1$\n      setCleanOutputPath( cleanOutputPath.equalsIgnoreCase( \"Y\" ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    }\n\n    if ( !Const.isEmpty( XMLHandler.getTagValue( entrynode, \"suppress_output_map_key\" ) ) ) {\n      suppressOutputMapKey = XMLHandler.getTagValue( entrynode, 
\"suppress_output_map_key\" ).equalsIgnoreCase( \"Y\" );\n    }\n    if ( !Const.isEmpty( XMLHandler.getTagValue( entrynode, \"suppress_output_map_value\" ) ) ) {\n      suppressOutputMapValue = XMLHandler.getTagValue( entrynode, \"suppress_output_map_value\" ).equalsIgnoreCase( \"Y\" );\n    }\n\n    if ( !Const.isEmpty( XMLHandler.getTagValue( entrynode, \"suppress_output_key\" ) ) ) {\n      suppressOutputKey = XMLHandler.getTagValue( entrynode, \"suppress_output_key\" ).equalsIgnoreCase( \"Y\" );\n    }\n    if ( !Const.isEmpty( XMLHandler.getTagValue( entrynode, \"suppress_output_value\" ) ) ) {\n      suppressOutputValue = XMLHandler.getTagValue( entrynode, \"suppress_output_value\" ).equalsIgnoreCase( \"Y\" );\n    }\n    outputFormatClass = XMLHandler.getTagValue( entrynode, \"output_format_class\" ); //$NON-NLS-1$\n\n    namedCluster =\n      namedClusterLoadSaveUtil.loadClusterConfig( namedClusterService, null, rep, metaStore, entrynode, log );\n    setRepository( rep );\n    numMapTasks = XMLHandler.getTagValue( entrynode, \"num_map_tasks\" ); //$NON-NLS-1$\n    numReduceTasks = XMLHandler.getTagValue( entrynode, \"num_reduce_tasks\" ); //$NON-NLS-1$\n\n    // How many user defined elements?\n    userDefined = new ArrayList<UserDefinedItem>();\n    Node userDefinedList = XMLHandler.getSubNode( entrynode, \"user_defined_list\" ); //$NON-NLS-1$\n    int nrUserDefined = XMLHandler.countNodes( userDefinedList, \"user_defined\" ); //$NON-NLS-1$\n    for ( int i = 0; i < nrUserDefined; i++ ) {\n      Node userDefinedNode = XMLHandler.getSubNodeByNr( userDefinedList, \"user_defined\", i ); //$NON-NLS-1$\n      String name = XMLHandler.getTagValue( userDefinedNode, \"name\" ); //$NON-NLS-1$\n      String value = XMLHandler.getTagValue( userDefinedNode, \"value\" ); //$NON-NLS-1$\n      UserDefinedItem item = new UserDefinedItem();\n      item.setName( name );\n      item.setValue( value );\n      userDefined.add( item );\n    }\n  }\n\n  @Override\n  public 
String getXML() {\n    StringBuilder retval = new StringBuilder( 1024 );\n    retval.append( super.getXML() );\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"hadoop_job_name\", hadoopJobName ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" )\n        .append( XMLHandler.addTagValue( \"map_trans_repo_dir\", mapRepositoryDir ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" )\n        .append( XMLHandler.addTagValue( \"map_trans_repo_file\", mapRepositoryFile ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval\n        .append( \"      \" ).append( XMLHandler.addTagValue( \"map_trans_repo_reference\",\n        mapRepositoryReference == null ? null : mapRepositoryReference.toString() ) ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"map_trans\", mapTrans ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"combiner_trans_repo_dir\", combinerRepositoryDir ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" ).append(\n        XMLHandler.addTagValue( \"combiner_trans_repo_file\", combinerRepositoryFile ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval\n        .append( \"      \" ).append( XMLHandler.addTagValue( \"combiner_trans_repo_reference\",\n        combinerRepositoryReference == null ? 
null : combinerRepositoryReference.toString() ) ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"combiner_trans\", combinerTrans ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" ).append(\n      XMLHandler.addTagValue( \"combiner_single_threaded\", combiningSingleThreaded ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"reduce_trans_repo_dir\", reduceRepositoryDir ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" )\n        .append( XMLHandler.addTagValue( \"reduce_trans_repo_file\", reduceRepositoryFile ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval\n        .append( \"      \" ).append( XMLHandler.addTagValue( \"reduce_trans_repo_reference\",\n        reduceRepositoryReference == null ? null : reduceRepositoryReference.toString() ) ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"reduce_trans\", reduceTrans ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"reduce_single_threaded\", reducingSingleThreaded ) ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"map_input_step_name\", mapInputStepName ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"map_output_step_name\", mapOutputStepName ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" ).append(\n      XMLHandler.addTagValue( \"combiner_input_step_name\", combinerInputStepName ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" ).append(\n      XMLHandler.addTagValue( \"combiner_output_step_name\", combinerOutputStepName ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"reduce_input_step_name\", reduceInputStepName ) ); 
//$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"reduce_output_step_name\", reduceOutputStepName ) ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"blocking\", blocking ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"logging_interval\", loggingInterval ) ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"input_path\", inputPath ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"input_format_class\", inputFormatClass ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"output_path\", outputPath ) ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"clean_output_path\", cleanOutputPath ) ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"suppress_output_map_key\", suppressOutputMapKey ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" ).append(\n      XMLHandler.addTagValue( \"suppress_output_map_value\", suppressOutputMapValue ) ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"suppress_output_key\", suppressOutputKey ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"suppress_output_value\", suppressOutputValue ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"output_format_class\", outputFormatClass ) ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    namedClusterLoadSaveUtil.getXmlNamedCluster( namedCluster, namedClusterService, metaStore, log, retval );\n\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"num_map_tasks\", numMapTasks ) ); 
//$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"num_reduce_tasks\", numReduceTasks ) ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    retval.append( \"      <user_defined_list>\" ).append( Const.CR ); //$NON-NLS-1$\n    if ( userDefined != null ) {\n      for ( UserDefinedItem item : userDefined ) {\n        if ( item.getName() != null\n          && !\"\".equals( item.getName() ) && item.getValue() != null && !\"\"\n          .equals( item.getValue() ) ) { //$NON-NLS-1$ //$NON-NLS-2$\n          retval.append( \"        <user_defined>\" ).append( Const.CR ); //$NON-NLS-1$\n          retval.append( \"          \" )\n            .append( XMLHandler.addTagValue( \"name\", item.getName() ) ); //$NON-NLS-1$ //$NON-NLS-2$\n          retval.append( \"          \" )\n            .append( XMLHandler.addTagValue( \"value\", item.getValue() ) ); //$NON-NLS-1$ //$NON-NLS-2$\n          retval.append( \"        </user_defined>\" ).append( Const.CR ); //$NON-NLS-1$\n        }\n      }\n    }\n    retval.append( \"      </user_defined_list>\" ).append( Const.CR ); //$NON-NLS-1$\n    return retval.toString();\n  }\n\n  @Override\n  public void loadRep( Repository rep, IMetaStore metaStore, ObjectId id_jobentry, List<DatabaseMeta> databases,\n                       List<SlaveServer> slaveServers ) throws KettleException {\n    if ( rep != null ) {\n      setHadoopJobName( rep.getJobEntryAttributeString( id_jobentry, \"hadoop_job_name\" ) ); //$NON-NLS-1$\n\n      setMapRepositoryDir( rep.getJobEntryAttributeString( id_jobentry, \"map_trans_repo_dir\" ) ); //$NON-NLS-1$\n      setMapRepositoryFile( rep.getJobEntryAttributeString( id_jobentry, \"map_trans_repo_file\" ) ); //$NON-NLS-1$\n      String mapTransId = rep.getJobEntryAttributeString( id_jobentry, \"map_trans_repo_reference\" ); //$NON-NLS-1$\n      setMapRepositoryReference( Const.isEmpty( mapTransId ) ? 
null : new StringObjectId( mapTransId ) );\n      setMapTrans( rep.getJobEntryAttributeString( id_jobentry, \"map_trans\" ) ); //$NON-NLS-1$\n\n      setReduceRepositoryDir( rep.getJobEntryAttributeString( id_jobentry, \"reduce_trans_repo_dir\" ) ); //$NON-NLS-1$\n      setReduceRepositoryFile( rep.getJobEntryAttributeString( id_jobentry, \"reduce_trans_repo_file\" ) ); //$NON-NLS-1$\n      String reduceTransId = rep.getJobEntryAttributeString( id_jobentry, \"reduce_trans_repo_reference\" ); //$NON-NLS-1$\n      setReduceRepositoryReference( Const.isEmpty( reduceTransId ) ? null : new StringObjectId( reduceTransId ) );\n      setReduceTrans( rep.getJobEntryAttributeString( id_jobentry, \"reduce_trans\" ) ); //$NON-NLS-1$\n      setReducingSingleThreaded(\n        rep.getJobEntryAttributeBoolean( id_jobentry, \"reduce_single_threaded\", false ) ); //$NON-NLS-1$\n\n      setCombinerRepositoryDir(\n        rep.getJobEntryAttributeString( id_jobentry, \"combiner_trans_repo_dir\" ) ); //$NON-NLS-1$\n      setCombinerRepositoryFile(\n        rep.getJobEntryAttributeString( id_jobentry, \"combiner_trans_repo_file\" ) ); //$NON-NLS-1$\n      String combinerTransId =\n        rep.getJobEntryAttributeString( id_jobentry, \"combiner_trans_repo_reference\" ); //$NON-NLS-1$\n      setCombinerRepositoryReference( Const.isEmpty( combinerTransId ) ? 
null : new StringObjectId( combinerTransId ) );\n      setCombinerTrans( rep.getJobEntryAttributeString( id_jobentry, \"combiner_trans\" ) ); //$NON-NLS-1$\n      setCombiningSingleThreaded(\n        rep.getJobEntryAttributeBoolean( id_jobentry, \"combiner_single_threaded\", false ) ); //$NON-NLS-1$\n\n      setMapInputStepName( rep.getJobEntryAttributeString( id_jobentry, \"map_input_step_name\" ) ); //$NON-NLS-1$\n      setMapOutputStepName( rep.getJobEntryAttributeString( id_jobentry, \"map_output_step_name\" ) ); //$NON-NLS-1$\n      setCombinerInputStepName(\n        rep.getJobEntryAttributeString( id_jobentry, \"combiner_input_step_name\" ) ); //$NON-NLS-1$\n      setCombinerOutputStepName(\n        rep.getJobEntryAttributeString( id_jobentry, \"combiner_output_step_name\" ) ); //$NON-NLS-1$\n      setReduceInputStepName( rep.getJobEntryAttributeString( id_jobentry, \"reduce_input_step_name\" ) ); //$NON-NLS-1$\n      setReduceOutputStepName( rep.getJobEntryAttributeString( id_jobentry, \"reduce_output_step_name\" ) ); //$NON-NLS-1$\n\n      setBlocking( rep.getJobEntryAttributeBoolean( id_jobentry, \"blocking\" ) ); //$NON-NLS-1$\n      setLoggingInterval( rep.getJobEntryAttributeString( id_jobentry, \"logging_interval\" ) ); //$NON-NLS-1$\n\n      setInputPath( rep.getJobEntryAttributeString( id_jobentry, \"input_path\" ) ); //$NON-NLS-1$\n      setInputFormatClass( rep.getJobEntryAttributeString( id_jobentry, \"input_format_class\" ) ); //$NON-NLS-1$\n      setOutputPath( rep.getJobEntryAttributeString( id_jobentry, \"output_path\" ) ); //$NON-NLS-1$\n      setCleanOutputPath( rep.getJobEntryAttributeBoolean( id_jobentry, \"clean_output_path\" ) ); //$NON-NLS-1$\n\n      setSuppressOutputOfMapKey(\n        rep.getJobEntryAttributeBoolean( id_jobentry, \"suppress_output_map_key\" ) ); //$NON-NLS-1$\n      setSuppressOutputOfMapValue(\n        rep.getJobEntryAttributeBoolean( id_jobentry, \"suppress_output_map_value\" ) ); //$NON-NLS-1$\n\n      
setSuppressOutputOfKey( rep.getJobEntryAttributeBoolean( id_jobentry, \"suppress_output_key\" ) ); //$NON-NLS-1$\n      setSuppressOutputOfValue( rep.getJobEntryAttributeBoolean( id_jobentry, \"suppress_output_value\" ) ); //$NON-NLS-1$\n      setOutputFormatClass( rep.getJobEntryAttributeString( id_jobentry, \"output_format_class\" ) ); //$NON-NLS-1$\n\n      namedCluster =\n        namedClusterLoadSaveUtil.loadClusterConfig( namedClusterService, id_jobentry, rep, metaStore, null, log );\n      setRepository( rep );\n      setNumMapTasks( rep.getJobEntryAttributeString( id_jobentry, \"num_map_tasks\" ) );\n      setNumReduceTasks( rep.getJobEntryAttributeString( id_jobentry, \"num_reduce_tasks\" ) );\n\n      int argnr = rep.countNrJobEntryAttributes( id_jobentry, \"user_defined_name\" ); //$NON-NLS-1$\n      if ( argnr > 0 ) {\n        userDefined = new ArrayList<UserDefinedItem>();\n\n        UserDefinedItem item = null;\n        for ( int i = 0; i < argnr; i++ ) {\n          item = new UserDefinedItem();\n          item.setName( rep.getJobEntryAttributeString( id_jobentry, i, \"user_defined_name\" ) ); //$NON-NLS-1$\n          item.setValue( rep.getJobEntryAttributeString( id_jobentry, i, \"user_defined_value\" ) ); //$NON-NLS-1$\n          userDefined.add( item );\n        }\n      }\n\n    } else {\n      throw new KettleException( \"Unable to load from a repository. 
The repository is null.\" ); //$NON-NLS-1$\n    }\n  }\n\n  public void saveRep( Repository rep, ObjectId id_job ) throws KettleException {\n    if ( rep != null ) {\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"hadoop_job_name\", hadoopJobName ); //$NON-NLS-1$\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"map_trans_repo_dir\", mapRepositoryDir ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"map_trans_repo_file\", mapRepositoryFile ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"map_trans_repo_reference\",\n          mapRepositoryReference == null ? null : mapRepositoryReference.toString() ); //$NON-NLS-1$\n      mapTrans = mapTrans != null && mapTrans.endsWith( KTR_EXT ) ? mapTrans.replace( KTR_EXT, \"\" ) : mapTrans;\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"map_trans\", mapTrans ); //$NON-NLS-1$\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"reduce_trans_repo_dir\", reduceRepositoryDir ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"reduce_trans_repo_file\", reduceRepositoryFile ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"reduce_trans_repo_reference\",\n          reduceRepositoryReference == null ? null : reduceRepositoryReference.toString() ); //$NON-NLS-1$\n      reduceTrans = reduceTrans != null && reduceTrans.endsWith( KTR_EXT ) ? 
reduceTrans.replace( KTR_EXT, \"\" ) : reduceTrans;\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"reduce_trans\", reduceTrans ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"reduce_single_threaded\", reducingSingleThreaded ); //$NON-NLS-1$\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"combiner_trans_repo_dir\", combinerRepositoryDir ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"combiner_trans_repo_file\", combinerRepositoryFile ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"combiner_trans_repo_reference\",\n          combinerRepositoryReference == null ? null : combinerRepositoryReference.toString() ); //$NON-NLS-1$\n      combinerTrans = combinerTrans != null && combinerTrans.endsWith( KTR_EXT ) ? combinerTrans.replace( KTR_EXT, \"\" ) : combinerTrans;\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"combiner_trans\", combinerTrans ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"combiner_single_threaded\", combiningSingleThreaded ); //$NON-NLS-1$\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"map_input_step_name\", mapInputStepName ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"map_output_step_name\", mapOutputStepName ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"combiner_input_step_name\",\n        combinerInputStepName ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"combiner_output_step_name\",\n        combinerOutputStepName ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"reduce_input_step_name\", reduceInputStepName ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"reduce_output_step_name\", reduceOutputStepName ); //$NON-NLS-1$\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"blocking\", blocking ); //$NON-NLS-1$\n      
rep.saveJobEntryAttribute( id_job, getObjectId(), \"logging_interval\", loggingInterval ); //$NON-NLS-1$\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"input_path\", inputPath ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"input_format_class\", inputFormatClass ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"output_path\", outputPath ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"clean_output_path\", cleanOutputPath ); //$NON-NLS-1$\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"suppress_output_map_key\", suppressOutputMapKey ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"suppress_output_map_value\",\n        suppressOutputMapValue ); //$NON-NLS-1$\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"suppress_output_key\", suppressOutputKey ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"suppress_output_value\", suppressOutputValue ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"output_format_class\", outputFormatClass ); //$NON-NLS-1$\n\n      namedClusterLoadSaveUtil\n        .saveNamedClusterRep( namedCluster, namedClusterService, rep, metaStore, id_job, getObjectId(), log );\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"num_map_tasks\", numMapTasks ); //$NON-NLS-1$\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"num_reduce_tasks\", numReduceTasks ); //$NON-NLS-1$\n\n      if ( userDefined != null ) {\n        for ( int i = 0; i < userDefined.size(); i++ ) {\n          UserDefinedItem item = userDefined.get( i );\n          if ( item.getName() != null\n            && !\"\".equals( item.getName() ) && item.getValue() != null && !\"\"\n            .equals( item.getValue() ) ) { //$NON-NLS-1$ //$NON-NLS-2$\n            rep.saveJobEntryAttribute( id_job, getObjectId(), i, \"user_defined_name\", item.getName() ); //$NON-NLS-1$\n           
 rep.saveJobEntryAttribute( id_job, getObjectId(), i, \"user_defined_value\", item.getValue() ); //$NON-NLS-1$\n          }\n        }\n      }\n\n    } else {\n      throw new KettleException( \"Unable to save to a repository. The repository is null.\" ); //$NON-NLS-1$\n    }\n  }\n\n  public boolean evaluates() {\n    return true;\n  }\n\n  public boolean isUnconditional() {\n    return true;\n  }\n\n  /**\n   * @return the reduceSingleThreaded\n   */\n  public boolean isReducingSingleThreaded() {\n    return reducingSingleThreaded;\n  }\n\n  /**\n   * @param reducingSingleThreaded\n   *          the reducing single threaded to set\n   */\n  public void setReducingSingleThreaded( boolean reducingSingleThreaded ) {\n    this.reducingSingleThreaded = reducingSingleThreaded;\n  }\n\n  public boolean isCombiningSingleThreaded() {\n    return combiningSingleThreaded;\n  }\n\n  public void setCombiningSingleThreaded( boolean combiningSingleThreaded ) {\n    this.combiningSingleThreaded = combiningSingleThreaded;\n  }\n\n  private boolean hasMapperDefinition() {\n    return !Const.isEmpty( mapTrans ) || mapRepositoryReference != null\n      || ( !Const.isEmpty( mapRepositoryDir ) && !Const.isEmpty( mapRepositoryFile ) );\n  }\n\n  private boolean hasReducerDefinition() {\n    return !Const.isEmpty( reduceTrans ) || reduceRepositoryReference != null\n      || ( !Const.isEmpty( reduceRepositoryDir ) && !Const.isEmpty( reduceRepositoryFile ) );\n  }\n\n  private boolean hasCombinerDefinition() {\n    return !Const.isEmpty( combinerTrans ) || combinerRepositoryReference != null\n      || ( !Const.isEmpty( combinerRepositoryDir ) && !Const.isEmpty( combinerRepositoryFile ) );\n  }\n\n  private ObjectLocationSpecificationMethod defineSpecificationMethod( String repDir, String repFileName, ObjectId reference ) {\n    if ( reference != null ) {\n      return ObjectLocationSpecificationMethod.REPOSITORY_BY_REFERENCE;\n    }\n    return 
ObjectLocationSpecificationMethod.REPOSITORY_BY_NAME;\n  }\n\n  /**\n   * @return The objects referenced in the step, like a transformation, a job, a mapper, a reducer, a combiner, ...\n   */\n  public String[] getReferencedObjectDescriptions() {\n    return new String[] { BaseMessages.getString( PKG, \"JobEntryHadoopTransJobExecutor.ReferencedObject.Mapper\" ),\n      BaseMessages.getString( PKG, \"JobEntryHadoopTransJobExecutor.ReferencedObject.Combiner\" ),\n      BaseMessages.getString( PKG, \"JobEntryHadoopTransJobExecutor.ReferencedObject.Reducer\" ), };\n  }\n\n  /**\n   * @return true for each referenced object that is enabled or has a valid reference definition.\n   */\n  public boolean[] isReferencedObjectEnabled() {\n    return new boolean[] { hasMapperDefinition(), hasCombinerDefinition(), hasReducerDefinition(), };\n  }\n\n  /**\n   * Load the referenced object\n   *\n   * @param index\n   *          the referenced object index to load (in case there are multiple references)\n   * @param rep\n   *          the repository\n   * @param space\n   *          the variable space to use\n   * @return the referenced object once loaded\n   * @throws KettleException\n   */\n  public Object loadReferencedObject( Bowl bowl, int index, Repository rep, IMetaStore metaStore, VariableSpace space ) throws KettleException {\n    switch ( index ) {\n      case 0:\n        return loadTransMeta( bowl, space, rep, mapTrans, mapRepositoryReference, mapRepositoryDir,\n          mapRepositoryFile );\n      case 1:\n        return loadTransMeta( bowl, space, rep, combinerTrans, combinerRepositoryReference,\n          combinerRepositoryDir, combinerRepositoryFile );\n      case 2:\n        return loadTransMeta( bowl, space, rep, reduceTrans, reduceRepositoryReference,\n          reduceRepositoryDir, reduceRepositoryFile );\n    }\n    return null;\n\n  }\n\n  /**\n   * Exports the object to a flat-file system, adding content with filename keys to a set of definitions. 
The supplied\n   * resource naming interface allows the object to be named appropriately without worrying about\n   * implementation-specific details.\n   *\n   * @param executionBowl\n   *          For file access\n   * @param globalManagementBowl\n   *          if needed for access to the current \"global\" (System or Repository) level config for export. If null, no\n   *          global config will be exported.\n   * @param space\n   *          The variable space to resolve (environment) variables with.\n   * @param definitions\n   *          The map containing the filenames and content\n   * @param namingInterface\n   *          The resource naming interface allows the object to be named appropriately\n   * @param repository\n   *          The repository to load resources from\n   * @param metaStore\n   *          the metaStore to load external metadata from\n   * @return The filename for this object. (also contained in the definitions map)\n   * @throws KettleException\n   *           in case something goes wrong during the export\n   */\n  @Override\n  public String exportResources( Bowl executionBowl, Bowl globalManagementBowl, VariableSpace space,\n      Map<String, ResourceDefinition> definitions, ResourceNamingInterface namingInterface, Repository repository,\n      IMetaStore metaStore ) throws KettleException {\n\n    // Try to load the transformation from repository or file.\n\n    // Modify this recursively too...\n    //\n    // AGAIN: there is no need to clone this job entry because the caller is responsible for this.\n\n    copyVariablesFrom( space );\n\n    boolean[] enabled = isReferencedObjectEnabled();\n    TransMeta transMeta;\n    for ( int i = 0; i < enabled.length; i++ ) {\n      if ( enabled[ i ] ) {\n        //\n        // First load the transformation metadata...\n        //\n        transMeta = (TransMeta) loadReferencedObject( executionBowl, i, repository, metaStore, space );\n        // Also go down into the 
transformation and export the files there. (mapping recursively down)\n        //\n        String proposedNewFilename =\n          transMeta.exportResources( executionBowl, globalManagementBowl, transMeta, definitions, namingInterface,\n            repository, metaStore );\n        // To get a relative path to it, we inject ${Internal.Job.Filename.Directory}\n        //\n        String newFilename = \"${\" + Const.INTERNAL_VARIABLE_JOB_FILENAME_DIRECTORY + \"}/\" + proposedNewFilename;\n\n        // Set the correct filename inside the XML.\n        //\n        transMeta.setFilename( newFilename );\n\n        // exports always reside in the root directory, in case we want to turn this into a file repository...\n        //\n        transMeta.setRepositoryDirectory( new RepositoryDirectory() );\n\n        // export to filename ALWAYS (this allows the exported XML to be executed remotely)\n        // change it in the job entry\n        setSpecificationMethodAndValue( i, ObjectLocationSpecificationMethod.FILENAME, newFilename, null, null );\n      }\n    }\n\n    return getHadoopJobName();\n  }\n\n  private void setSpecificationMethodAndValue( int i, ObjectLocationSpecificationMethod specification, String filename,\n                                               String repositoryDir, ObjectId referrence ) {\n    switch ( specification ) {\n      case FILENAME: {\n        switch ( i ) {\n          case 0: {\n            setMapTrans( filename );\n            break;\n          }\n          case 1: {\n            setCombinerTrans( filename );\n            break;\n          }\n          case 2: {\n            setReduceTrans( filename );\n            break;\n          }\n        }\n        break;\n      }\n      case REPOSITORY_BY_NAME: {\n        switch ( i ) {\n          case 0: {\n            setMapRepositoryDir( repositoryDir );\n            setMapRepositoryFile( filename );\n            break;\n          }\n          case 1: {\n            setCombinerRepositoryDir( 
repositoryDir );\n            setCombinerRepositoryFile( filename );\n            break;\n          }\n          case 2: {\n            setReduceRepositoryDir( repositoryDir );\n            setReduceRepositoryFile( filename );\n            break;\n          }\n        }\n        break;\n      }\n      case REPOSITORY_BY_REFERENCE: {\n        switch ( i ) {\n          case 0: {\n            setMapRepositoryReference( referrence );\n            break;\n          }\n          case 1: {\n            setCombinerRepositoryReference( referrence );\n            break;\n          }\n          case 2: {\n            setReduceRepositoryReference( referrence );\n            break;\n          }\n        }\n        break;\n      }\n    }\n  }\n\n  public NamedClusterService getNamedClusterService() {\n    return namedClusterService;\n  }\n\n  public NamedCluster getNamedCluster() {\n    return namedCluster;\n  }\n\n  public void setNamedCluster( NamedCluster namedCluster ) {\n    this.namedCluster = namedCluster;\n  }\n\n  public RuntimeTester getRuntimeTester() {\n    return runtimeTester;\n  }\n\n  public RuntimeTestActionService getRuntimeTestActionService() {\n    return runtimeTestActionService;\n  }\n\n  @Override public String getDialogClassName() {\n    return DIALOG_NAME;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/step/enter/HadoopEnterMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.step.enter;\n\nimport org.pentaho.big.data.kettle.plugins.mapreduce.DialogClassUtil;\nimport org.pentaho.di.core.annotations.Step;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.injection.InjectionSupported;\nimport org.pentaho.di.trans.steps.injector.InjectorMeta;\n\n@Step( id = \"HadoopEnterPlugin\", image = \"MRI.svg\", name = \"HadoopEnterPlugin.Name\",\n    description = \"HadoopEnterPlugin.Description\",\n    documentationUrl = \"pdi-transformation-steps-reference-overview/mapreduce-input\",\n    categoryDescription = \"i18n:org.pentaho.di.trans.step:BaseStep.Category.BigData\",\n    i18nPackageName = \"org.pentaho.di.trans.steps.hadoopenter\" )\n@InjectionSupported( localizationPrefix = \"HadoopEnterPlugin.Injection.\" )\npublic class HadoopEnterMeta extends InjectorMeta {\n  @SuppressWarnings( \"unused\" )\n  private static Class<?> PKG = HadoopEnterMeta.class; // for i18n purposes, needed by Translator2!! 
$NON-NLS-1$\n  public static final String DIALOG_NAME = DialogClassUtil.getDialogClassName( PKG );\n\n  public static final String KEY_FIELDNAME = \"key\";\n  public static final String VALUE_FIELDNAME = \"value\";\n\n  public HadoopEnterMeta() throws Throwable {\n    setDefault();\n  }\n\n  @Override public void setDefault() {\n    allocate( 2 );\n\n    getFieldname()[ 0 ] = HadoopEnterMeta.KEY_FIELDNAME;\n    getFieldname()[ 1 ] = HadoopEnterMeta.VALUE_FIELDNAME;\n  }\n\n  @Override public String getDialogClassName() {\n    return DIALOG_NAME;\n  }\n\n  @Injection( name = \"KEY_TYPE\" )\n  public void setKeyType( int type ) {\n    getType()[0] = type;\n  }\n\n  @Injection( name = \"KEY_LENGTH\" )\n  public void setKeyLength( int length ) {\n    getLength()[0] = length;\n  }\n\n  @Injection( name = \"KEY_PRECISION\" )\n  public void setKeyPrecision( int precision ) {\n    getPrecision()[0] = precision;\n  }\n\n  @Injection( name = \"VALUE_TYPE\" )\n  public void setValueType( int type ) {\n    getType()[1] = type;\n  }\n\n  @Injection( name = \"VALUE_LENGTH\" )\n  public void setValueLength( int length ) {\n    getLength()[1] = length;\n  }\n\n  @Injection( name = \"VALUE_PRECISION\" )\n  public void setValuePrecision( int precision ) {\n    getPrecision()[1] = precision;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/step/exit/HadoopExit.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.step.exit;\n\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStep;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\n\npublic class HadoopExit extends BaseStep implements StepInterface {\n  private static final Class<?> PKG = HadoopExit.class;\n\n  private HadoopExitMeta meta;\n  private HadoopExitData data;\n\n  public HadoopExit( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta,\n      Trans trans ) {\n    super( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n  }\n\n  public void runtimeInit() throws KettleException {\n    data.init( getTransMeta().getBowl(), getInputRowMeta(), meta, this );\n  }\n\n  public boolean processRow( StepMetaInterface smi, StepDataInterface sdi ) throws KettleException {\n    meta = (HadoopExitMeta) smi;\n    data = (HadoopExitData) sdi;\n\n    Object[] r = getRow();\n    if ( r == null ) {\n      // no more input to be expected...\n      setOutputDone();\n      return false;\n    }\n\n    if ( first ) {\n      runtimeInit();\n      first = false;\n    }\n\n    Object[] outputRow = new Object[2];\n    outputRow[HadoopExitData.getOutKeyOrdinal()] = r[data.getInKeyOrdinal()];\n    
outputRow[HadoopExitData.getOutValueOrdinal()] = r[data.getInValueOrdinal()];\n\n    putRow( data.getOutputRowMeta(), outputRow );\n\n    if ( checkFeedback( getLinesRead() ) ) {\n      logBasic( BaseMessages.getString( PKG, \"HadoopExit.Linenr\", getLinesRead() ) );\n    }\n\n    return true;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/step/exit/HadoopExitData.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.step.exit;\n\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.trans.step.BaseStepData;\nimport org.pentaho.di.trans.step.StepDataInterface;\n\npublic class HadoopExitData extends BaseStepData implements StepDataInterface {\n  private RowMetaInterface outputRowMeta = null;\n\n  private int inKeyOrdinal = -1;\n  private int inValueOrdinal = -1;\n\n  public static final int outKeyOrdinal = 0;\n  public static final int outValueOrdinal = 1;\n\n  public HadoopExitData() {\n    super();\n  }\n\n  public void init( Bowl bowl, RowMetaInterface rowMeta, HadoopExitMeta stepMeta, VariableSpace space )\n      throws KettleException {\n    if ( rowMeta != null ) {\n      outputRowMeta = rowMeta.clone();\n      stepMeta.getFields( bowl, outputRowMeta, stepMeta.getName(), null, null, space );\n\n      setInKeyOrdinal( rowMeta.indexOfValue( stepMeta.getOutKeyFieldname() ) );\n      setInValueOrdinal( rowMeta.indexOfValue( stepMeta.getOutValueFieldname() ) );\n    }\n  }\n\n  public RowMetaInterface getOutputRowMeta() {\n    return outputRowMeta;\n  }\n\n  public void setInKeyOrdinal( int inKeyOrdinal ) {\n    this.inKeyOrdinal = inKeyOrdinal;\n  }\n\n  public int getInKeyOrdinal() {\n    return inKeyOrdinal;\n  }\n\n  public void setInValueOrdinal( int inValueOrdinal ) {\n    this.inValueOrdinal = inValueOrdinal;\n  }\n\n  public int 
getInValueOrdinal() {\n    return inValueOrdinal;\n  }\n\n  public static int getOutKeyOrdinal() {\n    return outKeyOrdinal;\n  }\n\n  public static int getOutValueOrdinal() {\n    return outValueOrdinal;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/step/exit/HadoopExitMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.step.exit;\n\nimport org.pentaho.big.data.kettle.plugins.mapreduce.DialogClassUtil;\nimport org.pentaho.di.core.CheckResult;\nimport org.pentaho.di.core.CheckResultInterface;\nimport org.pentaho.di.core.annotations.Step;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleStepException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.injection.InjectionSupported;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.w3c.dom.Node;\n\nimport java.util.Arrays;\nimport java.util.List;\n\n@Step( id = \"HadoopExitPlugin\", image = \"MRO.svg\", name = \"HadoopExitPlugin.Name\",\n    description = \"HadoopExitPlugin.Description\",\n    
documentationUrl = \"pdi-transformation-steps-reference-overview/mapreduce-output\",\n    categoryDescription = \"i18n:org.pentaho.di.trans.step:BaseStep.Category.BigData\",\n    i18nPackageName = \"org.pentaho.di.trans.steps.hadoopexit\" )\n@InjectionSupported( localizationPrefix = \"HadoopExitPlugin.Injection.\" )\npublic class HadoopExitMeta extends BaseStepMeta implements StepMetaInterface {\n  public static final String ERROR_INVALID_KEY_FIELD = \"Error.InvalidKeyField\";\n  public static final String ERROR_INVALID_VALUE_FIELD = \"Error.InvalidValueField\";\n  public static final String OUT_KEY = \"outKey\";\n  public static final String OUT_VALUE = \"outValue\";\n  public static final String HADOOP_EXIT_META_CHECK_RESULT_NO_DATA_STREAM = \"HadoopExitMeta.CheckResult.NoDataStream\";\n  public static final String HADOOP_EXIT_META_CHECK_RESULT_NO_SPECIFIED_FIELDS =\n    \"HadoopExitMeta.CheckResult.NoSpecifiedFields\";\n  public static final String HADOOP_EXIT_META_CHECK_RESULT_STEP_RECEVING_DATA =\n    \"HadoopExitMeta.CheckResult.StepRecevingData\";\n  public static final String HADOOP_EXIT_META_CHECK_RESULT_NOT_RECEVING_SPECIFIED_FIELDS =\n    \"HadoopExitMeta.CheckResult.NotRecevingSpecifiedFields\";\n  public static Class<?> PKG = HadoopExit.class; // for i18n purposes, needed by Translator2!! 
$NON-NLS-1$\n  public static final String DIALOG_NAME = DialogClassUtil.getDialogClassName( PKG );\n\n  public static String OUT_KEY_FIELDNAME = \"outkeyfieldname\";\n\n  public static String OUT_VALUE_FIELDNAME = \"outvaluefieldname\";\n\n  @Injection( name = \"KEY_FIELD\" )\n  private String outKeyFieldname;\n\n  @Injection( name = \"VALUE_FIELD\" )\n  private String outValueFieldname;\n\n  public HadoopExitMeta() throws Throwable {\n    super();\n  }\n\n  @Override public void loadXML( Node stepnode, List<DatabaseMeta> databases, IMetaStore metaStore )\n    throws KettleXMLException {\n    setOutKeyFieldname( XMLHandler.getTagValue( stepnode, HadoopExitMeta.OUT_KEY_FIELDNAME ) ); //$NON-NLS-1$\n    setOutValueFieldname( XMLHandler.getTagValue( stepnode, HadoopExitMeta.OUT_VALUE_FIELDNAME ) ); //$NON-NLS-1$\n  }\n\n  @Override public String getXML() {\n    StringBuilder retval = new StringBuilder();\n\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( HadoopExitMeta.OUT_KEY_FIELDNAME, getOutKeyFieldname() ) );\n    retval.append( \"    \" )\n        .append( XMLHandler.addTagValue( HadoopExitMeta.OUT_VALUE_FIELDNAME, getOutValueFieldname() ) );\n\n    return retval.toString();\n  }\n\n  public Object clone() {\n    return super.clone();\n  }\n\n  @Override public void setDefault() {\n    setOutKeyFieldname( null );\n    setOutValueFieldname( null );\n  }\n\n  @Override public void readRep( Repository rep, IMetaStore metaStore, ObjectId id_step, List<DatabaseMeta> databases )\n    throws KettleException {\n    setOutKeyFieldname( rep.getStepAttributeString( id_step, HadoopExitMeta.OUT_KEY_FIELDNAME ) ); //$NON-NLS-1$\n    setOutValueFieldname( rep.getStepAttributeString( id_step, HadoopExitMeta.OUT_VALUE_FIELDNAME ) ); //$NON-NLS-1$\n  }\n\n  @Override public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_transformation, ObjectId id_step ) throws KettleException {\n    rep.saveStepAttribute( id_transformation, id_step, 
HadoopExitMeta.OUT_KEY_FIELDNAME, getOutKeyFieldname() ); //$NON-NLS-1$\n    rep.saveStepAttribute( id_transformation, id_step, HadoopExitMeta.OUT_VALUE_FIELDNAME, getOutValueFieldname() ); //$NON-NLS-1$\n  }\n\n  @Override public void getFields( Bowl bowl, RowMetaInterface rowMeta, String origin, RowMetaInterface[] info,\n      StepMeta nextStep, VariableSpace space ) throws KettleStepException {\n\n    ValueMetaInterface key = rowMeta.searchValueMeta( getOutKeyFieldname() );\n    ValueMetaInterface value = rowMeta.searchValueMeta( getOutValueFieldname() );\n\n    if ( key == null ) {\n      throw new KettleStepException( BaseMessages.getString( PKG, ERROR_INVALID_KEY_FIELD, getOutKeyFieldname() ) );\n    }\n    if ( value == null ) {\n      throw new KettleStepException( BaseMessages.getString( PKG, ERROR_INVALID_VALUE_FIELD, getOutValueFieldname() ) );\n    }\n\n    // The output consists of 2 fields: outKey and outValue\n    // The data types rely on the input data type so we look those up\n    //\n    ValueMetaInterface keyMeta = key.clone();\n    ValueMetaInterface valueMeta = value.clone();\n\n    keyMeta.setName( OUT_KEY );\n    valueMeta.setName( OUT_VALUE );\n\n    rowMeta.clear();\n\n    rowMeta.addValueMeta( keyMeta );\n    rowMeta.addValueMeta( valueMeta );\n  }\n\n  @Override\n  public void check( List<CheckResultInterface> remarks, TransMeta transMeta, StepMeta stepinfo, RowMetaInterface prev,\n      String[] input, String[] output, RowMetaInterface info ) {\n    CheckResult cr;\n\n    // Make sure we have an input stream that contains the desired field names\n    if ( prev == null || prev.size() == 0 ) {\n      cr =\n          new CheckResult( CheckResultInterface.TYPE_RESULT_ERROR, BaseMessages.getString( PKG,\n            HADOOP_EXIT_META_CHECK_RESULT_NO_DATA_STREAM ), stepinfo ); //$NON-NLS-1$\n      remarks.add( cr );\n    } else {\n      List<String> fieldnames = Arrays.asList( prev.getFieldNames() );\n\n      HadoopExitMeta stepMeta = 
(HadoopExitMeta) stepinfo.getStepMetaInterface();\n\n      if ( ( stepMeta.getOutKeyFieldname() == null ) || stepMeta.getOutValueFieldname() == null ) {\n        cr =\n            new CheckResult( CheckResultInterface.TYPE_RESULT_ERROR, BaseMessages.getString( PKG,\n              HADOOP_EXIT_META_CHECK_RESULT_NO_SPECIFIED_FIELDS, prev.size() + \"\" ), stepinfo ); //$NON-NLS-1$ //$NON-NLS-2$\n        remarks.add( cr );\n      } else {\n\n        if ( fieldnames.contains( stepMeta.getOutKeyFieldname() )\n            && fieldnames.contains( stepMeta.getOutValueFieldname() ) ) {\n          cr =\n              new CheckResult( CheckResultInterface.TYPE_RESULT_OK, BaseMessages.getString( PKG,\n                HADOOP_EXIT_META_CHECK_RESULT_STEP_RECEVING_DATA, prev.size() + \"\" ), stepinfo ); //$NON-NLS-1$ //$NON-NLS-2$\n          remarks.add( cr );\n        } else {\n          cr =\n              new CheckResult( CheckResultInterface.TYPE_RESULT_ERROR, BaseMessages.getString( PKG,\n                HADOOP_EXIT_META_CHECK_RESULT_NOT_RECEVING_SPECIFIED_FIELDS, prev.size() + \"\" ), stepinfo ); //$NON-NLS-1$ //$NON-NLS-2$\n          remarks.add( cr );\n        }\n      }\n    }\n  }\n\n  public StepInterface getStep( StepMeta stepMeta, StepDataInterface stepDataInterface, int cnr, TransMeta tr,\n      Trans trans ) {\n    return new HadoopExit( stepMeta, stepDataInterface, cnr, tr, trans );\n  }\n\n  public StepDataInterface getStepData() {\n    return new HadoopExitData();\n  }\n\n  public String getOutKeyFieldname() {\n    return outKeyFieldname;\n  }\n\n  public void setOutKeyFieldname( String arg ) {\n    outKeyFieldname = arg;\n  }\n\n  public String getOutValueFieldname() {\n    return outValueFieldname;\n  }\n\n  public void setOutValueFieldname( String arg ) {\n    outValueFieldname = arg;\n  }\n\n  @Override public String getDialogClassName() {\n    return DIALOG_NAME;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/ui/entry/hadoop/JobEntryHadoopJobExecutorController.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.ui.entry.hadoop;\n\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.FileDialog;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.hadoop.JobEntryHadoopJobExecutor;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.UserDefinedItem;\nimport org.pentaho.big.data.plugins.common.ui.HadoopClusterDelegateImpl;\nimport org.pentaho.hadoop.shim.api.mapreduce.MapReduceJarInfo;\nimport org.pentaho.hadoop.shim.api.mapreduce.MapReduceService;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.plugins.JobEntryPluginType;\nimport org.pentaho.di.core.plugins.PluginInterface;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.database.dialog.tags.ExtTextbox;\nimport org.pentaho.di.ui.core.gui.WindowProperty;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport 
org.pentaho.di.ui.util.HelpUtils;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.ui.xul.XulDomException;\nimport org.pentaho.ui.xul.XulEventSourceAdapter;\nimport org.pentaho.ui.xul.components.XulMenuList;\nimport org.pentaho.ui.xul.components.XulTextbox;\nimport org.pentaho.ui.xul.containers.XulDialog;\nimport org.pentaho.ui.xul.containers.XulVbox;\nimport org.pentaho.ui.xul.impl.AbstractXulEventHandler;\nimport org.pentaho.ui.xul.jface.tags.JfaceCMenuList;\nimport org.pentaho.ui.xul.jface.tags.JfaceMenuList;\nimport org.pentaho.ui.xul.util.AbstractModelList;\n\nimport java.io.File;\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.List;\n\npublic class JobEntryHadoopJobExecutorController extends AbstractXulEventHandler {\n  public static final String JOB_ENTRY_NAME = \"jobEntryName\"; //$NON-NLS-1$\n  public static final String HADOOP_JOB_NAME = \"hadoopJobName\"; //$NON-NLS-1$\n  public static final String JAR_URL = \"jarUrl\"; //$NON-NLS-1$\n  public static final String DRIVER_CLASS = \"driverClass\"; //$NON-NLS-1$\n  public static final String DRIVER_CLASSES = \"driverClasses\"; //$NON-NLS-1$\n  public static final String IS_SIMPLE = \"isSimple\"; //$NON-NLS-1$\n  public static final String USER_DEFINED = \"userDefined\"; //$NON-NLS-1$\n  private static final Class<?> PKG = JobEntryHadoopJobExecutor.class;\n  private String jobEntryName;\n  private String hadoopJobName;\n  private String jarUrl = \"\";\n  private String driverClass = \"\";\n  private List<String> driverClasses = new ArrayList<>();\n\n  private boolean isSimple = true;\n\n  private SimpleConfiguration sConf = new SimpleConfiguration();\n  private AdvancedConfiguration aConf = new AdvancedConfiguration();\n\n  private JobEntryHadoopJobExecutor jobEntry;\n\n  private JobMeta jobMeta;\n\n  private AbstractModelList<UserDefinedItem> userDefined = new AbstractModelList<UserDefinedItem>();\n\n  private final NamedClusterService 
namedClusterService;\n\n  private final HadoopClusterDelegateImpl ncDelegate;\n\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n\n  public JobEntryHadoopJobExecutorController( HadoopClusterDelegateImpl hadoopClusterDelegate,\n                                              NamedClusterService namedClusterService,\n                                              NamedClusterServiceLocator namedClusterServiceLocator ) {\n    this.ncDelegate = hadoopClusterDelegate;\n    this.namedClusterService = namedClusterService;\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n  }\n\n  protected VariableSpace getVariableSpace() {\n    if ( Spoon.getInstance().getActiveTransformation() != null ) {\n      return Spoon.getInstance().getActiveTransformation();\n    } else if ( Spoon.getInstance().getActiveJob() != null ) {\n      return Spoon.getInstance().getActiveJob();\n    } else {\n      return new Variables();\n    }\n  }\n\n  public void accept() {\n    ExtTextbox tempBox =\n      (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-hadoopjob-name\" );\n    this.hadoopJobName = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jar-url\" );\n    this.jarUrl = ( (Text) tempBox.getTextControl() ).getText();\n\n    JfaceCMenuList tempList = (JfaceCMenuList) getXulDomContainer().getDocumentRoot().getElementById( \"driver-class\" );\n    this.driverClass = tempList.getValue();\n\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"command-line-arguments\" );\n    sConf.cmdLineArgs = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-output-key-class\" );\n    aConf.outputKeyClass = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( 
\"classes-output-value-class\" );\n    aConf.outputValueClass = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-mapper-class\" );\n    aConf.mapperClass = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-reducer-class\" );\n    aConf.reducerClass = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"input-path\" );\n    aConf.inputPath = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"output-path\" );\n    aConf.outputPath = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-input-format\" );\n    aConf.inputFormatClass = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-output-format\" );\n    aConf.outputFormatClass = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"num-map-tasks\" );\n    aConf.numMapTasks = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"num-reduce-tasks\" );\n    aConf.numReduceTasks = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"logging-interval\" );\n    aConf.loggingInterval = ( (Text) tempBox.getTextControl() ).getText();\n\n    JfaceMenuList<?> ncBox =\n      (JfaceMenuList<?>) getXulDomContainer().getDocumentRoot().getElementById( \"named-clusters\" );\n\n    if ( !isSimple() && aConf.selectedNamedCluster != null ) {\n      NamedCluster reload = 
namedClusterService.getNamedClusterByName( aConf.selectedNamedCluster.getName(), jobMeta.getMetaStore() );\n      if ( reload != null ) {\n        aConf.selectedNamedCluster = reload;\n      }\n    }\n\n    String validationErrors = \"\";\n    if ( StringUtil.isEmpty( jobEntryName ) ) {\n      validationErrors += BaseMessages.getString( PKG, \"JobEntryHadoopJobExecutor.JobEntryName.Error\" ) + \"\\n\";\n    }\n    if ( StringUtil.isEmpty( hadoopJobName ) ) {\n      validationErrors += BaseMessages.getString( PKG, \"JobEntryHadoopJobExecutor.HadoopJobName.Error\" ) + \"\\n\";\n    }\n\n    if ( !StringUtil.isEmpty( validationErrors ) ) {\n      openErrorDialog( BaseMessages.getString( PKG, \"Dialog.Error\" ), validationErrors );\n      // show validation errors dialog\n      return;\n    }\n\n    // common/simple\n    jobEntry.setName( jobEntryName );\n    jobEntry.setHadoopJobName( hadoopJobName );\n    jobEntry.setSimple( isSimple );\n    jobEntry.setJarUrl( jarUrl );\n    jobEntry.setDriverClass( driverClass );\n    jobEntry.setCmdLineArgs( sConf.getCommandLineArgs() );\n    jobEntry.setSimpleBlocking( sConf.isSimpleBlocking() );\n    jobEntry.setSimpleLoggingInterval( sConf.getSimpleLoggingInterval() );\n    // advanced config\n    jobEntry.setBlocking( aConf.isBlocking() );\n    jobEntry.setLoggingInterval( aConf.getLoggingInterval() );\n    jobEntry.setMapperClass( aConf.getMapperClass() );\n    jobEntry.setCombinerClass( aConf.getCombinerClass() );\n    jobEntry.setReducerClass( aConf.getReducerClass() );\n    jobEntry.setInputPath( aConf.getInputPath() );\n    jobEntry.setInputFormatClass( aConf.getInputFormatClass() );\n    jobEntry.setOutputPath( aConf.getOutputPath() );\n    jobEntry.setOutputKeyClass( aConf.getOutputKeyClass() );\n    jobEntry.setOutputValueClass( aConf.getOutputValueClass() );\n    jobEntry.setOutputFormatClass( aConf.getOutputFormatClass() );\n\n    jobEntry.setNamedCluster( aConf.selectedNamedCluster );\n    jobEntry.setNumMapTasks( 
aConf.getNumMapTasks() );\n    jobEntry.setNumReduceTasks( aConf.getNumReduceTasks() );\n    jobEntry.setUserDefined( userDefined );\n\n    jobEntry.setChanged();\n\n    cancel();\n  }\n\n  public void init() throws XulDomException {\n    if ( jobEntry != null ) {\n      // common/simple\n      setName( jobEntry.getName() );\n      setJobEntryName( jobEntry.getName() );\n      setHadoopJobName( jobEntry.getHadoopJobName() );\n      setSimple( jobEntry.isSimple() );\n      setJarUrl( jobEntry.getJarUrl() );\n      aConf.setSelectedNamedCluster( jobEntry.getNamedCluster() );\n      populateDriverMenuList();\n      setDriverClass( jobEntry.getDriverClass() );\n      sConf.setCommandLineArgs( jobEntry.getCmdLineArgs() );\n      sConf.setSimpleBlocking( jobEntry.isSimpleBlocking() );\n      sConf.setSimpleLoggingInterval( jobEntry.getSimpleLoggingInterval() );\n      // advanced config\n      userDefined.clear();\n      if ( jobEntry.getUserDefined() != null ) {\n        userDefined.addAll( jobEntry.getUserDefined() );\n      }\n\n      VariableSpace varSpace = getVariableSpace();\n      ExtTextbox tempBox;\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-hadoopjob-name\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jar-url\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"command-line-arguments\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-output-key-class\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-output-value-class\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) 
getXulDomContainer().getDocumentRoot().getElementById( \"classes-mapper-class\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-combiner-class\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-reducer-class\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"input-path\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"output-path\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-input-format\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-output-format\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"num-map-tasks\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"num-reduce-tasks\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"logging-interval\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"simple-logging-interval\" );\n      tempBox.setVariableSpace( varSpace );\n\n      aConf.setBlocking( jobEntry.isBlocking() );\n      aConf.setLoggingInterval( jobEntry.getLoggingInterval() );\n      aConf.setMapperClass( jobEntry.getMapperClass() );\n      aConf.setCombinerClass( jobEntry.getCombinerClass() );\n      aConf.setReducerClass( jobEntry.getReducerClass() );\n      
aConf.setInputPath( jobEntry.getInputPath() );\n      aConf.setInputFormatClass( jobEntry.getInputFormatClass() );\n      aConf.setOutputPath( jobEntry.getOutputPath() );\n      aConf.setOutputKeyClass( jobEntry.getOutputKeyClass() );\n      aConf.setOutputValueClass( jobEntry.getOutputValueClass() );\n      aConf.setOutputFormatClass( jobEntry.getOutputFormatClass() );\n      aConf.setNumMapTasks( jobEntry.getNumMapTasks() );\n      aConf.setNumReduceTasks( jobEntry.getNumReduceTasks() );\n    }\n  }\n\n  public void setJobMeta( JobMeta jobMeta ) {\n    this.jobMeta = jobMeta;\n  }\n\n  public void cancel() {\n    XulDialog xulDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getElementById( \"job-entry-dialog\" );\n\n    Shell shell = (Shell) xulDialog.getRootObject();\n    if ( !shell.isDisposed() ) {\n      WindowProperty winprop = new WindowProperty( shell );\n      PropsUI.getInstance().setScreen( winprop );\n      ( (Composite) xulDialog.getManagedObject() ).dispose();\n      shell.dispose();\n    }\n  }\n\n  public void browseJar() {\n    XulDialog xulDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getElementById( \"job-entry-dialog\" );\n    Shell shell = (Shell) xulDialog.getRootObject();\n    FileDialog dialog = new FileDialog( shell, SWT.OPEN );\n    dialog.setFilterExtensions( new String[] { \"*.jar;*.zip\" } );\n    dialog.setFilterNames( new String[] { \"Java Archives (jar)\" } );\n    String prevName = jobEntry.environmentSubstitute( jarUrl );\n    String parentFolder = null;\n    Spoon spoon = Spoon.getInstance();\n    try {\n      parentFolder =\n        KettleVFS.getFilename( KettleVFS.getInstance( spoon.getExecutionBowl() )\n                                 .getFileObject( jobEntry.environmentSubstitute( jobEntry.getFilename() ) )\n          .getParent() );\n    } catch ( Exception e ) {\n      // not that important\n    }\n    if ( !Const.isEmpty( prevName ) ) {\n      try {\n        if ( KettleVFS.getInstance( 
spoon.getExecutionBowl() ).fileExists( prevName ) ) {\n          dialog.setFilterPath( KettleVFS.getFilename( KettleVFS.getInstance( spoon.getExecutionBowl() )\n                                                         .getFileObject( prevName ).getParent() ) );\n        } else {\n\n          if ( !prevName.endsWith( \".jar\" ) && !prevName.endsWith( \".zip\" ) ) {\n            prevName = \"${\" + Const.INTERNAL_VARIABLE_JOB_FILENAME_DIRECTORY + \"}/\" + Const.trim( jarUrl ) + \".jar\";\n          }\n          if ( KettleVFS.getInstance( spoon.getExecutionBowl() ).fileExists( prevName ) ) {\n            setJarUrl( prevName );\n            return;\n          }\n        }\n      } catch ( Exception e ) {\n        dialog.setFilterPath( parentFolder );\n      }\n    } else if ( !Const.isEmpty( parentFolder ) ) {\n      dialog.setFilterPath( parentFolder );\n    }\n\n    String fname = dialog.open();\n    if ( fname != null ) {\n      File file = new File( fname );\n      String name = file.getName();\n      String parentFolderSelection = file.getParentFile().toString();\n\n      if ( !Const.isEmpty( parentFolder ) && parentFolder.equals( parentFolderSelection ) ) {\n        setJarUrl( \"${\" + Const.INTERNAL_VARIABLE_JOB_FILENAME_DIRECTORY + \"}/\" + name );\n      } else {\n        setJarUrl( fname );\n      }\n\n      populateDriverMenuList();\n    }\n  }\n\n  public void newUserDefinedItem() {\n    userDefined.add( new UserDefinedItem() );\n  }\n\n  public SimpleConfiguration getSimpleConfiguration() {\n    return sConf;\n  }\n\n  public AdvancedConfiguration getAdvancedConfiguration() {\n    return aConf;\n  }\n\n  public AbstractModelList<UserDefinedItem> getUserDefined() {\n    return userDefined;\n  }\n\n  @Override\n  public String getName() {\n    return \"jobEntryController\"; //$NON-NLS-1$\n  }\n\n  public String getJobEntryName() {\n    return jobEntryName;\n  }\n\n  public void setJobEntryName( String jobEntryName ) {\n    String previousVal = 
this.jobEntryName;\n    String newVal = jobEntryName;\n\n    this.jobEntryName = jobEntryName;\n    firePropertyChange( JobEntryHadoopJobExecutorController.JOB_ENTRY_NAME, previousVal, newVal );\n  }\n\n  public String getHadoopJobName() {\n    return hadoopJobName;\n  }\n\n  public void setHadoopJobName( String hadoopJobName ) {\n    String previousVal = this.hadoopJobName;\n    String newVal = hadoopJobName;\n\n    this.hadoopJobName = hadoopJobName;\n    firePropertyChange( JobEntryHadoopJobExecutorController.HADOOP_JOB_NAME, previousVal, newVal );\n  }\n\n  public String getJarUrl() {\n    return jarUrl;\n  }\n\n  public void setJarUrl( String jarUrl ) {\n    String previousVal = this.jarUrl;\n    String newVal = jarUrl;\n\n    this.jarUrl = jarUrl;\n    firePropertyChange( JobEntryHadoopJobExecutorController.JAR_URL, previousVal, newVal );\n  }\n\n  public String getDriverClass() {\n    return driverClass;\n  }\n\n  public void setDriverClass( String driverClass ) {\n    String previousVal = this.driverClass;\n    String newVal = driverClass;\n\n    this.driverClass = driverClass;\n    firePropertyChange( JobEntryHadoopJobExecutorController.DRIVER_CLASS, previousVal, newVal );\n  }\n\n  public List<String> getDriverClasses() {\n    return driverClasses;\n  }\n\n  public void setDriverClasses( List<String> driverClasses ) {\n    List<String> previousVal = this.driverClasses;\n    List<String> newVal = driverClasses;\n\n    this.driverClasses = driverClasses;\n    firePropertyChange( JobEntryHadoopJobExecutorController.DRIVER_CLASSES, previousVal, newVal );\n  }\n\n  public boolean isSimple() {\n    return isSimple;\n  }\n\n  public void setSimple( boolean isSimple ) {\n    ( (XulVbox) getXulDomContainer().getDocumentRoot().getElementById( \"advanced-configuration\" ) )\n      .setVisible( !isSimple ); //$NON-NLS-1$\n    ( (XulVbox) getXulDomContainer().getDocumentRoot().getElementById( \"simple-configuration\" ) )\n      .setVisible( isSimple ); 
//$NON-NLS-1$\n\n    boolean previousVal = this.isSimple;\n    boolean newVal = isSimple;\n\n    this.isSimple = isSimple;\n    firePropertyChange( JobEntryHadoopJobExecutorController.IS_SIMPLE, previousVal, newVal );\n  }\n\n  public void invertSimpleBlocking() {\n    sConf.setSimpleBlocking( !sConf.isSimpleBlocking() );\n  }\n\n  public void invertBlocking() {\n    aConf.setBlocking( !aConf.isBlocking() );\n  }\n\n  public JobEntryHadoopJobExecutor getJobEntry() {\n    return jobEntry;\n  }\n\n  public void setJobEntry( JobEntryHadoopJobExecutor jobEntry ) {\n    this.jobEntry = jobEntry;\n  }\n\n  public List<NamedCluster> getNamedClusters() throws MetaStoreException {\n    return namedClusterService.list( jobMeta.getMetaStore() );\n  }\n\n  public void openErrorDialog( String title, String message ) {\n    XulDialog errorDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getElementById( \"hadoop-error-dialog\" );\n    errorDialog.setTitle( title );\n\n    XulTextbox errorMessage =\n      (XulTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"hadoop-error-message\" );\n    errorMessage.setValue( message );\n\n    errorDialog.show();\n  }\n\n  public void closeErrorDialog() {\n    XulDialog errorDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getElementById( \"hadoop-error-dialog\" );\n    errorDialog.hide();\n  }\n\n  public void editNamedCluster() {\n    try {\n      XulDialog xulDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getElementById( \"job-entry-dialog\" );\n      Shell shell = (Shell) xulDialog.getRootObject();\n      NamedCluster namedCluster;\n      if ( aConf.isSelectedNamedCluster() ) {\n        namedCluster = aConf.selectedNamedCluster;\n      } else {\n        namedCluster = namedClusterService.getClusterTemplate();\n      }\n      String clusterName = ncDelegate.editNamedCluster( null, namedCluster, shell );\n      if ( clusterName != null ) {\n        // a non-null name means the cluster was saved; refresh the list and re-select it
\n        firePropertyChange( \"namedClusters\", namedCluster, getNamedClusters() );\n        selectNamedCluster( clusterName );\n      }\n    } catch ( Throwable t ) {\n      t.printStackTrace();\n    }\n  }\n\n  public void newNamedCluster() {\n    try {\n      XulDialog xulDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getElementById( \"job-entry-dialog\" );\n      Shell shell = (Shell) xulDialog.getRootObject();\n      String newClusterName = ncDelegate.newNamedCluster( jobMeta, null, shell );\n      if ( newClusterName != null ) {\n        // a non-null name means a new cluster was created; refresh the list and select it\n        firePropertyChange( \"namedClusters\", null, getNamedClusters() );\n        selectNamedCluster( newClusterName );\n      }\n    } catch ( Throwable t ) {\n      t.printStackTrace();\n    }\n  }\n\n  private void selectNamedCluster( String clusterName ) throws MetaStoreException {\n    @SuppressWarnings( \"unchecked\" )\n    XulMenuList<NamedCluster> namedClusterMenu = (XulMenuList<NamedCluster>) getXulDomContainer().getDocumentRoot()\n      .getElementById( \"named-clusters\" );\n    for ( NamedCluster nc : getNamedClusters() ) {\n      if ( clusterName != null && clusterName.equals( nc.getName() ) ) {\n        namedClusterMenu.setSelectedItem( nc );\n        aConf.setSelectedNamedCluster( nc );\n      }\n    }\n  }\n\n  public void help() {\n    XulDialog xulDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getRootElement().getFirstChild();\n    Shell shell = (Shell) xulDialog.getRootObject();\n    PluginInterface plugin =\n      PluginRegistry.getInstance().findPluginWithId( JobEntryPluginType.class, jobEntry.getPluginId() );\n    HelpUtils.openHelpDialog( shell, plugin );\n  }\n\n  private void populateDriverMenuList() {\n    if ( Const.isEmpty( jarUrl ) ) {\n      return;\n    }\n    MapReduceService mapReduceService = null;\n    try {\n      mapReduceService = namedClusterServiceLocator.getService( 
aConf.selectedNamedCluster, MapReduceService.class );\n    } catch ( ClusterInitializationException e ) {\n      jobEntry.logError( \"Unable to locate map reduce service for cluster.\" );\n    }\n\n    MapReduceJarInfo mapReduceJarInfo = null;\n    try {\n      mapReduceJarInfo = mapReduceService != null ? mapReduceService.getJarInfo( JobEntryHadoopJobExecutor.resolveJarUrl( jarUrl, getVariableSpace() ) ) : null;\n    } catch ( Exception e ) {\n      jobEntry.logError( \"Unable to locate map reduce jar.\" );\n    }\n\n    List<String> driverClassesInJar = ( mapReduceJarInfo != null ? new ArrayList<>( mapReduceJarInfo.getClassesWithMain() ) : Collections.emptyList() );\n\n    if ( Const.isEmpty( driverClass ) ) {\n      setDriverClasses( driverClassesInJar );\n      String mainClass = mapReduceJarInfo != null ? mapReduceJarInfo.getMainClass() : null;\n      if ( mainClass != null ) {\n        setDriverClass( mainClass );\n      } else if ( !driverClassesInJar.isEmpty( ) ) {\n        setDriverClass( driverClassesInJar.get( 0 ) );\n      } else {\n        setDriverClass( \"\" );\n      }\n    } else {\n      String saveDriverClass = driverClass;\n      setDriverClasses( driverClassesInJar );\n      setDriverClass( saveDriverClass );\n    }\n  }\n\n  public class SimpleConfiguration extends XulEventSourceAdapter {\n    public static final String CMD_LINE_ARGS = \"commandLineArgs\"; //$NON-NLS-1$\n    public static final String BLOCKING = \"simpleBlocking\"; //$NON-NLS-1$\n    public static final String LOGGING_INTERVAL = \"simpleLoggingInterval\"; //$NON-NLS-1$\n\n    private String cmdLineArgs;\n    private boolean simpleBlocking;\n    private String simpleLoggingInterval = \"60\";\n\n    public String getCommandLineArgs() {\n      return cmdLineArgs;\n    }\n\n    public void setCommandLineArgs( String cmdLineArgs ) {\n      String previousVal = this.cmdLineArgs;\n      String newVal = cmdLineArgs;\n\n      this.cmdLineArgs = cmdLineArgs;\n\n      firePropertyChange( 
SimpleConfiguration.CMD_LINE_ARGS, previousVal, newVal );\n    }\n\n    public boolean isSimpleBlocking() {\n      return simpleBlocking;\n    }\n\n    public void setSimpleBlocking( boolean simpleBlocking ) {\n      boolean old = this.simpleBlocking;\n      this.simpleBlocking = simpleBlocking;\n      firePropertyChange( SimpleConfiguration.BLOCKING, old, this.simpleBlocking );\n    }\n\n    public String getSimpleLoggingInterval() {\n      return simpleLoggingInterval;\n    }\n\n    public void setSimpleLoggingInterval( String simpleLoggingInterval ) {\n      String old = this.simpleLoggingInterval;\n      this.simpleLoggingInterval = simpleLoggingInterval;\n      firePropertyChange( SimpleConfiguration.LOGGING_INTERVAL, old, this.simpleLoggingInterval );\n    }\n  }\n\n  public class AdvancedConfiguration extends XulEventSourceAdapter {\n    public static final String OUTPUT_KEY_CLASS = \"outputKeyClass\"; //$NON-NLS-1$\n    public static final String OUTPUT_VALUE_CLASS = \"outputValueClass\"; //$NON-NLS-1$\n    public static final String MAPPER_CLASS = \"mapperClass\"; //$NON-NLS-1$\n    public static final String COMBINER_CLASS = \"combinerClass\"; //$NON-NLS-1$\n    public static final String REDUCER_CLASS = \"reducerClass\"; //$NON-NLS-1$\n    public static final String INPUT_FORMAT_CLASS = \"inputFormatClass\"; //$NON-NLS-1$\n    public static final String OUTPUT_FORMAT_CLASS = \"outputFormatClass\"; //$NON-NLS-1$\n    public static final String INPUT_PATH = \"inputPath\"; //$NON-NLS-1$\n    public static final String OUTPUT_PATH = \"outputPath\"; //$NON-NLS-1$\n    public static final String BLOCKING = \"blocking\"; //$NON-NLS-1$\n    public static final String LOGGING_INTERVAL = \"loggingInterval\"; //$NON-NLS-1$\n    public static final String HDFS_HOSTNAME = \"hdfsHostname\"; //$NON-NLS-1$\n    public static final String HDFS_PORT = \"hdfsPort\"; //$NON-NLS-1$\n    public static final String JOB_TRACKER_HOSTNAME = \"jobTrackerHostname\"; //$NON-NLS-1$\n 
   public static final String JOB_TRACKER_PORT = \"jobTrackerPort\"; //$NON-NLS-1$\n    public static final String NUM_MAP_TASKS = \"numMapTasks\"; //$NON-NLS-1$\n    public static final String NUM_REDUCE_TASKS = \"numReduceTasks\"; //$NON-NLS-1$\n\n    private String outputKeyClass;\n    private String outputValueClass;\n    private String mapperClass;\n    private String combinerClass;\n    private String reducerClass;\n    private String inputFormatClass;\n    private String outputFormatClass;\n\n    private NamedCluster selectedNamedCluster;\n\n    private String inputPath;\n    private String outputPath;\n\n    private String numMapTasks = \"1\";\n    private String numReduceTasks = \"1\";\n\n    private boolean blocking;\n    private String loggingInterval = \"60\"; // 60 seconds\n\n    public String getOutputKeyClass() {\n      return outputKeyClass;\n    }\n\n    public void setOutputKeyClass( String outputKeyClass ) {\n      String previousVal = this.outputKeyClass;\n      String newVal = outputKeyClass;\n\n      this.outputKeyClass = outputKeyClass;\n      firePropertyChange( AdvancedConfiguration.OUTPUT_KEY_CLASS, previousVal, newVal );\n    }\n\n    public String getOutputValueClass() {\n      return outputValueClass;\n    }\n\n    public void setOutputValueClass( String outputValueClass ) {\n      String previousVal = this.outputValueClass;\n      String newVal = outputValueClass;\n\n      this.outputValueClass = outputValueClass;\n      firePropertyChange( AdvancedConfiguration.OUTPUT_VALUE_CLASS, previousVal, newVal );\n    }\n\n    public String getMapperClass() {\n      return mapperClass;\n    }\n\n    public void setMapperClass( String mapperClass ) {\n      String previousVal = this.mapperClass;\n      String newVal = mapperClass;\n\n      this.mapperClass = mapperClass;\n      firePropertyChange( AdvancedConfiguration.MAPPER_CLASS, previousVal, newVal );\n    }\n\n    public String getCombinerClass() {\n      return combinerClass;\n    }\n\n    
public void setCombinerClass( String combinerClass ) {\n      String previousVal = this.combinerClass;\n      String newVal = combinerClass;\n\n      this.combinerClass = combinerClass;\n      firePropertyChange( AdvancedConfiguration.COMBINER_CLASS, previousVal, newVal );\n    }\n\n    public String getReducerClass() {\n      return reducerClass;\n    }\n\n    public void setReducerClass( String reducerClass ) {\n      String previousVal = this.reducerClass;\n      String newVal = reducerClass;\n\n      this.reducerClass = reducerClass;\n      firePropertyChange( AdvancedConfiguration.REDUCER_CLASS, previousVal, newVal );\n    }\n\n    public String getInputFormatClass() {\n      return inputFormatClass;\n    }\n\n    public void setInputFormatClass( String inputFormatClass ) {\n      String previousVal = this.inputFormatClass;\n      String newVal = inputFormatClass;\n\n      this.inputFormatClass = inputFormatClass;\n      firePropertyChange( AdvancedConfiguration.INPUT_FORMAT_CLASS, previousVal, newVal );\n    }\n\n    public String getOutputFormatClass() {\n      return outputFormatClass;\n    }\n\n    public void setOutputFormatClass( String outputFormatClass ) {\n      String previousVal = this.outputFormatClass;\n      String newVal = outputFormatClass;\n\n      this.outputFormatClass = outputFormatClass;\n      firePropertyChange( AdvancedConfiguration.OUTPUT_FORMAT_CLASS, previousVal, newVal );\n    }\n\n    public String getInputPath() {\n      return inputPath;\n    }\n\n    public void setInputPath( String inputPath ) {\n      String previousVal = this.inputPath;\n      String newVal = inputPath;\n\n      this.inputPath = inputPath;\n      firePropertyChange( AdvancedConfiguration.INPUT_PATH, previousVal, newVal );\n    }\n\n    public String getOutputPath() {\n      return outputPath;\n    }\n\n    public void setOutputPath( String outputPath ) {\n      String previousVal = this.outputPath;\n      String newVal = outputPath;\n\n      this.outputPath = 
outputPath;\n      firePropertyChange( AdvancedConfiguration.OUTPUT_PATH, previousVal, newVal );\n    }\n\n    public boolean isBlocking() {\n      return blocking;\n    }\n\n    public void setBlocking( boolean blocking ) {\n      boolean previousVal = this.blocking;\n      boolean newVal = blocking;\n\n      this.blocking = blocking;\n      firePropertyChange( AdvancedConfiguration.BLOCKING, previousVal, newVal );\n    }\n\n    public String getLoggingInterval() {\n      return loggingInterval;\n    }\n\n    public void setLoggingInterval( String loggingInterval ) {\n      String previousVal = this.loggingInterval;\n      String newVal = loggingInterval;\n\n      this.loggingInterval = loggingInterval;\n      firePropertyChange( AdvancedConfiguration.LOGGING_INTERVAL, previousVal, newVal );\n    }\n\n    public String getNumMapTasks() {\n      return numMapTasks;\n    }\n\n    public void setNumMapTasks( String numMapTasks ) {\n      String previousVal = this.numMapTasks;\n      String newVal = numMapTasks;\n\n      this.numMapTasks = numMapTasks;\n      firePropertyChange( AdvancedConfiguration.NUM_MAP_TASKS, previousVal, newVal );\n    }\n\n    public String getNumReduceTasks() {\n      return numReduceTasks;\n    }\n\n    public void setNumReduceTasks( String numReduceTasks ) {\n      String previousVal = this.numReduceTasks;\n      String newVal = numReduceTasks;\n\n      this.numReduceTasks = numReduceTasks;\n      firePropertyChange( AdvancedConfiguration.NUM_REDUCE_TASKS, previousVal, newVal );\n    }\n\n    public boolean isSelectedNamedCluster() {\n      return this.selectedNamedCluster != null;\n    }\n\n    public void setSelectedNamedCluster( NamedCluster namedCluster ) {\n      this.selectedNamedCluster = namedCluster;\n    }\n\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/ui/entry/hadoop/JobEntryHadoopJobExecutorDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.ui.entry.hadoop;\n\nimport org.dom4j.DocumentException;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.hadoop.JobEntryHadoopJobExecutor;\nimport org.pentaho.big.data.plugins.common.ui.HadoopClusterDelegateImpl;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.job.entry.JobEntryDialogInterface;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.ui.job.entry.JobEntryDialog;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.XulRunner;\nimport org.pentaho.ui.xul.binding.Binding.Type;\nimport org.pentaho.ui.xul.binding.BindingConvertor;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.ui.xul.binding.DefaultBindingFactory;\nimport org.pentaho.ui.xul.components.XulMenuList;\nimport org.pentaho.ui.xul.components.XulRadio;\nimport org.pentaho.ui.xul.components.XulTextbox;\nimport org.pentaho.ui.xul.containers.XulDialog;\nimport org.pentaho.ui.xul.containers.XulTree;\nimport org.pentaho.ui.xul.containers.XulVbox;\nimport org.pentaho.ui.xul.swt.SwtXulLoader;\nimport 
org.pentaho.ui.xul.swt.SwtXulRunner;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport java.util.ArrayList;\nimport java.util.Enumeration;\nimport java.util.List;\nimport java.util.ResourceBundle;\n\n@PluginDialog( id = \"HadoopJobExecutorPlugin\", image = \"HDE.svg\", pluginType = PluginDialog.PluginType.JOBENTRY,\n  documentationUrl = \"https://pentaho-community.atlassian.net/wiki/display/EAI/Hadoop+Job+Executor\" )\npublic class JobEntryHadoopJobExecutorDialog extends JobEntryDialog implements JobEntryDialogInterface {\n  private static final Class<?> CLZ = JobEntryHadoopJobExecutor.class;\n  private static final Logger logger = LogManager.getLogger( JobEntryHadoopJobExecutorDialog.class );\n  private final NamedClusterService namedClusterService;\n  private final JobEntryHadoopJobExecutorController controller;\n  private JobEntryHadoopJobExecutor jobEntry;\n  private XulDomContainer container;\n\n  private BindingFactory bf;\n\n  private ResourceBundle bundle = new ResourceBundle() {\n    @Override\n    public Enumeration<String> getKeys() {\n      return null;\n    }\n\n    @Override\n    protected Object handleGetObject( String key ) {\n      return BaseMessages.getString( CLZ, key );\n    }\n  };\n\n  public JobEntryHadoopJobExecutorDialog( Shell parent, JobEntryInterface jobEntry, Repository rep, JobMeta jobMeta )\n    throws XulException, DocumentException, Throwable {\n    super( parent, jobEntry, rep, jobMeta );\n    this.jobEntry = (JobEntryHadoopJobExecutor) jobEntry;\n    this.namedClusterService = this.jobEntry.getNamedClusterService();\n    controller = new JobEntryHadoopJobExecutorController(\n      new HadoopClusterDelegateImpl( Spoon.getInstance(), namedClusterService,\n        this.jobEntry.getRuntimeTestActionService(), this.jobEntry.getRuntimeTester() ), namedClusterService,\n      this.jobEntry.getNamedClusterServiceLocator() );\n\n    SwtXulLoader swtXulLoader = new SwtXulLoader();\n    
swtXulLoader.registerClassLoader( getClass().getClassLoader() );\n    swtXulLoader.register( \"VARIABLETEXTBOX\", \"org.pentaho.di.ui.core.database.dialog.tags.ExtTextbox\" );\n    swtXulLoader.register( \"VARIABLEMENULIST\", \"org.pentaho.di.ui.core.database.dialog.tags.ExtMenuList\" );\n    swtXulLoader.setOuterContext( shell );\n\n    container =\n      swtXulLoader\n        .loadXul( \"org/pentaho/big/data/kettle/plugins/mapreduce/ui/entry/JobEntryHadoopJobExecutorDialog.xul\",\n        bundle ); //$NON-NLS-1$\n\n    final XulRunner runner = new SwtXulRunner();\n    runner.addContainer( container );\n\n    container.addEventHandler( controller );\n\n    bf = new DefaultBindingFactory();\n    bf.setDocument( container.getDocumentRoot() );\n    bf.setBindingType( Type.BI_DIRECTIONAL );\n\n    bf.createBinding( \"jobentry-name\", \"value\", controller,\n      JobEntryHadoopJobExecutorController.JOB_ENTRY_NAME ); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$\n\n    bf.createBinding( \"jobentry-hadoopjob-name\", \"value\", controller,\n      JobEntryHadoopJobExecutorController.HADOOP_JOB_NAME ); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$\n    bf.createBinding( \"jar-url\", \"value\", controller,\n      JobEntryHadoopJobExecutorController.JAR_URL ); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$\n    bf.createBinding( \"driver-class\", \"value\", controller,\n      JobEntryHadoopJobExecutorController.DRIVER_CLASS ); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$\n    bf.createBinding( \"driver-class\", \"selectedItem\", controller,\n      JobEntryHadoopJobExecutorController.DRIVER_CLASS ); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$\n    bf.createBinding( \"driver-class\", \"elements\", controller,\n      JobEntryHadoopJobExecutorController.DRIVER_CLASSES ); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$\n    bf.createBinding( \"command-line-arguments\", \"value\", controller.getSimpleConfiguration(),\n      JobEntryHadoopJobExecutorController.SimpleConfiguration.CMD_LINE_ARGS ); 
//$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$\n\n    bf.createBinding( \"classes-output-key-class\", \"value\", controller.getAdvancedConfiguration(),\n      JobEntryHadoopJobExecutorController.AdvancedConfiguration.OUTPUT_KEY_CLASS ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"classes-output-value-class\", \"value\", controller.getAdvancedConfiguration(),\n      JobEntryHadoopJobExecutorController.AdvancedConfiguration.OUTPUT_VALUE_CLASS ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"classes-mapper-class\", \"value\", controller.getAdvancedConfiguration(),\n      JobEntryHadoopJobExecutorController.AdvancedConfiguration.MAPPER_CLASS ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"classes-combiner-class\", \"value\", controller.getAdvancedConfiguration(),\n      JobEntryHadoopJobExecutorController.AdvancedConfiguration.COMBINER_CLASS ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"classes-reducer-class\", \"value\", controller.getAdvancedConfiguration(),\n      JobEntryHadoopJobExecutorController.AdvancedConfiguration.REDUCER_CLASS ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"classes-input-format\", \"value\", controller.getAdvancedConfiguration(),\n      JobEntryHadoopJobExecutorController.AdvancedConfiguration.INPUT_FORMAT_CLASS ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"classes-output-format\", \"value\", controller.getAdvancedConfiguration(),\n      JobEntryHadoopJobExecutorController.AdvancedConfiguration.OUTPUT_FORMAT_CLASS ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    // bf.createBinding(\"num-map-tasks\", \"value\", controller.getAdvancedConfiguration(),\n    // AdvancedConfiguration.NUM_MAP_TASKS, bindingConverter); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"num-map-tasks\", \"value\", controller.getAdvancedConfiguration(),\n      JobEntryHadoopJobExecutorController.AdvancedConfiguration.NUM_MAP_TASKS ); //$NON-NLS-1$ //$NON-NLS-2$\n    // bf.createBinding(\"num-reduce-tasks\", \"value\", 
controller.getAdvancedConfiguration(),\n    // AdvancedConfiguration.NUM_REDUCE_TASKS, bindingConverter); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"num-reduce-tasks\", \"value\", controller.getAdvancedConfiguration(),\n      JobEntryHadoopJobExecutorController.AdvancedConfiguration.NUM_REDUCE_TASKS ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    bf.createBinding( \"simple-blocking\", \"selected\", controller.getSimpleConfiguration(),\n      JobEntryHadoopJobExecutorController.SimpleConfiguration.BLOCKING ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"blocking\", \"selected\", controller.getAdvancedConfiguration(),\n      JobEntryHadoopJobExecutorController.AdvancedConfiguration.BLOCKING ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"simple-logging-interval\", \"value\", controller.getSimpleConfiguration(),\n      JobEntryHadoopJobExecutorController.SimpleConfiguration.LOGGING_INTERVAL ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"logging-interval\", \"value\", controller.getAdvancedConfiguration(),\n      JobEntryHadoopJobExecutorController.AdvancedConfiguration.LOGGING_INTERVAL ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"input-path\", \"value\", controller.getAdvancedConfiguration(),\n      JobEntryHadoopJobExecutorController.AdvancedConfiguration.INPUT_PATH ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"output-path\", \"value\", controller.getAdvancedConfiguration(),\n      JobEntryHadoopJobExecutorController.AdvancedConfiguration.OUTPUT_PATH ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    ( (XulRadio) container.getDocumentRoot().getElementById( \"simpleRadioButton\" ) ).setSelected( this.jobEntry\n      .isSimple() ); //$NON-NLS-1$\n    ( (XulRadio) container.getDocumentRoot().getElementById( \"advancedRadioButton\" ) ).setSelected( !this.jobEntry\n      .isSimple() ); //$NON-NLS-1$\n\n    ( (XulVbox) container.getDocumentRoot().getElementById( \"advanced-configuration\" ) ).setVisible( !this.jobEntry\n      .isSimple() ); 
//$NON-NLS-1$\n\n    XulTextbox simpleLoggingInterval =\n      (XulTextbox) container.getDocumentRoot().getElementById( \"simple-logging-interval\" );\n    simpleLoggingInterval.setValue( \"\" + controller.getSimpleConfiguration().getSimpleLoggingInterval() );\n\n    XulTextbox loggingInterval = (XulTextbox) container.getDocumentRoot().getElementById( \"logging-interval\" );\n    loggingInterval.setValue( controller.getAdvancedConfiguration().getLoggingInterval() );\n\n    XulTextbox mapTasks = (XulTextbox) container.getDocumentRoot().getElementById( \"num-map-tasks\" );\n    mapTasks.setValue( controller.getAdvancedConfiguration().getNumMapTasks() );\n\n    XulTextbox reduceTasks = (XulTextbox) container.getDocumentRoot().getElementById( \"num-reduce-tasks\" );\n    reduceTasks.setValue( controller.getAdvancedConfiguration().getNumReduceTasks() );\n\n    XulTree variablesTree = (XulTree) container.getDocumentRoot().getElementById( \"fields-table\" ); //$NON-NLS-1$\n    bf.setBindingType( Type.ONE_WAY );\n    bf.createBinding( controller.getUserDefined(), \"children\", variablesTree, \"elements\" ); //$NON-NLS-1$//$NON-NLS-2$\n    bf.setBindingType( Type.BI_DIRECTIONAL );\n\n    controller.setJobMeta( jobMeta );\n    controller.setJobEntry( (JobEntryHadoopJobExecutor) jobEntry );\n    controller.init();\n\n    bf.setBindingType( Type.ONE_WAY );\n    bf.createBinding( controller, \"namedClusters\", \"named-clusters\", \"elements\" ).fireSourceChanged();\n    bf.setBindingType( Type.BI_DIRECTIONAL );\n    bf.createBinding( \"named-clusters\", \"selectedIndex\", controller.getAdvancedConfiguration(), \"selectedNamedCluster\",\n      new BindingConvertor<Integer, NamedCluster>() {\n        public NamedCluster sourceToTarget( final Integer index ) {\n          List<NamedCluster> clusters = new ArrayList<>();\n          try {\n            clusters = controller.getNamedClusters();\n          } catch ( MetaStoreException e ) {\n            logger.error( e.getMessage(), e 
);\n          }\n          if ( index == -1 || clusters.isEmpty() ) {\n            return null;\n          }\n          return clusters.get( index );\n        }\n\n        public Integer targetToSource( final NamedCluster value ) {\n          return null;\n        }\n      } ).fireSourceChanged();\n\n    selectNamedCluster();\n  }\n\n  private void selectNamedCluster() {\n    @SuppressWarnings( \"unchecked\" )\n    XulMenuList<NamedCluster> namedClusterMenu =\n        (XulMenuList<NamedCluster>) container.getDocumentRoot().getElementById( \"named-clusters\" ); //$NON-NLS-1$\n\n    NamedCluster namedCluster = jobEntry.getNamedCluster();\n    if ( namedCluster != null && isKnownNamedCluster( namedCluster, controller ) ) {\n      namedClusterMenu.setSelectedItem( namedCluster );\n      controller.getAdvancedConfiguration().setSelectedNamedCluster( namedCluster );\n    }\n  }\n\n  public JobEntryInterface open() {\n    XulDialog dialog = (XulDialog) container.getDocumentRoot().getElementById( \"job-entry-dialog\" ); //$NON-NLS-1$\n    dialog.show();\n\n    return jobEntry;\n  }\n\n  private boolean isKnownNamedCluster( NamedCluster jobNameCluster, JobEntryHadoopJobExecutorController controller ) {\n    boolean result = false;\n    if ( jobNameCluster != null ) {\n      String jncName = jobNameCluster.getName();\n      List<NamedCluster> nClusters = null;\n      try {\n        nClusters = controller.getNamedClusters();\n      } catch ( MetaStoreException e ) {\n        logger.error( e.getMessage(), e );\n      }\n      if ( jncName != null && nClusters != null ) {\n        for ( NamedCluster nc : nClusters ) {\n          if ( jncName.equals( nc.getName() ) ) {\n            result = true;\n            break;\n          }\n        }\n      }\n    }\n    return result;\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/ui/entry/pmr/JobEntryHadoopTransJobExecutorController.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.ui.entry.pmr;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.FileDialog;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.pmr.JobEntryHadoopTransJobExecutor;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.UserDefinedItem;\nimport org.pentaho.big.data.plugins.common.ui.HadoopClusterDelegateImpl;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.plugins.JobEntryPluginType;\nimport org.pentaho.di.core.plugins.PluginInterface;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.database.dialog.tags.ExtTextbox;\nimport org.pentaho.di.ui.core.gui.WindowProperty;\nimport org.pentaho.di.ui.repository.dialog.SelectObjectDialog;\nimport 
org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.util.HelpUtils;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.ui.xul.components.XulTextbox;\nimport org.pentaho.ui.xul.containers.XulDialog;\nimport org.pentaho.ui.xul.impl.AbstractXulEventHandler;\nimport org.pentaho.ui.xul.jface.tags.JfaceMenuList;\nimport org.pentaho.ui.xul.util.AbstractModelList;\n\nimport com.google.common.annotations.VisibleForTesting;\n\nimport java.io.File;\nimport java.util.List;\n\n\npublic class JobEntryHadoopTransJobExecutorController extends AbstractXulEventHandler {\n\n  private static final Class<?> PKG = JobEntryHadoopTransJobExecutor.class;\n\n  public static final String JOB_ENTRY_NAME = \"jobEntryName\"; //$NON-NLS-1$\n  public static final String HADOOP_JOB_NAME = \"hadoopJobName\"; //$NON-NLS-1$\n  public static final String MAP_TRANS = \"mapTrans\"; //$NON-NLS-1$\n  public static final String COMBINER_TRANS = \"combinerTrans\"; //$NON-NLS-1$\n  public static final String REDUCE_TRANS = \"reduceTrans\"; //$NON-NLS-1$\n\n  public static final String MAP_TRANS_INPUT_STEP_NAME = \"mapTransInputStepName\"; //$NON-NLS-1$\n  public static final String MAP_TRANS_OUTPUT_STEP_NAME = \"mapTransOutputStepName\"; //$NON-NLS-1$\n  public static final String COMBINER_TRANS_INPUT_STEP_NAME = \"combinerTransInputStepName\"; //$NON-NLS-1$\n  public static final String COMBINER_TRANS_OUTPUT_STEP_NAME = \"combinerTransOutputStepName\"; //$NON-NLS-1$\n  public static final String COMBINING_SINGLE_THREADED = \"combiningSingleThreaded\"; //$NON-NLS-1$\n  public static final String REDUCE_TRANS_INPUT_STEP_NAME = \"reduceTransInputStepName\"; //$NON-NLS-1$\n  public static final String REDUCE_TRANS_OUTPUT_STEP_NAME = \"reduceTransOutputStepName\"; //$NON-NLS-1$\n  public static final String REDUCING_SINGLE_THREADED = \"reducingSingleThreaded\"; //$NON-NLS-1$\n\n  public static final String SUPPRESS_OUTPUT_MAP_KEY = \"suppressOutputOfMapKey\";\n  public 
static final String SUPPRESS_OUTPUT_MAP_VALUE = \"suppressOutputOfMapValue\";\n  public static final String SUPPRESS_OUTPUT_KEY = \"suppressOutputOfKey\";\n  public static final String SUPPRESS_OUTPUT_VALUE = \"suppressOutputOfValue\";\n  public static final String MAP_OUTPUT_KEY_CLASS = \"mapOutputKeyClass\"; //$NON-NLS-1$\n  public static final String MAP_OUTPUT_VALUE_CLASS = \"mapOutputValueClass\"; //$NON-NLS-1$\n  public static final String OUTPUT_KEY_CLASS = \"outputKeyClass\"; //$NON-NLS-1$\n  public static final String OUTPUT_VALUE_CLASS = \"outputValueClass\"; //$NON-NLS-1$\n  public static final String INPUT_FORMAT_CLASS = \"inputFormatClass\"; //$NON-NLS-1$\n  public static final String OUTPUT_FORMAT_CLASS = \"outputFormatClass\"; //$NON-NLS-1$\n  public static final String INPUT_PATH = \"inputPath\"; //$NON-NLS-1$\n  public static final String OUTPUT_PATH = \"outputPath\"; //$NON-NLS-1$\n  public static final String CLEAN_OUTPUT_PATH = \"cleanOutputPath\"; //$NON-NLS-1$\n  public static final String BLOCKING = \"blocking\"; //$NON-NLS-1$\n  public static final String LOGGING_INTERVAL = \"loggingInterval\"; //$NON-NLS-1$\n\n  public static final String HDFS_HOSTNAME = \"hdfsHostname\"; //$NON-NLS-1$\n  public static final String HDFS_PORT = \"hdfsPort\"; //$NON-NLS-1$\n  public static final String JOB_TRACKER_HOSTNAME = \"jobTrackerHostname\"; //$NON-NLS-1$\n  public static final String JOB_TRACKER_PORT = \"jobTrackerPort\"; //$NON-NLS-1$\n\n  public static final String NUM_MAP_TASKS = \"numMapTasks\"; //$NON-NLS-1$\n  public static final String NUM_REDUCE_TASKS = \"numReduceTasks\"; //$NON-NLS-1$\n\n  public static final String USER_DEFINED = \"userDefined\"; //$NON-NLS-1$\n\n  public static final String LOCAL = \"local\";\n  public static final String REPOSITORY = \"repository\";\n\n\n  private final NamedClusterService namedClusterService;\n\n  private final HadoopClusterDelegateImpl ncDelegate;\n\n  private String jobEntryName;\n  private String 
hadoopJobName;\n\n  private boolean suppressOutputMapKey;\n  private boolean suppressOutputMapValue;\n  private boolean suppressOutputKey;\n  private boolean suppressOutputValue;\n\n  private String inputFormatClass;\n  private String outputFormatClass;\n\n  private String inputPath;\n  private String outputPath;\n\n  private boolean cleanOutputPath;\n\n  private String numMapTasks = \"1\";\n  private String numReduceTasks = \"1\";\n\n  private boolean blocking;\n  private String loggingInterval = \"60\";\n\n  private String mapTrans = \"\";\n\n  private String combinerTrans = \"\";\n  private boolean combiningSingleThreaded;\n\n  private String reduceTrans = \"\";\n  private boolean reducingSingleThreaded;\n\n  private String mapTransInputStepName = \"\";\n  private String mapTransOutputStepName = \"\";\n  private String combinerTransInputStepName = \"\";\n  private String combinerTransOutputStepName = \"\";\n  private String reduceTransInputStepName = \"\";\n  private String reduceTransOutputStepName = \"\";\n\n  private static String storageType;\n  private List<NamedCluster> namedClusters;\n\n  protected Shell shell;\n  private Repository rep;\n  private JobMeta jobMeta;\n  private NamedCluster selectedNamedCluster;\n\n  private JobEntryHadoopTransJobExecutor jobEntry;\n\n  private AbstractModelList<UserDefinedItem> userDefined = new AbstractModelList<UserDefinedItem>();\n\n  public JobEntryHadoopTransJobExecutorController( HadoopClusterDelegateImpl ncDelegate,\n      NamedClusterService namedClusterService ) throws Throwable {\n    this.ncDelegate = ncDelegate;\n    this.namedClusterService = namedClusterService;\n  }\n\n  protected VariableSpace getVariableSpace() {\n    if ( Spoon.getInstance().getActiveTransformation() != null ) {\n      return Spoon.getInstance().getActiveTransformation();\n    } else if ( Spoon.getInstance().getActiveJob() != null ) {\n      return Spoon.getInstance().getActiveJob();\n    } else {\n      return new Variables();\n    }\n  
}\n\n  public void accept() {\n    ExtTextbox tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-hadoopjob-name\" );\n    this.hadoopJobName = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-map-transformation\" );\n    this.mapTrans = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-map-input-stepname\" );\n    this.mapTransInputStepName = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-map-output-stepname\" );\n    this.mapTransOutputStepName = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-combiner-transformation\" );\n    this.combinerTrans = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-combiner-input-stepname\" );\n    this.combinerTransInputStepName = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-combiner-output-stepname\" );\n    this.combinerTransOutputStepName = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-reduce-transformation\" );\n    this.reduceTrans = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-reduce-input-stepname\" );\n    this.reduceTransInputStepName = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-reduce-output-stepname\" );\n    this.reduceTransOutputStepName = ( (Text) 
tempBox.getTextControl() ).getText();\n\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"input-path\" );\n    this.inputPath = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"output-path\" );\n    this.outputPath = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-input-format\" );\n    this.inputFormatClass = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-output-format\" );\n    this.outputFormatClass = ( (Text) tempBox.getTextControl() ).getText();\n\n    JfaceMenuList<?> ncBox = (JfaceMenuList<?>) getXulDomContainer().getDocumentRoot().getElementById( \"named-clusters\" );\n\n    try {\n      selectedNamedCluster = namedClusterService.read( ncBox.getSelectedItem(), jobMeta.getMetaStore() );\n    } catch ( MetaStoreException e ) {\n      openErrorDialog( BaseMessages.getString( PKG, \"Dialog.Error\" ), e.getMessage() );\n    }\n\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"num-map-tasks\" );\n    this.numMapTasks = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"num-reduce-tasks\" );\n    this.numReduceTasks = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"logging-interval\" );\n    this.loggingInterval = ( (Text) tempBox.getTextControl() ).getText();\n\n    String validationErrors = \"\";\n    if ( StringUtil.isEmpty( jobEntryName ) ) {\n      validationErrors += BaseMessages.getString( PKG, \"JobEntryHadoopTransJobExecutor.JobEntryName.Error\" ) + \"\\n\";\n    }\n    if ( selectedNamedCluster == null ) {\n      validationErrors += BaseMessages.getString( 
PKG, \"JobEntryHadoopTransJobExecutor.NamedClusterNotProvided.Error\" ) + \"\\n\";\n    }\n    if ( StringUtil.isEmpty( hadoopJobName ) ) {\n      validationErrors += BaseMessages.getString( PKG, \"JobEntryHadoopTransJobExecutor.HadoopJobName.Error\" ) + \"\\n\";\n    }\n    if ( !StringUtils.isEmpty( numReduceTasks ) ) {\n      String reduceS = getVariableSpace().environmentSubstitute( numReduceTasks );\n      try {\n        int numR = Integer.parseInt( reduceS );\n\n        if ( numR < 0 ) {\n          validationErrors +=\n              BaseMessages.getString( PKG, \"JobEntryHadoopTransJobExecutor.NumReduceTasks.Error\" ) + \"\\n\";\n        }\n      } catch ( NumberFormatException e ) {\n        // omit\n      }\n    }\n    if ( !StringUtils.isEmpty( numMapTasks ) ) {\n      String mapS = getVariableSpace().environmentSubstitute( numMapTasks );\n\n      try {\n        int numM = Integer.parseInt( mapS );\n\n        if ( numM < 0 ) {\n          validationErrors += BaseMessages.getString( PKG, \"JobEntryHadoopTransJobExecutor.NumMapTasks.Error\" ) + \"\\n\";\n        }\n      } catch ( NumberFormatException e ) {\n        // omit\n      }\n    }\n\n    if ( !StringUtil.isEmpty( validationErrors ) ) {\n      openErrorDialog( BaseMessages.getString( PKG, \"Dialog.Error\" ), validationErrors );\n      // show validation errors dialog\n      return;\n    }\n\n    // common/simple\n    jobEntry.setName( jobEntryName );\n    jobEntry.setHadoopJobName( hadoopJobName );\n\n    jobEntry.setMapTrans( mapTrans );\n    jobEntry.setMapInputStepName( mapTransInputStepName );\n    jobEntry.setMapOutputStepName( mapTransOutputStepName );\n\n    jobEntry.setCombinerTrans( combinerTrans );\n    jobEntry.setCombinerInputStepName( combinerTransInputStepName );\n    jobEntry.setCombinerOutputStepName( combinerTransOutputStepName );\n    jobEntry.setCombiningSingleThreaded( combiningSingleThreaded );\n\n\n    jobEntry.setReduceTrans( reduceTrans );\n    jobEntry.setReduceInputStepName( 
reduceTransInputStepName );\n    jobEntry.setReduceOutputStepName( reduceTransOutputStepName );\n    jobEntry.setReducingSingleThreaded( reducingSingleThreaded );\n\n    // advanced config\n    jobEntry.setBlocking( isBlocking() );\n    jobEntry.setLoggingInterval( loggingInterval );\n    jobEntry.setInputPath( getInputPath() );\n    jobEntry.setInputFormatClass( getInputFormatClass() );\n    jobEntry.setOutputPath( getOutputPath() );\n    jobEntry.setCleanOutputPath( isCleanOutputPath() );\n\n    jobEntry.setSuppressOutputOfMapKey( isSuppressOutputOfMapKey() );\n    jobEntry.setSuppressOutputOfMapValue( isSuppressOutputOfMapValue() );\n\n    jobEntry.setSuppressOutputOfKey( isSuppressOutputOfKey() );\n    jobEntry.setSuppressOutputOfValue( isSuppressOutputOfValue() );\n\n    jobEntry.setOutputFormatClass( getOutputFormatClass() );\n    jobEntry.setNamedCluster( selectedNamedCluster );\n    jobEntry.setNumMapTasks( getNumMapTasks() );\n    jobEntry.setNumReduceTasks( getNumReduceTasks() );\n    jobEntry.setUserDefined( userDefined );\n\n    jobEntry.setChanged();\n\n    cancel();\n  }\n\n  @SuppressWarnings( { \"rawtypes\" } )\n  public void init() throws Throwable {\n    if ( jobEntry != null ) {\n      // common/simple\n      setName( jobEntry.getName() );\n      setJobEntryName( jobEntry.getName() );\n      setHadoopJobName( jobEntry.getHadoopJobName() );\n\n      // set variables\n      VariableSpace varSpace = getVariableSpace();\n      ExtTextbox tempBox;\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-hadoopjob-name\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-map-transformation\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-map-input-stepname\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = 
(ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-map-output-stepname\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-combiner-transformation\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-combiner-input-stepname\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-combiner-output-stepname\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-reduce-transformation\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-reduce-input-stepname\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"jobentry-reduce-output-stepname\" );\n      tempBox.setVariableSpace( varSpace );\n\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"input-path\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"output-path\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-input-format\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"classes-output-format\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"num-map-tasks\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) 
getXulDomContainer().getDocumentRoot().getElementById( \"num-reduce-tasks\" );\n      tempBox.setVariableSpace( varSpace );\n      tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"logging-interval\" );\n      tempBox.setVariableSpace( varSpace );\n\n      setCombinerTransInputStepName( jobEntry.getCombinerInputStepName() );\n      setCombinerTransOutputStepName( jobEntry.getCombinerOutputStepName() );\n      setCombiningSingleThreaded( jobEntry.isCombiningSingleThreaded() );\n\n      // Load the map transformation into the UI\n      if ( jobEntry.getMapTrans() != null || rep == null ) {\n        setMapTrans( jobEntry.getMapTrans() );\n      } else if ( jobEntry.getMapRepositoryReference() != null ) {\n        // Load the repository directory and file for displaying to the user\n        try {\n          TransMeta transMeta = rep.loadTransformation( jobEntry.getMapRepositoryReference(), null );\n          if ( transMeta != null && transMeta.getRepositoryDirectory() != null ) {\n            setMapTrans( buildRepositoryPath( transMeta.getRepositoryDirectory().getPath(), transMeta.getName() ) );\n          }\n        } catch ( KettleException e ) {\n          // The transformation cannot be loaded from the repository\n          setMapTrans( null );\n        }\n      } else {\n        setMapTrans( buildRepositoryPath( jobEntry.getMapRepositoryDir(), jobEntry.getMapRepositoryFile() ) );\n      }\n      setMapTransInputStepName( jobEntry.getMapInputStepName() );\n      setMapTransOutputStepName( jobEntry.getMapOutputStepName() );\n\n      // Load the combiner transformation into the UI\n      if ( jobEntry.getCombinerTrans() != null || rep == null ) {\n        setCombinerTrans( jobEntry.getCombinerTrans() );\n      } else if ( jobEntry.getCombinerRepositoryReference() != null ) {\n        // Load the repository directory and file for displaying to the user\n        try {\n          TransMeta transMeta = rep.loadTransformation( 
jobEntry.getCombinerRepositoryReference(), null );\n          if ( transMeta != null && transMeta.getRepositoryDirectory() != null ) {\n            setCombinerTrans( buildRepositoryPath( transMeta.getRepositoryDirectory().getPath(), transMeta.getName() ) );\n          }\n        } catch ( KettleException e ) {\n          // The transformation cannot be loaded from the repository\n          setCombinerTrans( null );\n        }\n      } else {\n        setCombinerTrans( buildRepositoryPath( jobEntry.getCombinerRepositoryDir(), jobEntry.getCombinerRepositoryFile() ) );\n      }\n\n      // Load the reduce transformation into the UI\n      if ( jobEntry.getReduceTrans() != null || rep == null ) {\n        setReduceTrans( jobEntry.getReduceTrans() );\n      } else if ( jobEntry.getReduceRepositoryReference() != null ) {\n        // Load the repository directory and file for displaying to the user\n        try {\n          TransMeta transMeta = rep.loadTransformation( jobEntry.getReduceRepositoryReference(), null );\n          if ( transMeta != null && transMeta.getRepositoryDirectory() != null ) {\n            setReduceTrans( buildRepositoryPath( transMeta.getRepositoryDirectory().getPath(), transMeta.getName() ) );\n          }\n        } catch ( KettleException e ) {\n          // The transformation cannot be loaded from the repository\n          setReduceTrans( null );\n        }\n      } else {\n        setReduceTrans( buildRepositoryPath( jobEntry.getReduceRepositoryDir(), jobEntry.getReduceRepositoryFile() ) );\n      }\n\n      setReduceTransInputStepName( jobEntry.getReduceInputStepName() );\n      setReduceTransOutputStepName( jobEntry.getReduceOutputStepName() );\n      setReducingSingleThreaded( jobEntry.isReducingSingleThreaded() );\n\n      userDefined.clear();\n      if ( jobEntry.getUserDefined() != null ) {\n        userDefined.addAll( jobEntry.getUserDefined() );\n      }\n      setBlocking( jobEntry.isBlocking() );\n      setLoggingInterval( 
jobEntry.getLoggingInterval() );\n      setInputPath( jobEntry.getInputPath() );\n      setInputFormatClass( jobEntry.getInputFormatClass() );\n      setOutputPath( jobEntry.getOutputPath() );\n      setCleanOutputPath( jobEntry.isCleanOutputPath() );\n\n      setSuppressOutputOfMapKey( jobEntry.getSuppressOutputOfMapKey() );\n      setSuppressOutputOfMapValue( jobEntry.getSuppressOutputOfMapValue() );\n\n      setSuppressOutputOfKey( jobEntry.getSuppressOutputOfKey() );\n      setSuppressOutputOfValue( jobEntry.getSuppressOutputOfValue() );\n      setOutputFormatClass( jobEntry.getOutputFormatClass() );\n\n      selectedNamedCluster = jobEntry.getNamedCluster();\n\n      setNumMapTasks( jobEntry.getNumMapTasks() );\n      setNumReduceTasks( jobEntry.getNumReduceTasks() );\n      if ( Spoon.getInstance().getRepository() != null ) {\n        storageType = REPOSITORY;\n      } else {\n        storageType = LOCAL;\n      }\n    }\n  }\n\n  public void setShell( Shell shell ) {\n    this.shell = shell;\n  }\n\n  public void closeErrorDialog() {\n    XulDialog errorDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getElementById( \"hadoop-error-dialog\" );\n    errorDialog.hide();\n  }\n\n  public void setRepository( Repository rep ) {\n    this.rep = rep;\n  }\n\n  public void setJobMeta( JobMeta jobMeta ) {\n    this.jobMeta = jobMeta;\n  }\n\n  public void cancel() {\n    XulDialog xulDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getElementById( \"job-entry-dialog\" );\n\n    Shell shell = (Shell) xulDialog.getRootObject();\n    if ( !shell.isDisposed() ) {\n      WindowProperty winprop = new WindowProperty( shell );\n      PropsUI.getInstance().setScreen( winprop );\n      ( (Composite) xulDialog.getManagedObject() ).dispose();\n      shell.dispose();\n    }\n  }\n\n  private interface StringResultSetter {\n    public void set( String val );\n  }\n\n  private interface ObjectIdResultSetter {\n    public void set( ObjectId val );\n  }\n\n  
public void mapTransBrowse() {\n    if ( storageType.equalsIgnoreCase( LOCAL ) ) { //$NON-NLS-1$\n      browseLocalFilesystem( JobEntryHadoopTransJobExecutorController.this::setMapTrans, mapTrans );\n    } else if ( storageType.equalsIgnoreCase( REPOSITORY ) ) { //$NON-NLS-1$\n      browseRepository( JobEntryHadoopTransJobExecutorController.this::setMapTrans );\n    }\n  }\n\n  public void combinerTransBrowse() {\n    if ( storageType.equalsIgnoreCase( LOCAL ) ) { //$NON-NLS-1$\n      browseLocalFilesystem( JobEntryHadoopTransJobExecutorController.this::setCombinerTrans, combinerTrans );\n    } else if ( storageType.equalsIgnoreCase( REPOSITORY ) ) { //$NON-NLS-1$\n      browseRepository( JobEntryHadoopTransJobExecutorController.this::setCombinerTrans );\n    }\n  }\n\n  public void reduceTransBrowse() {\n    if ( storageType.equalsIgnoreCase( LOCAL ) ) { //$NON-NLS-1$\n      browseLocalFilesystem( JobEntryHadoopTransJobExecutorController.this::setReduceTrans, reduceTrans );\n    } else if ( storageType.equalsIgnoreCase( REPOSITORY ) ) { //$NON-NLS-1$\n      browseRepository( JobEntryHadoopTransJobExecutorController.this::setReduceTrans );\n    }\n  }\n\n  public void browseLocalFilesystem( StringResultSetter setter, String originalTransformationName ) {\n    Shell shell = getJobEntryDialog();\n\n    FileDialog dialog = new FileDialog( shell, SWT.OPEN );\n    dialog.setFilterExtensions( Const.STRING_TRANS_FILTER_EXT );\n    dialog.setFilterNames( Const.getTransformationFilterNames() );\n    String prevName = jobEntry.environmentSubstitute( originalTransformationName );\n    String parentFolder = null;\n    Spoon spoon = Spoon.getInstance();\n    try {\n      parentFolder =\n          KettleVFS.getFilename( KettleVFS.getInstance( spoon.getExecutionBowl() )\n              .getFileObject( jobEntry.environmentSubstitute( jobEntry.getFilename() ) ).getParent() );\n    } catch ( Exception e ) {\n      // ignore - the job filename's parent folder is only used as a default browse location\n    }\n    if ( !StringUtils.isEmpty( prevName ) ) {\n  
    try {\n        if ( KettleVFS.getInstance( spoon.getExecutionBowl() ).fileExists( prevName ) ) {\n          dialog.setFilterPath( KettleVFS.getFilename( KettleVFS.getInstance( spoon.getExecutionBowl() )\n              .getFileObject( prevName ).getParent() ) );\n        } else {\n\n          if ( !prevName.endsWith( \".ktr\" ) ) {\n            prevName =\n                \"${\" + Const.INTERNAL_VARIABLE_JOB_FILENAME_DIRECTORY + \"}/\" + Const.trim( originalTransformationName )\n                    + \".ktr\";\n          }\n          if ( KettleVFS.getInstance( spoon.getExecutionBowl() ).fileExists( prevName ) ) {\n            setter.set( prevName );\n            return;\n          }\n        }\n      } catch ( Exception e ) {\n        dialog.setFilterPath( parentFolder );\n      }\n    } else if ( !StringUtils.isEmpty( parentFolder ) ) {\n      dialog.setFilterPath( parentFolder );\n    }\n\n    String fname = dialog.open();\n    if ( fname != null ) {\n      File file = new File( fname );\n      String name = file.getName();\n      String parentFolderSelection = file.getParentFile().toString();\n\n      if ( !StringUtils.isEmpty( parentFolder ) && parentFolder.equals( parentFolderSelection ) ) {\n        setter.set( \"${\" + Const.INTERNAL_VARIABLE_JOB_FILENAME_DIRECTORY + \"}/\" + name );\n      } else {\n        setter.set( fname );\n      }\n    }\n  }\n\n  private void browseRepository( StringResultSetter transSetter ) {\n    if ( rep != null ) {\n      Shell shell = getJobEntryDialog();\n      SelectObjectDialog sod = new SelectObjectDialog( shell, rep, true, false );\n      String transname = sod.open();\n      if ( transname != null ) {\n        if ( transSetter != null ) {\n          transSetter.set( buildRepositoryPath( sod.getDirectory().getPath(), sod.getObjectName() ) );\n        }\n      }\n    }\n  }\n\n  /**\n   * Joins a repository directory and a file name into a single repository path.\n   *\n   * @param dir\n   *          the repository directory; if null, an empty string is returned\n   * @param file\n   *          the file name; if null, an empty string is returned\n   * @return the directory and file joined with a single \"/\" separator, or an empty string if either argument is null\n   */\n  private String buildRepositoryPath( String dir, String file ) {\n    if ( dir == null || file == null ) {\n      return \"\";\n    }\n\n    if ( dir.endsWith( \"/\" ) ) {\n      return dir + file;\n    }\n\n    return dir + \"/\" + file;\n  }\n\n  public void newUserDefinedItem() {\n    userDefined.add( new UserDefinedItem() );\n  }\n\n  public AbstractModelList<UserDefinedItem> getUserDefined() {\n    return userDefined;\n  }\n\n  @Override\n  public String getName() {\n    return \"jobEntryController\"; //$NON-NLS-1$\n  }\n\n  public String getJobEntryName() {\n    return jobEntryName;\n  }\n\n  public void setJobEntryName( String jobEntryName ) {\n    String previousVal = this.jobEntryName;\n    String newVal = jobEntryName;\n\n    this.jobEntryName = jobEntryName;\n    firePropertyChange( JobEntryHadoopTransJobExecutorController.JOB_ENTRY_NAME, previousVal, newVal );\n  }\n\n  public String getHadoopJobName() {\n    return hadoopJobName;\n  }\n\n  public void setHadoopJobName( String hadoopJobName ) {\n    String previousVal = this.hadoopJobName;\n    String newVal = hadoopJobName;\n\n    this.hadoopJobName = hadoopJobName;\n    firePropertyChange( JobEntryHadoopTransJobExecutorController.HADOOP_JOB_NAME, previousVal, newVal );\n  }\n\n  public String getMapTrans() {\n    return mapTrans;\n  }\n\n  public void setMapTrans( String mapTrans ) {\n    String previousVal = this.mapTrans;\n    String newVal = mapTrans;\n\n    this.mapTrans = mapTrans;\n    firePropertyChange( JobEntryHadoopTransJobExecutorController.MAP_TRANS, previousVal, newVal );\n  }\n\n  public String getCombinerTrans() {\n    return combinerTrans;\n  }\n\n  public void setCombinerTrans( String combinerTrans ) {\n    String previousVal = this.combinerTrans;\n    String newVal = combinerTrans;\n\n    this.combinerTrans = combinerTrans;\n    firePropertyChange( 
JobEntryHadoopTransJobExecutorController.COMBINER_TRANS, previousVal, newVal );\n  }\n\n  public String getReduceTrans() {\n    return reduceTrans;\n  }\n\n  public void setReduceTrans( String reduceTrans ) {\n    String previousVal = this.reduceTrans;\n    String newVal = reduceTrans;\n\n    this.reduceTrans = reduceTrans;\n    firePropertyChange( JobEntryHadoopTransJobExecutorController.REDUCE_TRANS, previousVal, newVal );\n  }\n\n  public String getMapTransInputStepName() {\n    return mapTransInputStepName;\n  }\n\n  public void setMapTransInputStepName( String mapTransInputStepName ) {\n    String previousVal = this.mapTransInputStepName;\n    String newVal = mapTransInputStepName;\n\n    this.mapTransInputStepName = mapTransInputStepName;\n    firePropertyChange( JobEntryHadoopTransJobExecutorController.MAP_TRANS_INPUT_STEP_NAME, previousVal, newVal );\n  }\n\n  public String getMapTransOutputStepName() {\n    return mapTransOutputStepName;\n  }\n\n  public void setMapTransOutputStepName( String mapTransOutputStepName ) {\n    String previousVal = this.mapTransOutputStepName;\n    String newVal = mapTransOutputStepName;\n\n    this.mapTransOutputStepName = mapTransOutputStepName;\n    firePropertyChange( JobEntryHadoopTransJobExecutorController.MAP_TRANS_OUTPUT_STEP_NAME, previousVal, newVal );\n  }\n\n  public String getCombinerTransInputStepName() {\n    return combinerTransInputStepName;\n  }\n\n  public void setCombinerTransInputStepName( String combinerTransInputStepName ) {\n    String previousVal = this.combinerTransInputStepName;\n    String newVal = combinerTransInputStepName;\n\n    this.combinerTransInputStepName = combinerTransInputStepName;\n    firePropertyChange( JobEntryHadoopTransJobExecutorController.COMBINER_TRANS_INPUT_STEP_NAME, previousVal, newVal );\n  }\n\n  public String getCombinerTransOutputStepName() {\n    return combinerTransOutputStepName;\n  }\n\n  public void setCombinerTransOutputStepName( String combinerTransOutputStepName ) 
{\n    String previousVal = this.combinerTransOutputStepName;\n    String newVal = combinerTransOutputStepName;\n\n    this.combinerTransOutputStepName = combinerTransOutputStepName;\n    firePropertyChange( JobEntryHadoopTransJobExecutorController.COMBINER_TRANS_OUTPUT_STEP_NAME, previousVal, newVal );\n  }\n\n  public String getReduceTransInputStepName() {\n    return reduceTransInputStepName;\n  }\n\n  public void setReduceTransInputStepName( String reduceTransInputStepName ) {\n    String previousVal = this.reduceTransInputStepName;\n    String newVal = reduceTransInputStepName;\n\n    this.reduceTransInputStepName = reduceTransInputStepName;\n    firePropertyChange( JobEntryHadoopTransJobExecutorController.REDUCE_TRANS_INPUT_STEP_NAME, previousVal, newVal );\n  }\n\n  public String getReduceTransOutputStepName() {\n    return reduceTransOutputStepName;\n  }\n\n  public void setReduceTransOutputStepName( String reduceTransOutputStepName ) {\n    String previousVal = this.reduceTransOutputStepName;\n    String newVal = reduceTransOutputStepName;\n\n    this.reduceTransOutputStepName = reduceTransOutputStepName;\n    firePropertyChange( JobEntryHadoopTransJobExecutorController.REDUCE_TRANS_OUTPUT_STEP_NAME, previousVal, newVal );\n  }\n\n  public void invertBlocking() {\n    setBlocking( !isBlocking() );\n  }\n\n  public JobEntryHadoopTransJobExecutor getJobEntry() {\n    return jobEntry;\n  }\n\n  public void setJobEntry( JobEntryHadoopTransJobExecutor jobEntry ) {\n    this.jobEntry = jobEntry;\n  }\n\n  public void invertSuppressOutputOfMapKey() {\n    setSuppressOutputOfMapKey( !isSuppressOutputOfMapKey() );\n  }\n\n  public boolean isSuppressOutputOfMapKey() {\n    return this.suppressOutputMapKey;\n  }\n\n  public void setSuppressOutputOfMapKey( boolean suppress ) {\n    boolean previousVal = this.suppressOutputMapKey;\n    boolean newVal = suppress;\n\n    this.suppressOutputMapKey = suppress;\n    firePropertyChange( SUPPRESS_OUTPUT_MAP_KEY, previousVal, 
newVal );\n  }\n\n  public void invertSuppressOutputOfMapValue() {\n    setSuppressOutputOfMapValue( !isSuppressOutputOfMapValue() );\n  }\n\n  public boolean isSuppressOutputOfMapValue() {\n    return this.suppressOutputMapValue;\n  }\n\n  public void setSuppressOutputOfMapValue( boolean suppress ) {\n    boolean previousVal = this.suppressOutputMapValue;\n    boolean newVal = suppress;\n\n    this.suppressOutputMapValue = suppress;\n    firePropertyChange( SUPPRESS_OUTPUT_MAP_VALUE, previousVal, newVal );\n  }\n\n  public void invertSuppressOutputOfKey() {\n    setSuppressOutputOfKey( !isSuppressOutputOfKey() );\n  }\n\n  public boolean isSuppressOutputOfKey() {\n    return this.suppressOutputKey;\n  }\n\n  public void setSuppressOutputOfKey( boolean suppress ) {\n    boolean previousVal = this.suppressOutputKey;\n    boolean newVal = suppress;\n\n    this.suppressOutputKey = suppress;\n    firePropertyChange( SUPPRESS_OUTPUT_KEY, previousVal, newVal );\n  }\n\n  public void invertSuppressOutputOfValue() {\n    setSuppressOutputOfValue( !isSuppressOutputOfValue() );\n  }\n\n  public boolean isSuppressOutputOfValue() {\n    return this.suppressOutputValue;\n  }\n\n  public void setSuppressOutputOfValue( boolean suppress ) {\n    boolean previousVal = this.suppressOutputValue;\n    boolean newVal = suppress;\n\n    this.suppressOutputValue = suppress;\n    firePropertyChange( SUPPRESS_OUTPUT_VALUE, previousVal, newVal );\n  }\n\n  public String getInputFormatClass() {\n    return inputFormatClass;\n  }\n\n  public void setInputFormatClass( String inputFormatClass ) {\n    String previousVal = this.inputFormatClass;\n    String newVal = inputFormatClass;\n\n    this.inputFormatClass = inputFormatClass;\n    firePropertyChange( INPUT_FORMAT_CLASS, previousVal, newVal );\n  }\n\n  public String getOutputFormatClass() {\n    return outputFormatClass;\n  }\n\n  public void setOutputFormatClass( String outputFormatClass ) {\n    String previousVal = 
this.outputFormatClass;\n    String newVal = outputFormatClass;\n\n    this.outputFormatClass = outputFormatClass;\n    firePropertyChange( OUTPUT_FORMAT_CLASS, previousVal, newVal );\n  }\n\n  public String getInputPath() {\n    return inputPath;\n  }\n\n  public void setInputPath( String inputPath ) {\n    String previousVal = this.inputPath;\n    String newVal = inputPath;\n\n    this.inputPath = inputPath;\n    firePropertyChange( INPUT_PATH, previousVal, newVal );\n  }\n\n  public String getOutputPath() {\n    return outputPath;\n  }\n\n  public void setOutputPath( String outputPath ) {\n    String previousVal = this.outputPath;\n    String newVal = outputPath;\n\n    this.outputPath = outputPath;\n    firePropertyChange( OUTPUT_PATH, previousVal, newVal );\n  }\n\n  public void invertCleanOutputPath() {\n    setCleanOutputPath( !isCleanOutputPath() );\n  }\n\n  public boolean isCleanOutputPath() {\n    return cleanOutputPath;\n  }\n\n  public void setCleanOutputPath( boolean cleanOutputPath ) {\n    boolean old = this.cleanOutputPath;\n    this.cleanOutputPath = cleanOutputPath;\n    firePropertyChange( CLEAN_OUTPUT_PATH, old, this.cleanOutputPath );\n  }\n\n  public boolean isBlocking() {\n    return blocking;\n  }\n\n  public void setBlocking( boolean blocking ) {\n    boolean previousVal = this.blocking;\n    boolean newVal = blocking;\n\n    this.blocking = blocking;\n    firePropertyChange( BLOCKING, previousVal, newVal );\n  }\n\n  public void setReducingSingleThreaded( boolean reducingSingleThreaded ) {\n    boolean previousVal = this.reducingSingleThreaded;\n    boolean newVal = reducingSingleThreaded;\n\n    this.reducingSingleThreaded = reducingSingleThreaded;\n    firePropertyChange( REDUCING_SINGLE_THREADED, previousVal, newVal );\n  }\n\n  public String getLoggingInterval() {\n    return loggingInterval;\n  }\n\n  public void setLoggingInterval( String loggingInterval ) {\n    String previousVal = this.loggingInterval;\n    String newVal = 
loggingInterval;\n\n    this.loggingInterval = loggingInterval;\n    firePropertyChange( LOGGING_INTERVAL, previousVal, newVal );\n  }\n\n  public String getNumMapTasks() {\n    return numMapTasks;\n  }\n\n  public void setNumMapTasks( String numMapTasks ) {\n    String previousVal = this.numMapTasks;\n    String newVal = numMapTasks;\n\n    this.numMapTasks = numMapTasks;\n    firePropertyChange( NUM_MAP_TASKS, previousVal, newVal );\n  }\n\n  public String getNumReduceTasks() {\n    return numReduceTasks;\n  }\n\n  public void setNumReduceTasks( String numReduceTasks ) {\n    String previousVal = this.numReduceTasks;\n    String newVal = numReduceTasks;\n\n    this.numReduceTasks = numReduceTasks;\n    firePropertyChange( NUM_REDUCE_TASKS, previousVal, newVal );\n  }\n\n  public List<NamedCluster> getNamedClusters() throws MetaStoreException {\n    return namedClusterService.list( jobMeta.getMetaStore() );\n  }\n\n  public void setNamedClusters( List<NamedCluster> namedClusters ) {\n    this.namedClusters = namedClusters;\n  }\n\n  public void openErrorDialog( String title, String message ) {\n    XulDialog errorDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getElementById( \"hadoop-error-dialog\" );\n    errorDialog.setTitle( title );\n\n    XulTextbox errorMessage =\n        (XulTextbox) getXulDomContainer().getDocumentRoot().getElementById( \"hadoop-error-message\" );\n    errorMessage.setValue( message );\n\n    errorDialog.show();\n  }\n\n  public void invertReducingSingleThreaded() {\n    setReducingSingleThreaded( !isReducingSingleThreaded() );\n  }\n\n  public boolean isReducingSingleThreaded() {\n    return reducingSingleThreaded;\n  }\n\n  public void invertCombiningSingleThreaded() {\n    setCombiningSingleThreaded( !isCombiningSingleThreaded() );\n  }\n\n  public boolean isCombiningSingleThreaded() {\n    return combiningSingleThreaded;\n  }\n\n  public void setCombiningSingleThreaded( boolean combiningSingleThreaded ) {\n    boolean old 
= this.combiningSingleThreaded;\n    this.combiningSingleThreaded = combiningSingleThreaded;\n    firePropertyChange( COMBINING_SINGLE_THREADED, old, this.combiningSingleThreaded );\n  }\n\n  public void help() {\n    Shell shell = getJobEntryDialog();\n    PluginInterface plugin =\n        PluginRegistry.getInstance().findPluginWithId( JobEntryPluginType.class, jobEntry.getPluginId() );\n    HelpUtils.openHelpDialog( shell, plugin );\n  }\n\n  public void editNamedCluster() throws MetaStoreException {\n    if ( isSelectedNamedCluster() ) {\n      String newNcName = ncDelegate.editNamedCluster( null, getSelectedNamedCluster(), getJobEntryDialog() );\n      if ( newNcName != null ) {\n        // a non-null name means the edit was confirmed, so refresh the clusters list and the selection\n        namedClustersChanged();\n        selectedNamedClusterChanged( getNamedClusterName( getSelectedNamedCluster() ), newNcName );\n      }\n    }\n  }\n\n  public void newNamedCluster() throws MetaStoreException {\n    String newNcName = ncDelegate.newNamedCluster( jobMeta, null, getJobEntryDialog() );\n    if ( newNcName != null ) {\n      // a non-null name means the dialog was confirmed, so refresh the clusters list and the selection\n      namedClustersChanged();\n      selectedNamedClusterChanged( getNamedClusterName( getSelectedNamedCluster() ), newNcName );\n    }\n  }\n\n  private Shell getJobEntryDialog() {\n    XulDialog xulDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getElementById( \"job-entry-dialog\" );\n    Shell shell = (Shell) xulDialog.getRootObject();\n    return shell;\n  }\n\n  private String getNamedClusterName( NamedCluster namedCluster ) {\n    return namedCluster != null ? 
namedCluster.getName() : null;\n  }\n\n  /**\n   * Reports that the named clusters list has been changed.\n   *\n   * @throws MetaStoreException\n   *           if the clusters cannot be read from the metastore\n   */\n  @VisibleForTesting\n  void namedClustersChanged() throws MetaStoreException {\n    firePropertyChange( \"namedClusters\", null, getNamedClusters() );\n  }\n\n  /**\n   * Reports that the selected named cluster has been changed.\n   *\n   * @param ncVal\n   *          the old value of the selected named cluster\n   * @param newNcVal\n   *          the new value of the selected named cluster\n   * @throws MetaStoreException\n   *           if the clusters cannot be read from the metastore\n   */\n  @VisibleForTesting\n  void selectedNamedClusterChanged( String ncVal, String newNcVal ) throws MetaStoreException {\n    if ( newNcVal != null ) {\n      ncVal = newNcVal;\n    }\n    if ( ncVal != null ) {\n      for ( NamedCluster nc : getNamedClusters() ) {\n        if ( nc.getName().equals( ncVal ) ) {\n          firePropertyChange( \"selectedNamedCluster\", null, nc );\n          return;\n        }\n      }\n    }\n  }\n\n  public void setSelectedNamedCluster( NamedCluster namedCluster ) {\n    this.selectedNamedCluster = namedCluster;\n  }\n\n  public NamedCluster getSelectedNamedCluster() {\n    return this.selectedNamedCluster;\n  }\n\n  public boolean isSelectedNamedCluster() {\n    return this.selectedNamedCluster != null;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/ui/entry/pmr/JobEntryHadoopTransJobExecutorDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.ui.entry.pmr;\n\nimport org.dom4j.DocumentException;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.pmr.JobEntryHadoopTransJobExecutor;\nimport org.pentaho.big.data.plugins.common.ui.HadoopClusterDelegateImpl;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.job.entry.JobEntryDialogInterface;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.ui.job.entry.JobEntryDialog;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.XulRunner;\nimport org.pentaho.ui.xul.binding.Binding.Type;\nimport org.pentaho.ui.xul.binding.BindingConvertor;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.ui.xul.binding.DefaultBindingFactory;\nimport org.pentaho.ui.xul.components.XulMenuList;\nimport org.pentaho.ui.xul.components.XulTextbox;\nimport org.pentaho.ui.xul.containers.XulDialog;\nimport org.pentaho.ui.xul.containers.XulTree;\nimport org.pentaho.ui.xul.swt.SwtXulLoader;\nimport org.pentaho.ui.xul.swt.SwtXulRunner;\n\nimport java.util.Collections;\nimport java.util.Enumeration;\nimport java.util.List;\nimport java.util.ResourceBundle;\n\n@PluginDialog( 
id = \"HadoopTransJobExecutorPlugin\", image = \"HDT.svg\", pluginType = PluginDialog.PluginType.JOBENTRY,\n        documentationUrl = \"pdi-job-entries-reference-overview/pentaho-mapreduce\" )\npublic class JobEntryHadoopTransJobExecutorDialog extends JobEntryDialog implements JobEntryDialogInterface {\n  private static final Class<?> CLZ = JobEntryHadoopTransJobExecutor.class;\n\n  private JobEntryHadoopTransJobExecutor jobEntry;\n\n  private final JobEntryHadoopTransJobExecutorController controller;\n\n  private XulDomContainer container;\n\n  private BindingFactory bf;\n\n  private ResourceBundle bundle = new ResourceBundle() {\n    @Override\n    public Enumeration<String> getKeys() {\n      return null;\n    }\n\n    @Override\n    protected Object handleGetObject( String key ) {\n      return BaseMessages.getString( CLZ, key );\n    }\n  };\n\n  public JobEntryHadoopTransJobExecutorDialog(\n      Shell parent, JobEntryInterface jobEntry, Repository rep, JobMeta jobMeta )\n    throws XulException, DocumentException, Throwable {\n    super( parent, jobEntry, rep, jobMeta );\n\n    this.jobEntry = (JobEntryHadoopTransJobExecutor) jobEntry;\n    controller = new JobEntryHadoopTransJobExecutorController( new HadoopClusterDelegateImpl( Spoon\n      .getInstance(), this.jobEntry.getNamedClusterService(),\n      this.jobEntry.getRuntimeTestActionService(), this.jobEntry.getRuntimeTester() ), this.jobEntry.getNamedClusterService() );\n\n    SwtXulLoader swtXulLoader = new SwtXulLoader();\n    swtXulLoader.registerClassLoader( getClass().getClassLoader() );\n    swtXulLoader.register( \"VARIABLETEXTBOX\", \"org.pentaho.di.ui.core.database.dialog.tags.ExtTextbox\" );\n    swtXulLoader.setOuterContext( shell );\n\n    container =\n        swtXulLoader.loadXul(\n            \"org/pentaho/big/data/kettle/plugins/mapreduce/ui/entry/JobEntryHadoopTransJobExecutorDialog.xul\", bundle ); //$NON-NLS-1$\n\n    final XulRunner runner = new SwtXulRunner();\n    
runner.addContainer( container );\n\n    container.addEventHandler( controller );\n\n    bf = new DefaultBindingFactory();\n    bf.setDocument( container.getDocumentRoot() );\n    bf.setBindingType( Type.BI_DIRECTIONAL );\n\n    bf.createBinding( \"jobentry-name\", \"value\", controller, JobEntryHadoopTransJobExecutorController.JOB_ENTRY_NAME ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"jobentry-hadoopjob-name\", \"value\", controller,\n        JobEntryHadoopTransJobExecutorController.HADOOP_JOB_NAME ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"jobentry-map-transformation\", \"value\", controller,\n        JobEntryHadoopTransJobExecutorController.MAP_TRANS ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"jobentry-combiner-transformation\", \"value\", controller,\n        JobEntryHadoopTransJobExecutorController.COMBINER_TRANS ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"jobentry-reduce-transformation\", \"value\", controller,\n        JobEntryHadoopTransJobExecutorController.REDUCE_TRANS ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    bf.createBinding( \"jobentry-map-input-stepname\", \"value\", controller,\n        JobEntryHadoopTransJobExecutorController.MAP_TRANS_INPUT_STEP_NAME ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"jobentry-map-output-stepname\", \"value\", controller,\n        JobEntryHadoopTransJobExecutorController.MAP_TRANS_OUTPUT_STEP_NAME ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"jobentry-combiner-input-stepname\", \"value\", controller,\n        JobEntryHadoopTransJobExecutorController.COMBINER_TRANS_INPUT_STEP_NAME ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"jobentry-combiner-output-stepname\", \"value\", controller,\n        JobEntryHadoopTransJobExecutorController.COMBINER_TRANS_OUTPUT_STEP_NAME ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"jobentry-combiner-single-threaded\", \"selected\", controller,\n        
JobEntryHadoopTransJobExecutorController.COMBINING_SINGLE_THREADED ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"jobentry-reduce-input-stepname\", \"value\", controller,\n        JobEntryHadoopTransJobExecutorController.REDUCE_TRANS_INPUT_STEP_NAME ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"jobentry-reduce-output-stepname\", \"value\", controller,\n        JobEntryHadoopTransJobExecutorController.REDUCE_TRANS_OUTPUT_STEP_NAME ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"jobentry-reduce-single-threaded\", \"selected\", controller,\n        JobEntryHadoopTransJobExecutorController.REDUCING_SINGLE_THREADED ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    bf.createBinding( \"classes-suppress-output-map-key\", \"selected\", controller,\n        JobEntryHadoopTransJobExecutorController.SUPPRESS_OUTPUT_MAP_KEY ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"classes-suppress-output-map-value\", \"selected\", controller,\n        JobEntryHadoopTransJobExecutorController.SUPPRESS_OUTPUT_MAP_VALUE ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    bf.createBinding( \"classes-suppress-output-key\", \"selected\", controller,\n        JobEntryHadoopTransJobExecutorController.SUPPRESS_OUTPUT_KEY ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"classes-suppress-output-value\", \"selected\", controller,\n        JobEntryHadoopTransJobExecutorController.SUPPRESS_OUTPUT_VALUE ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    bf.createBinding( \"classes-input-format\", \"value\", controller,\n        JobEntryHadoopTransJobExecutorController.INPUT_FORMAT_CLASS ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"classes-output-format\", \"value\", controller,\n        JobEntryHadoopTransJobExecutorController.OUTPUT_FORMAT_CLASS ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    /*\n     * final BindingConvertor<String, Integer> bindingConverter = new BindingConvertor<String, Integer>() {\n     *\n     * public Integer sourceToTarget(String value) { return Integer.parseInt(value); 
}\n     *\n     * public String targetToSource(Integer value) { return value.toString(); }\n     *\n     * };\n     */\n\n    bf.createBinding( \"num-map-tasks\", \"value\", controller, JobEntryHadoopTransJobExecutorController.NUM_MAP_TASKS ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"num-reduce-tasks\", \"value\", controller,\n        JobEntryHadoopTransJobExecutorController.NUM_REDUCE_TASKS ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    bf.createBinding( \"blocking\", \"selected\", controller, JobEntryHadoopTransJobExecutorController.BLOCKING ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"logging-interval\", \"value\", controller,\n        JobEntryHadoopTransJobExecutorController.LOGGING_INTERVAL ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"input-path\", \"value\", controller, JobEntryHadoopTransJobExecutorController.INPUT_PATH ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"output-path\", \"value\", controller, JobEntryHadoopTransJobExecutorController.OUTPUT_PATH ); //$NON-NLS-1$ //$NON-NLS-2$\n    bf.createBinding( \"clean-output-path\", \"selected\", controller,\n        JobEntryHadoopTransJobExecutorController.CLEAN_OUTPUT_PATH ); //$NON-NLS-1$ //$NON-NLS-2$\n\n    XulTree variablesTree = (XulTree) container.getDocumentRoot().getElementById( \"fields-table\" ); //$NON-NLS-1$\n    bf.setBindingType( Type.ONE_WAY );\n    bf.createBinding( controller.getUserDefined(), \"children\", variablesTree, \"elements\" ); //$NON-NLS-1$//$NON-NLS-2$\n    bf.setBindingType( Type.BI_DIRECTIONAL );\n\n    XulTextbox loggingInterval = (XulTextbox) container.getDocumentRoot().getElementById( \"logging-interval\" ); //$NON-NLS-1$\n    loggingInterval.setValue( \"\" + controller.getLoggingInterval() ); //$NON-NLS-1$\n\n    XulTextbox mapTasks = (XulTextbox) container.getDocumentRoot().getElementById( \"num-map-tasks\" ); //$NON-NLS-1$\n    mapTasks.setValue( \"\" + controller.getNumMapTasks() ); //$NON-NLS-1$\n\n    XulTextbox reduceTasks = 
(XulTextbox) container.getDocumentRoot().getElementById( \"num-reduce-tasks\" ); //$NON-NLS-1$\n    reduceTasks.setValue( \"\" + controller.getNumReduceTasks() ); //$NON-NLS-1$\n\n    controller.setJobEntry( (JobEntryHadoopTransJobExecutor) jobEntry );\n    controller.setShell( parent );\n    controller.setRepository( rep );\n    controller.setJobMeta( jobMeta );\n    controller.init();\n\n    bf.createBinding( controller, \"namedClusters\", \"named-clusters\", \"elements\" ).fireSourceChanged();\n    bf.createBinding( \"named-clusters\", \"selectedIndex\", controller, \"selectedNamedCluster\", new BindingConvertor<Integer, NamedCluster>() {\n      public NamedCluster sourceToTarget( final Integer index ) {\n        List<NamedCluster> clusters = Collections.emptyList();\n        try {\n          clusters = controller.getNamedClusters();\n        } catch ( MetaStoreException e ) {\n          // Ignore\n        }\n        if ( index == -1 || clusters.isEmpty() ) {\n          return null;\n        }\n        return clusters.get( index );\n      }\n\n      public Integer targetToSource( final NamedCluster value ) {\n        List<NamedCluster> clusters = Collections.emptyList();\n        try {\n          clusters = controller.getNamedClusters();\n        } catch ( MetaStoreException e ) {\n          // Ignore\n        }\n        return clusters.indexOf( value );\n      }\n    } ).fireSourceChanged();\n\n    selectNamedCluster();\n\n  }\n\n  private void selectNamedCluster() throws MetaStoreException {\n    @SuppressWarnings( \"unchecked\" )\n    XulMenuList<NamedCluster> namedClusterMenu = (XulMenuList<NamedCluster>) container.getDocumentRoot().getElementById( \"named-clusters\" ); //$NON-NLS-1$\n    String cn = null;\n    NamedCluster namedCluster = jobEntry.getNamedCluster();\n    if ( namedCluster != null ) {\n      cn = namedCluster.getName();\n    }\n    for ( NamedCluster nc : controller.getNamedClusters() ) {\n      if ( cn != null && cn.equals( nc.getName() ) ) 
{\n        namedClusterMenu.setSelectedItem( nc );\n        controller.setSelectedNamedCluster( nc );\n      }\n    }\n  }\n\n  public JobEntryInterface open() {\n    XulDialog dialog = (XulDialog) container.getDocumentRoot().getElementById( \"job-entry-dialog\" ); //$NON-NLS-1$\n    dialog.show();\n    return jobEntry;\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/ui/step/enter/HadoopEnterDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.ui.step.enter;\n\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.step.enter.HadoopEnterMeta;\nimport org.pentaho.di.core.row.ValueMeta;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDialogInterface;\nimport org.pentaho.di.ui.trans.step.BaseStepXulDialog;\nimport org.pentaho.ui.xul.binding.Binding;\nimport org.pentaho.ui.xul.components.XulMenuList;\nimport org.pentaho.ui.xul.components.XulTextbox;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\n@PluginDialog( id = \"HadoopEnterPlugin\", image = \"MRI.svg\", pluginType = PluginDialog.PluginType.STEP,\n  documentationUrl = \"pdi-transformation-steps-reference-overview/mapreduce-input\" )\npublic class HadoopEnterDialog extends BaseStepXulDialog implements StepDialogInterface {\n  private static final Class<?> PKG = HadoopEnterMeta.class;\n\n  private String workingStepname;\n\n  private HadoopEnterMetaMapper metaMapper;\n\n  private List<String> typeList;\n\n  public HadoopEnterDialog( Shell parent, Object in, TransMeta tr, String sname ) throws Throwable {\n    super( \"org/pentaho/big/data/kettle/plugins/mapreduce/ui/step/enter/dialog.xul\", parent, (BaseStepMeta) in, tr, sname );\n\n    typeList = new ArrayList<String>();\n    for ( String type : ValueMeta.getAllTypes() ) {\n     
 typeList.add( type );\n    }\n\n    init();\n  }\n\n  public void init() throws Throwable {\n    workingStepname = stepname;\n\n    metaMapper = new HadoopEnterMetaMapper();\n    metaMapper.loadMeta( (HadoopEnterMeta) baseStepMeta );\n\n    bf.setBindingType( Binding.Type.ONE_WAY );\n\n    setTextBoxValue( \"input-key-length\", metaMapper.getInKeyLength() );\n    setTextBoxValue( \"input-key-precision\", metaMapper.getInKeyPrecision() );\n    setTextBoxValue( \"input-value-length\", metaMapper.getInValueLength() );\n    setTextBoxValue( \"input-value-precision\", metaMapper.getInValuePrecision() );\n\n    bf.createBinding( \"step-name\", \"value\", this, \"stepName\" );\n    bf.createBinding( this, \"stepName\", \"step-name\", \"value\" ).fireSourceChanged();\n    bf.createBinding( this, \"types\", \"input-key-type\", \"elements\" ).fireSourceChanged();\n    bf.createBinding( this, \"types\", \"input-value-type\", \"elements\" ).fireSourceChanged();\n\n    if ( metaMapper.getInKeyType() >= 0 ) {\n      ( (XulMenuList<String>) getXulDomContainer().getDocumentRoot().getElementById( \"input-key-type\" ) )\n          .setSelectedItem( ValueMeta.getTypeDesc( metaMapper.getInKeyType() ) );\n    }\n    if ( metaMapper.getInValueType() >= 0 ) {\n      ( (XulMenuList<String>) getXulDomContainer().getDocumentRoot().getElementById( \"input-value-type\" ) )\n          .setSelectedItem( ValueMeta.getTypeDesc( metaMapper.getInValueType() ) );\n    }\n  }\n\n  @Override\n  protected Class<?> getClassForMessages() {\n    return HadoopEnterMeta.class;\n  }\n\n  @Override\n  public void onAccept() {\n    metaMapper.setInKeyType( fetchValue( (XulMenuList<?>) getXulDomContainer().getDocumentRoot().getElementById(\n        \"input-key-type\" ) ) );\n    metaMapper.setInKeyLength( fetchValue( (XulTextbox) getXulDomContainer().getDocumentRoot().getElementById(\n        \"input-key-length\" ) ) );\n    metaMapper.setInKeyPrecision( fetchValue( (XulTextbox) 
getXulDomContainer().getDocumentRoot().getElementById(\n        \"input-key-precision\" ) ) );\n\n    metaMapper.setInValueType( fetchValue( (XulMenuList<?>) getXulDomContainer().getDocumentRoot().getElementById(\n        \"input-value-type\" ) ) );\n    metaMapper.setInValueLength( fetchValue( (XulTextbox) getXulDomContainer().getDocumentRoot().getElementById(\n        \"input-value-length\" ) ) );\n    metaMapper.setInValuePrecision( fetchValue( (XulTextbox) getXulDomContainer().getDocumentRoot().getElementById(\n        \"input-value-precision\" ) ) );\n\n    if ( !workingStepname.equals( stepname ) ) {\n      stepname = workingStepname;\n      baseStepMeta.setChanged();\n    }\n\n    metaMapper.saveMeta( (HadoopEnterMeta) baseStepMeta );\n    dispose();\n  }\n\n  private int fetchValue( XulTextbox textbox ) {\n    int result = -1;\n\n    if ( textbox != null && !StringUtil.isEmpty( textbox.getValue() ) ) {\n      try {\n        result = Integer.parseInt( textbox.getValue() );\n      } catch ( NumberFormatException e ) {\n        log.logError( BaseMessages.getString( PKG, \"HadoopEnter.Error.ParseInteger\", textbox.getValue() ) );\n      }\n    }\n\n    return result;\n  }\n\n  private int fetchValue( XulMenuList<?> menulist ) {\n    int result = -1;\n\n    if ( menulist != null && menulist.getValue() != null ) {\n      result = ValueMeta.getType( menulist.getValue() );\n    }\n\n    return result;\n  }\n\n  private void setTextBoxValue( String textbox, int value ) {\n    String v = \"\";\n\n    if ( value >= 0 ) {\n      v = Integer.toString( value );\n    }\n\n    ( (XulTextbox) getXulDomContainer().getDocumentRoot().getElementById( textbox ) ).setValue( v );\n  }\n\n  @Override\n  public void onCancel() {\n    setStepName( null );\n    dispose();\n  }\n\n  public void setStepName( String stepname ) {\n    workingStepname = stepname;\n  }\n\n  public String getStepName() {\n    return workingStepname;\n  }\n\n  public List<String> getTypes() {\n    return typeList;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/ui/step/enter/HadoopEnterMetaMapper.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.ui.step.enter;\n\nimport org.pentaho.big.data.kettle.plugins.mapreduce.step.enter.HadoopEnterMeta;\nimport org.pentaho.ui.xul.XulEventSourceAdapter;\n\npublic class HadoopEnterMetaMapper extends XulEventSourceAdapter {\n\n  private class FieldPositions {\n    private int key;\n    private int value;\n\n    public FieldPositions( String[] fieldnames ) {\n      setKeyIndex( -1 );\n      setValueIndex( -1 );\n\n      // Determine the key and value field indices\n      if ( fieldnames != null && fieldnames.length == 2 ) {\n        for ( int index = 0; index < fieldnames.length; index++ ) {\n          if ( fieldnames[index].equals( HadoopEnterMeta.KEY_FIELDNAME ) ) {\n            setKeyIndex( index );\n          } else if ( fieldnames[index].equals( HadoopEnterMeta.VALUE_FIELDNAME ) ) {\n            setValueIndex( index );\n          }\n        }\n      }\n    }\n\n    public void setKeyIndex( int key ) {\n      this.key = key;\n    }\n\n    public int getKeyIndex() {\n      return key;\n    }\n\n    public void setValueIndex( int value ) {\n      this.value = value;\n    }\n\n    public int getValueIndex() {\n      return value;\n    }\n\n    public boolean isValid() {\n      return ( ( getKeyIndex() >= 0 && getValueIndex() >= 0 ) && getKeyIndex() != getValueIndex() );\n    }\n  }\n\n  public static String IN_KEY_TYPE = \"in-key-type\";\n  public static String IN_KEY_LENGTH = \"in-key-length\";\n  public static String IN_KEY_PRECISION = \"in-key-precision\";\n\n  public static String IN_VALUE_TYPE = 
\"in-value-type\";\n  public static String IN_VALUE_LENGTH = \"in-value-length\";\n  public static String IN_VALUE_PRECISION = \"in-value-precision\";\n\n  private int inKeyType = -1;\n  private int inKeyLength = -1;\n  private int inKeyPrecision = -1;\n\n  private int inValueType = -1;\n  private int inValueLength = -1;\n  private int inValuePrecision = -1;\n\n  public void setInKeyType( int arg ) {\n    int previousVal = inKeyType;\n    inKeyType = arg;\n    firePropertyChange( IN_KEY_TYPE, previousVal, inKeyType );\n  }\n\n  public void setInKeyLength( int arg ) {\n    int previousVal = inKeyLength;\n    inKeyLength = arg;\n    firePropertyChange( IN_KEY_LENGTH, previousVal, inKeyLength );\n  }\n\n  public void setInKeyPrecision( int arg ) {\n    int previousVal = inKeyPrecision;\n    inKeyPrecision = arg;\n    firePropertyChange( IN_KEY_PRECISION, previousVal, inKeyPrecision );\n  }\n\n  public void setInValueType( int arg ) {\n    int previousVal = inValueType;\n    inValueType = arg;\n    firePropertyChange( IN_VALUE_TYPE, previousVal, inValueType );\n  }\n\n  public void setInValueLength( int arg ) {\n    int previousVal = inValueLength;\n    inValueLength = arg;\n    firePropertyChange( IN_VALUE_LENGTH, previousVal, inValueLength );\n  }\n\n  public void setInValuePrecision( int arg ) {\n    int previousVal = inValuePrecision;\n    inValuePrecision = arg;\n    firePropertyChange( IN_VALUE_PRECISION, previousVal, inValuePrecision );\n  }\n\n  public int getInKeyType() {\n    return inKeyType;\n  }\n\n  public int getInKeyLength() {\n    return inKeyLength;\n  }\n\n  public int getInKeyPrecision() {\n    return inKeyPrecision;\n  }\n\n  public int getInValueType() {\n    return inValueType;\n  }\n\n  public int getInValueLength() {\n    return inValueLength;\n  }\n\n  public int getInValuePrecision() {\n    return inValuePrecision;\n  }\n\n  /**\n   * Load data into the MetaMapper from the HadoopExitMeta\n   * \n   * @param meta\n   */\n  public void 
loadMeta( HadoopEnterMeta meta ) {\n    FieldPositions fields = new FieldPositions( meta.getFieldname() );\n\n    if ( !fields.isValid() ) {\n      // We require both the key and value fields to be present\n      return;\n    }\n\n    int[] type = meta.getType();\n    int[] length = meta.getLength();\n    int[] precision = meta.getPrecision();\n\n    setInKeyType( type[fields.getKeyIndex()] );\n    setInKeyLength( length[fields.getKeyIndex()] );\n    setInKeyPrecision( precision[fields.getKeyIndex()] );\n\n    setInValueType( type[fields.getValueIndex()] );\n    setInValueLength( length[fields.getValueIndex()] );\n    setInValuePrecision( precision[fields.getValueIndex()] );\n  }\n\n  /**\n   * Save data from the MetaMapper into the HadoopExitMeta\n   * \n   * @param meta\n   */\n  public void saveMeta( HadoopEnterMeta meta ) {\n    // Set outKey\n    FieldPositions fields = new FieldPositions( meta.getFieldname() );\n\n    if ( !fields.isValid() ) {\n      // Replace the field names with the key / value names\n      meta.allocate( 2 );\n\n      fields.setKeyIndex( 0 );\n      fields.setValueIndex( 1 );\n\n      ( meta.getFieldname() )[fields.getKeyIndex()] = HadoopEnterMeta.KEY_FIELDNAME;\n      ( meta.getFieldname() )[fields.getValueIndex()] = HadoopEnterMeta.VALUE_FIELDNAME;\n\n      meta.setChanged();\n    }\n\n    int[] type = new int[2];\n    int[] length = new int[2];\n    int[] precision = new int[2];\n\n    // Set Types\n    if ( getInKeyType() >= 0 ) {\n      type[fields.getKeyIndex()] = getInKeyType();\n    }\n\n    if ( getInValueType() >= 0 ) {\n      type[fields.getValueIndex()] = getInValueType();\n    }\n\n    int[] metaType = meta.getType();\n    if ( metaType == null || metaType.length != 2 ) {\n      meta.setChanged();\n    }\n\n    for ( int index = 0; index < type.length; index++ ) {\n      if ( type[index] != metaType[index] ) {\n        meta.setChanged( true );\n      }\n    }\n\n    meta.setType( type );\n\n    // Set Lengths\n    if ( 
getInKeyLength() >= 0 ) {\n      length[fields.getKeyIndex()] = getInKeyLength();\n    }\n\n    if ( getInValueLength() >= 0 ) {\n      length[fields.getValueIndex()] = getInValueLength();\n    }\n\n    int[] metaLength = meta.getLength();\n    if ( metaLength == null || metaLength.length != 2 ) {\n      meta.setChanged();\n    }\n\n    for ( int index = 0; index < length.length; index++ ) {\n      if ( length[index] != metaLength[index] ) {\n        meta.setChanged( true );\n      }\n    }\n\n    meta.setLength( length );\n\n    // Set Precisions\n    if ( getInKeyPrecision() >= 0 ) {\n      precision[fields.getKeyIndex()] = getInKeyPrecision();\n    }\n\n    if ( getInValuePrecision() >= 0 ) {\n      precision[fields.getValueIndex()] = getInValuePrecision();\n    }\n\n    int[] metaPrecision = meta.getPrecision();\n    if ( metaPrecision == null || metaPrecision.length != 2 ) {\n      meta.setChanged();\n    }\n\n    for ( int index = 0; index < precision.length; index++ ) {\n      if ( precision[index] != metaPrecision[index] ) {\n        meta.setChanged( true );\n      }\n    }\n\n    meta.setPrecision( type );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/ui/step/exit/HadoopExitDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.ui.step.exit;\n\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.step.exit.HadoopExit;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.step.exit.HadoopExitMeta;\nimport org.pentaho.di.core.exception.KettleStepException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDialogInterface;\nimport org.pentaho.di.ui.trans.step.BaseStepXulDialog;\nimport org.pentaho.ui.xul.binding.Binding;\nimport org.pentaho.ui.xul.components.XulMenuList;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\n@PluginDialog( id = \"HadoopExitPlugin\", image = \"MRO.svg\", pluginType = PluginDialog.PluginType.STEP,\n  documentationUrl = \"pdi-transformation-steps-reference-overview/mapreduce-output\" )\npublic class HadoopExitDialog extends BaseStepXulDialog implements StepDialogInterface {\n  @SuppressWarnings( \"unused\" )\n  private static final Class<?> PKG = HadoopExit.class;\n\n  private XulMenuList<?> outKeyFieldnames;\n  private XulMenuList<?> outValueFieldnames;\n\n  private HadoopExitMetaMapper metaMapper;\n  private String workingStepname;\n\n  private List<ValueMetaInterface> outKeyFields = new ArrayList<ValueMetaInterface>();\n  private 
List<ValueMetaInterface> outValueFields = new ArrayList<ValueMetaInterface>();\n\n  public HadoopExitDialog( Shell parent, Object in, TransMeta tr, String sname ) throws Throwable {\n    super( \"org/pentaho/big/data/kettle/plugins/mapreduce/ui/step/exit/dialog.xul\", parent, (BaseStepMeta) in, tr, sname );\n    init();\n  }\n\n  public void init() throws Throwable {\n    workingStepname = stepname;\n\n    metaMapper = new HadoopExitMetaMapper();\n    metaMapper.loadMeta( (HadoopExitMeta) baseStepMeta );\n\n    // Get input fields to generate drop down lists\n    RowMetaInterface inputRow = null;\n    try {\n      inputRow = transMeta.getPrevStepFields( stepMeta );\n    } catch ( KettleStepException e ) {\n      // No previous step found, leave list empty\n    }\n\n    // Seed the lists with the previously selected fields: This is done first so the last selection is at the top\n    if ( !StringUtil.isEmpty( metaMapper.getOutKeyFieldname() ) ) {\n      outKeyFields.add( new ValueMeta( metaMapper.getOutKeyFieldname() ) );\n    }\n    if ( !StringUtil.isEmpty( metaMapper.getOutValueFieldname() ) ) {\n      outValueFields.add( new ValueMeta( metaMapper.getOutValueFieldname() ) );\n    }\n\n    if ( inputRow != null ) {\n      for ( ValueMetaInterface field : inputRow.getValueMetaList() ) {\n        // Avoid adding duplicates\n        if ( StringUtil.isEmpty( metaMapper.getOutKeyFieldname() )\n            || !metaMapper.getOutKeyFieldname().equals( field.getName() ) ) {\n          outKeyFields.add( new ValueMeta( field.getName() ) );\n        }\n\n        // Avoid adding duplicates\n        if ( StringUtil.isEmpty( metaMapper.getOutValueFieldname() )\n            || !metaMapper.getOutValueFieldname().equals( field.getName() ) ) {\n          outValueFields.add( new ValueMeta( field.getName() ) );\n        }\n      }\n    }\n\n    // Populate outKey menulist\n    bf.setBindingType( Binding.Type.ONE_WAY );\n\n    bf.createBinding( \"step-name\", \"value\", this, 
\"stepName\" );\n    bf.createBinding( this, \"stepName\", \"step-name\", \"value\" ).fireSourceChanged();\n    bf.createBinding( this, \"outKeyFields\", \"output-key-fieldname\", \"elements\" ).fireSourceChanged();\n    bf.createBinding( this, \"outValueFields\", \"output-value-fieldname\", \"elements\" ).fireSourceChanged();\n\n    outKeyFieldnames = (XulMenuList<?>) getXulDomContainer().getDocumentRoot().getElementById( \"output-key-fieldname\" );\n    outValueFieldnames =\n        (XulMenuList<?>) getXulDomContainer().getDocumentRoot().getElementById( \"output-value-fieldname\" );\n\n    if ( ( outKeyFieldnames != null ) && ( outKeyFieldnames.getElements().size() > 0 ) ) {\n      outKeyFieldnames.setSelectedIndex( 0 );\n    }\n\n    if ( ( outValueFieldnames != null ) && ( outValueFieldnames.getElements().size() > 0 ) ) {\n      outValueFieldnames.setSelectedIndex( 0 );\n    }\n  }\n\n  @Override\n  protected Class<?> getClassForMessages() {\n    return HadoopExit.class;\n  }\n\n  @Override\n  public void onAccept() {\n    metaMapper.setOutKeyFieldname( outKeyFieldnames.getValue() );\n    metaMapper.setOutValueFieldname( outValueFieldnames.getValue() );\n\n    if ( !workingStepname.equals( stepname ) ) {\n      stepname = workingStepname;\n      baseStepMeta.setChanged();\n    }\n\n    metaMapper.saveMeta( (HadoopExitMeta) baseStepMeta );\n    dispose();\n  }\n\n  @Override\n  public void onCancel() {\n    setStepName( null );\n    dispose();\n  }\n\n  public void setStepName( String stepname ) {\n    workingStepname = stepname;\n  }\n\n  public String getStepName() {\n    return workingStepname;\n  }\n\n  public List<ValueMetaInterface> getOutKeyFields() {\n    return outKeyFields;\n  }\n\n  public List<ValueMetaInterface> getOutValueFields() {\n    return outValueFields;\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/java/org/pentaho/big/data/kettle/plugins/mapreduce/ui/step/exit/HadoopExitMetaMapper.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.ui.step.exit;\n\nimport org.pentaho.big.data.kettle.plugins.mapreduce.step.exit.HadoopExitMeta;\nimport org.pentaho.ui.xul.XulEventSourceAdapter;\n\npublic class HadoopExitMetaMapper extends XulEventSourceAdapter {\n  public static String OUT_KEY_FIELDNAME = \"out-key-fieldname\";\n  public static String OUT_VALUE_FIELDNAME = \"out-value-fieldname\";\n\n  protected String outKeyFieldname;\n  protected String outValueFieldname;\n\n  public void setOutKeyFieldname( String arg ) {\n    String previousVal = outKeyFieldname;\n    outKeyFieldname = arg;\n    firePropertyChange( OUT_KEY_FIELDNAME, previousVal, outKeyFieldname );\n  }\n\n  public String getOutKeyFieldname() {\n    return outKeyFieldname;\n  }\n\n  public void setOutValueFieldname( String arg ) {\n    String previousVal = outValueFieldname;\n    outValueFieldname = arg;\n    firePropertyChange( OUT_VALUE_FIELDNAME, previousVal, outValueFieldname );\n  }\n\n  public String getOutValueFieldname() {\n    return outValueFieldname;\n  }\n\n  /**\n   * Load data into the MetaMapper from the HadoopExitMeta\n   * \n   * @param meta\n   */\n  public void loadMeta( HadoopExitMeta meta ) {\n    setOutKeyFieldname( meta.getOutKeyFieldname() );\n    setOutValueFieldname( meta.getOutValueFieldname() );\n  }\n\n  /**\n   * Save data from the MetaMapper into the HadoopExitMeta\n   * \n   * @param meta\n   */\n  public void saveMeta( HadoopExitMeta meta ) {\n    // Set outKey\n    if ( meta.getOutKeyFieldname() == null && getOutKeyFieldname() != null ) {\n    
  meta.setOutKeyFieldname( getOutKeyFieldname() );\n      meta.setChanged();\n    } else if ( meta.getOutKeyFieldname() != null && !meta.getOutKeyFieldname().equals( getOutKeyFieldname() ) ) {\n      meta.setOutKeyFieldname( getOutKeyFieldname() );\n      meta.setChanged();\n    }\n\n    // Set outValue\n    if ( meta.getOutValueFieldname() == null && getOutValueFieldname() != null ) {\n      meta.setOutValueFieldname( getOutValueFieldname() );\n      meta.setChanged();\n    } else if ( meta.getOutValueFieldname() != null && !meta.getOutValueFieldname().equals( getOutValueFieldname() ) ) {\n      meta.setOutValueFieldname( getOutValueFieldname() );\n      meta.setChanged();\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/resources/OSGI-INF/blueprint/blueprint.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"\n           xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n           xmlns:pen=\"http://www.pentaho.com/xml/schemas/pentaho-blueprint\"\n           xsi:schemaLocation=\"\n            http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd\n            http://www.pentaho.com/xml/schemas/pentaho-blueprint http://www.pentaho.com/xml/schemas/pentaho-blueprint.xsd\">\n\n  <bean id=\"jobEntryHadoopJobExecutor\" class=\"org.pentaho.big.data.kettle.plugins.mapreduce.entry.hadoop.JobEntryHadoopJobExecutor\" scope=\"prototype\">\n    <argument ref=\"namedClusterService\"/>\n    <argument ref=\"runtimeTestActionService\"/>\n    <argument ref=\"runtimeTester\"/>\n    <argument ref=\"namedClusterServiceLocator\"/>\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.JobEntryPluginType\"/>\n  </bean>\n\n  <bean id=\"jobEntryHadoopTransJobExecutor\" class=\"org.pentaho.big.data.kettle.plugins.mapreduce.entry.pmr.JobEntryHadoopTransJobExecutor\" scope=\"prototype\">\n    <argument ref=\"namedClusterService\"/>\n    <argument ref=\"runtimeTestActionService\"/>\n    <argument ref=\"runtimeTester\"/>\n    <argument ref=\"namedClusterServiceLocator\"/>\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.JobEntryPluginType\"/>\n  </bean>\n\n  <bean id=\"hadoopEnterMeta\" class=\"org.pentaho.big.data.kettle.plugins.mapreduce.step.enter.HadoopEnterMeta\" scope=\"prototype\">\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.StepPluginType\"/>\n  </bean>\n\n  <bean id=\"hadoopExitMeta\" class=\"org.pentaho.big.data.kettle.plugins.mapreduce.step.exit.HadoopExitMeta\" scope=\"prototype\">\n    <pen:di-plugin type=\"org.pentaho.di.core.plugins.StepPluginType\"/>\n  </bean>\n\n  <reference id=\"namedClusterService\" interface=\"org.pentaho.hadoop.shim.api.cluster.NamedClusterService\"/>\n  <reference 
id=\"runtimeTester\" interface=\"org.pentaho.runtime.test.RuntimeTester\"/>\n  <reference id=\"namedClusterServiceLocator\" interface=\"org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator\"/>\n  <reference id=\"runtimeTestActionService\" interface=\"org.pentaho.runtime.test.action.RuntimeTestActionService\"/>\n</blueprint>"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/resources/org/pentaho/big/data/kettle/plugins/mapreduce/entry/hadoop/messages/messages_en_US.properties",
    "content": "HadoopJobExecutorPlugin.Name=Hadoop job executor \nHadoopJobExecutorPlugin.Description=Execute MapReduce jobs in Hadoop \n\n\nJobEntryDialog.Title=Hadoop job executor\nJobEntry.Name.Label=Entry Name:\n\nJobEntryHadoopJobExecutor.Name.Label=Hadoop Job Name:\nJobEntryHadoopJobExecutor.JarUrl.Label=Jar:\nJobEntryHadoopJobExecutor.JarUrl.Browse=Browse...\nJobEntryHadoopJobExecutor.Driver.Class.Label=Driver Class:\n\nJobEntryHadoopJobExecutor.ModeSimple.Label=Simple\nJobEntryHadoopJobExecutor.ModeAdvanced.Label=Advanced\nJobEntryHadoopJobExecutor.Configuration.Label=Configuration\n\nJobEntryHadoopJobExecutor.ModeSimple.AssumptionsText.Label=Assumptions text here...\nJobEntryHadoopJobExecutor.ModeSimple.CommandLineArguments.Label=Command Line Arguments:\n\nJobEntryHadoopJobExecutor.ModeAdvanced.Blocking.Label=Enable Blocking:\nJobEntryHadoopJobExecutor.ModeAdvanced.Logging.Interval.Label=Logging Interval:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.Label=Job Setup\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.Label=Cluster\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.UserDefined.Label=User Defined\n\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.OutputKeyClass.Label=Output Key Class:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.OutputValueClass.Label=Output Value Class:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.MapperClass.Label=Mapper Class:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.CombinerClass.Label=Combiner Class:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.ReducerClass.Label=Reducer Class:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.InputFormat.Label=Input Format:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.OutputFormat.Label=Output Format:\n\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.WorkingDirectory.Label=Working Directory:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.HDFSHostname.Label=HDFS Hostname:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.HDFSPort.Label=HDFS 
Port:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.JobTrackerHostname.Label=Job Tracker Hostname:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.JobTrackerPort.Label=Job Tracker Port:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.InputPath.Label=Input Path:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.OutputPath.Label=Output Path:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.NamedCluster.Label=Hadoop Cluster:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.NamedCluster.Edit=Edit...\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.NamedCluster.New=New...\n\n\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.UserDefined.NameColumn.Label=Name\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.UserDefined.ValueColumn.Label=Value\n\nJobEntryHadoopJobExecutor.ResolvedJar=Using jar path: {0}\nJobEntryHadoopJobExecutor.RunningPercent=Setup Complete: {0} Mapper Completion: {1} Reducer Completion: {2}\nJobEntryHadoopJobExecutor.TaskDetails=[{0}] -- Task: {1}  Attempt: {2}  Event: {3} {4}\nJobEntryHadoopJobExecutor.FailedToOpenLogFile=Unable to open file appender for file [{0}],{1}\nJobEntryHadoopJobExecutor.SimpleMode=Running Hadoop Job in Simple Mode\nJobEntryHadoopJobExecutor.AdvancedMode=Running Hadoop Job in Advanced Mode\n\nJobEntryHadoopJobExecutor.ErrorExecutingClass=Error executing class {0}.\nJobEntryHadoopJobExecutor.FailedToExecuteClass=Failed to execute class {0} successfully. Exited with status {1}.\nJobEntryHadoopJobExecutor.Blocking=Waiting for execution of {0} to finish...\n\nJobEntryHadoopJobExecutor.ModeAdvanced.NumMapTasks.Label=Number of Mapper Tasks:\nJobEntryHadoopJobExecutor.ModeAdvanced.NumReduceTasks.Label=Number of Reducer Tasks:\n\nJobEntryHadoopJobExecutor.Error.JarDoesNotExist=Specified jar [{0}] does not exist.\n\nJobEntryHadoopJobExecutor.SecurityManagerUpdatedDuringExecution=Security Manager updated during execution of the Hadoop Job Executor. 
Unable to restore previous Security Manager.\n\nJobEntryHadoopJobExecutor.JobEntryName.Error=Job Entry name missing.\nJobEntryHadoopJobExecutor.NamedClusterNotProvided.Error=Hadoop cluster not selected.\nJobEntryHadoopJobExecutor.NamedClusterPropertyMissing.Error=Selected Hadoop cluster is missing required settings.\nJobEntryHadoopJobExecutor.HadoopJobName.Error=Hadoop Job name missing.\n\nDialog.Help=Help\nDialog.Accept=OK\nDialog.Cancel=Cancel\nDialog.Error=Error\nNoSystemExit=JVM will not halt at this time. Runtime.exit() prevented.\nErrorParsingLogInterval=Can't parse logging interval '{0}'. Using {1} seconds.\nJobEntryHadoopJobExecutor.ErrorDriverClassNotSpecified=Driver Class not specified.\nJobEntryHadoopJobExecutor.ErrorMultipleDriverClasses=Multiple Driver Classes found.  Please select one.\nJobEntryHadoopJobExecutor.UsingDriverClass=Using Driver Class {0}."
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/resources/org/pentaho/big/data/kettle/plugins/mapreduce/entry/hadoop/messages/messages_ko_KR.properties",
"content": "\nDialog.Accept=\\uD655\\uC778\nDialog.Cancel=\\uCDE8\\uC18C\nDialog.Error=\\uC624\\uB958\n\nJobEntry.Name.Label=Job \\uC5D4\\uB4DC\\uB9AC \\uC774\\uB984:\n\nJobEntryHadoopJobExecutor.AdvancedMode=Hadoop Job\\uC744 \\uACE0\\uAE09 \\uBAA8\\uB4DC\\uC5D0\\uC11C \\uC2E4\\uD589\nJobEntryHadoopJobExecutor.Configuration.Label=\\uC124\\uC815\nJobEntryHadoopJobExecutor.Error.JarDoesNotExist=\\uC9C0\\uC815\\uD55C jar [{0}] \\uD30C\\uC77C\\uC774 \\uC874\\uC7AC\\uD558\\uC9C0 \\uC54A\\uC2B5\\uB2C8\\uB2E4.\nJobEntryHadoopJobExecutor.JarUrl.Browse=\\uCC3E\\uC544\\uBCF4\\uAE30...\nJobEntryHadoopJobExecutor.JarUrl.Label=Jar:\nJobEntryHadoopJobExecutor.ModeAdvanced.Label=\\uACE0\\uAE09\nJobEntryHadoopJobExecutor.ModeAdvanced.Logging.Interval.Label=\\uB85C\\uAE45 \\uC8FC\\uAE30\nJobEntryHadoopJobExecutor.ModeAdvanced.NumMapTasks.Label=Mapper \\uD0DC\\uC2A4\\uD06C \\uC218:\nJobEntryHadoopJobExecutor.ModeAdvanced.NumReduceTasks.Label=Reducer \\uD0DC\\uC2A4\\uD06C \\uC218:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.HDFSHostname.Label=HDFS \\uD638\\uC2A4\\uD2B8:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.HDFSPort.Label=HDFS \\uD3EC\\uD2B8:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.InputPath.Label=Input \\uACBD\\uB85C:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.JobTrackerHostname.Label=Job Tracker \\uD638\\uC2A4\\uD2B8:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.JobTrackerPort.Label=Job Tracker \\uD3EC\\uD2B8:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.Label=\\uD074\\uB7EC\\uC2A4\\uD130\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.OutputPath.Label=Output \\uACBD\\uB85C:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.WorkingDirectory.Label=\\uC791\\uC5C5 \\uB514\\uB809\\uD1A0\\uB9AC:\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.UserDefined.Label=\\uC0AC\\uC6A9\\uC790 \\uC815\\uC758\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.UserDefined.NameColumn.Label=\\uC774\\uB984\nJobEntryHadoopJobExecutor.ModeAdvanced.Tab.UserDefined.ValueColumn.Label=\\uAC12\nJobEntryHadoopJobExecutor.ModeSimple.CommandLineArguments.Label=\\uBA85\\uB839\\uD589 \\uC778\\uC790:\nJobEntryHadoopJobExecutor.Name.Label=Hadoop Job \\uC774\\uB984:\nJobEntryHadoopJobExecutor.ResolvedJar=jar \\uACBD\\uB85C \\uC0AC\\uC6A9: {0}\nJobEntryHadoopJobExecutor.SimpleMode=Hadoop Job\\uC744 \\uC2EC\\uD50C \\uBAA8\\uB4DC\\uC5D0\\uC11C \\uC2E4\\uD589\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/resources/org/pentaho/big/data/kettle/plugins/mapreduce/entry/pmr/messages/messages_en_US.properties",
    "content": "HadoopTransJobExecutorPlugin.Name=Pentaho MapReduce\nHadoopTransJobExecutorPlugin.Description=Execute Transformation Based MapReduce Jobs in Hadoop \n\n\nJobEntryDialog.Title=Pentaho MapReduce\n\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.MapReduce.Label=MapReduce\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.MapReduceMapper.Label=Mapper\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.MapReduceReducer.Label=Reducer\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.MapReduceCombiner.Label=Combiner\n\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.JobSetup.Label=Job Setup\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Cluster.Label=Cluster\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.UserDefined.Label=User Defined\n\nJobEntry.Name.Label=Entry Name:\nJobEntryHadoopTransJobExecutor.Name.Label=Hadoop job name:\nJobEntryHadoopTransJobExecutor.MapTrans.Label=Transformation:\nJobEntryHadoopTransJobExecutor.CombinerTrans.Label=Transformation:\nJobEntryHadoopTransJobExecutor.ReduceTrans.Label=Transformation:\nJobEntryHadoopTransJobExecutor.MapTrans.Browse=Browse...\nJobEntryHadoopTransJobExecutor.CombinerTrans.Browse=Browse...\nJobEntryHadoopTransJobExecutor.ReduceTrans.Browse=Browse...\nJobEntryHadoopTransJobExecutor.MapInputStepName.Label=Input step name:\nJobEntryHadoopTransJobExecutor.MapOutputStepName.Label=Output step name:\nJobEntryHadoopTransJobExecutor.CombinerInputStepName.Label=Input step name:\nJobEntryHadoopTransJobExecutor.CombinerOutputStepName.Label=Output step name:\nJobEntryHadoopTransJobExecutor.ReduceInputStepName.Label=Input step name:\nJobEntryHadoopTransJobExecutor.ReduceOutputStepName.Label=Output step name:\n\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Blocking.Label=Enable blocking\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Logging.Interval.Label=Logging interval:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.SuppressMapOutputKey.Label=Ignore output of map 
key\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.SuppressMapOutputValue.Label=Ignore output of map value\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.SuppressOutputKey.Label=Ignore output of reduce key\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.SuppressOutputValue.Label=Ignore output of reduce value\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Classes.MapOutputKeyClass.Label=Map Output Key Class:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Classes.MapOutputValueClass.Label=Map Output Value Class:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Classes.OutputKeyClass.Label=Output Key Class:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Classes.OutputValueClass.Label=Output Value Class:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Classes.InputFormat.Label=Input format:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Classes.OutputFormat.Label=Output format:\n\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.WorkingDirectory.Label=Working Directory:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.HDFSHostname.Label=HDFS Hostname:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.HDFSPort.Label=HDFS Port:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.JobTrackerHostname.Label=Job Tracker Hostname:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.JobTrackerPort.Label=Job Tracker Port:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.InputPath.Label=Input path:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.OutputPath.Label=Output path:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.NamedCluster.Label=Hadoop cluster:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.NamedCluster.Edit=Edit...\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.NamedCluster.New=New...\n\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.CleanOutputPath.Label=Remove output path before 
job\n\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.UserDefined.NameColumn.Label=Name\nJobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.UserDefined.ValueColumn.Label=Value\n\nJobEntryHadoopTransJobExecutor.ResolvedJar=Using jar path: {0}\nJobEntryHadoopTransJobExecutor.RunningPercent=Setup Complete: {0} Mapper Completion: {1} Reducer Completion: {2}\nJobEntryHadoopTransJobExecutor.SimpleMode=Running Hadoop Job in Simple Mode\nJobEntryHadoopTransJobExecutor.AdvancedMode=Running Hadoop Job in Advanced Mode\n\nJobEntryHadoopTransJobExecutor.ModeAdvanced.NumMapTasks.Label=Number of mapper tasks:\nJobEntryHadoopTransJobExecutor.ModeAdvanced.NumReduceTasks.Label=Number of reducer tasks:\n\nJobEntryHadoopTransJobExecutor.FailedToOpenLogFile=Unable to open file appender for file [{0}],{1}\nJobEntryHadoopTransJobExecutor.TaskDetails=[{0}] -- Task: {1}  Attempt: {2}  Event: {3} {4}\n\nJobEntryHadoopTransJobExecutor.GroupBox.MapLabel=Map\nJobEntryHadoopTransJobExecutor.GroupBox.CombinerLabel=Combiner\nJobEntryHadoopTransJobExecutor.GroupBox.ReduceLabel=Reduce\n\nJobEntryHadoopTransJobExecutor.StorageType.Label=Look in:\nJobEntryHadoopTransJobExecutor.StorageType.Local=Local\nJobEntryHadoopTransJobExecutor.StorageType.Repository.Location=Repository by name\nJobEntryHadoopTransJobExecutor.StorageType.Repository.Reference=Repository by reference\n\nJobEntryHadoopTransJobExecutor.JobEntryName.Error=Job Entry name missing.\nJobEntryHadoopTransJobExecutor.NamedClusterNotProvided.Error=Hadoop cluster not selected.\nJobEntryHadoopTransJobExecutor.NamedClusterPropertyMissing.Error=Selected Hadoop cluster is missing required settings.\nJobEntryHadoopTransJobExecutor.HadoopJobName.Error=Hadoop Job name missing.\nJobEntryHadoopTransJobExecutor.NumReduceTasks.Error=Number of reducer tasks must be 0 or greater.\nJobEntryHadoopTransJobExecutor.NumMapTasks.Error=Number of map tasks must be 0 or greater.\nJobEntryHadoopTransJobExecutor.NoMapOutputKeyDefined.Error=No output key field defined 
for the mapper transformation\nJobEntryHadoopTransJobExecutor.NoMapOutputValueDefined.Error=No output value field defined for the mapper transformation\nJobEntryHadoopTransJobExecutor.NoOutputKeyDefined.Error=No output key field defined for the reducer transformation\nJobEntryHadoopTransJobExecutor.NoOutputValueDefined.Error=No output value field defined for the reducer transformation\nJobEntryHadoopTransJobExecutor.MapConfiguration.Error=Error in mapper configuration\nJobEntryHadoopTransJobExecutor.CombinerConfiguration.Error=Error in combiner configuration\nJobEntryHadoopTransJobExecutor.ReducerConfiguration.Error=Error in reducer configuration\n\nJobEntryHadoopTransJobExecutor.Message.DistroConfigMessage=Configuring for Hadoop distribution: {0}\nJobEntryHadoopTransJobExecutor.Message.MapOutputKeyMessage=Using {0} for the map output key\nJobEntryHadoopTransJobExecutor.Message.MapOutputValueMessage=Using {0} for the map output value\nJobEntryHadoopTransJobExecutor.Message.OutputKeyMessage=Using {0} for the output key\nJobEntryHadoopTransJobExecutor.Message.OutputValueMessage=Using {0} for the output value\n\n\nDialog.Accept=OK\nDialog.Cancel=Cancel\nDialog.Error=Error\nDialog.Help=Help\nHelpImage.Url=help_web.png\n\nJobEntryHadoopTransJobExecutor.ReduceSingleThreaded.Label=Use single threaded transformation engine\nJobEntryHadoopTransJobExecutor.CombinerSingleThreaded.Label=Use single threaded transformation engine\n\n\nJobEntryHadoopTransJobExecutor.ReferencedObject.Mapper=Mapper\nJobEntryHadoopTransJobExecutor.ReferencedObject.Combiner=Combiner\nJobEntryHadoopTransJobExecutor.ReferencedObject.Reducer=Reducer\n\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/resources/org/pentaho/big/data/kettle/plugins/mapreduce/step/enter/messages/messages_en_US.properties",
    "content": "HadoopEnterPlugin.Name=MapReduce input\nHadoopEnterPlugin.Description=Enter a Hadoop Mapper or Reducer transformation\n\nStepConfigruationDialog.Title=MapReduce input\nStep.Name.Label=Step name\nHadoopEnter.InKey.Label=Key field\nHadoopEnter.InValue.Label=Value field\nHadoopEnter.Type.Label=Type\nHadoopEnter.Length.Label=Length\nHadoopEnter.Precision.Label=Precision\nDialog.Accept=OK\nDialog.Cancel=Cancel\nDialog.Help=Help\n\nHadoopEnter.Error.ParseInteger=The text {0} could not be parsed as an integer\n\nHadoopEnterPlugin.Injection.KEY_TYPE=The data type of the key field.\nHadoopEnterPlugin.Injection.KEY_LENGTH=The length of the key field.\nHadoopEnterPlugin.Injection.KEY_PRECISION=Specify how many digits after a decimal will be used for the key field.\nHadoopEnterPlugin.Injection.VALUE_TYPE=The data type of the value field.\nHadoopEnterPlugin.Injection.VALUE_LENGTH=The length of the value field.\nHadoopEnterPlugin.Injection.VALUE_PRECISION=Specify how many digits after a decimal will be used for the value field.\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/resources/org/pentaho/big/data/kettle/plugins/mapreduce/step/exit/messages/messages_en_US.properties",
    "content": "HadoopExitPlugin.Name=MapReduce output\nHadoopExitPlugin.Description=Exit a Hadoop Mapper or Reducer transformation \n\nStepConfigruationDialog.Title=MapReduce output\nStep.Name.Label=Step name\nDialog.Accept=OK\nDialog.Cancel=Cancel\nDialog.Help=Help\nHadoopExit.OutKey.Label=Key field\nHadoopExit.OutValue.Label=Value field\nHadoopExit.Linenr=Linenr {0}\n\nError.InvalidKeyField=Key field does not exist on input stream: \\\"{0}\\\".\nError.InvalidValueField=Value field does not exist on input stream: \\\"{0}\\\".\n\nHadoopExitPlugin.Injection.KEY_FIELD=The name of the key field.\nHadoopExitPlugin.Injection.VALUE_FIELD=The name of the value field.\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/resources/org/pentaho/big/data/kettle/plugins/mapreduce/ui/entry/JobEntryHadoopJobExecutorDialog.xul",
    "content": "<?xml version=\"1.0\"?>\n<?xml-stylesheet href=\"chrome://global/skin/\" type=\"text/css\"?>\n\n<window id=\"hadoop-window-wrapper\">\n\n<dialog id=\"job-entry-dialog\"\n\txmlns=\"http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul\"\n\txmlns:pen=\"http://www.pentaho.org/2008/xul\"\n\ttitle=\"${JobEntryDialog.Title}\"\n\tresizable=\"true\"\n\theight=\"600\" width=\"650\"\n\tappicon=\"ui/images/spoon.ico\"\n\tbuttons=\"\">\n\n\t<vbox>\n\t\t<grid>\n\t\t\t<columns>\n\t\t\t\t<column/>\n\t\t\t\t<column flex=\"1\"/>\n\t\t\t</columns>\n\t\t\t<rows>\n\t\t\t\t<row>\n\t\t\t\t\t<label value=\"${JobEntry.Name.Label}\"/>\n\t\t\t\t\t<textbox id=\"jobentry-name\" flex=\"1\" multiline=\"false\"/>\n\t\t\t\t</row>\n\t\t\t\t<row>\n\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.Name.Label}\" />\n\t\t\t\t\t<textbox pen:customclass=\"variabletextbox\" id=\"jobentry-hadoopjob-name\" flex=\"1\" multiline=\"false\"/>\n\t\t\t\t</row>\n\t\t\t\t<row>\n\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.NamedCluster.Label}\" width=\"200\"/>\n\t\t\t\t\t<grid>\n\t\t\t\t\t\t<columns>\n\t\t\t\t\t\t\t<column/>\n\t\t\t\t\t\t\t<column/>\n\t\t\t\t\t\t\t<column/>\n\t\t\t\t\t\t</columns>\n\t\t\t\t\t\t<rows>\n\t\t\t\t\t\t\t<row>\n\t\t\t\t\t\t\t\t<menulist id=\"named-clusters\" pen:binding=\"name\">\n\t\t\t\t\t\t\t\t\t<menupopup>\n\t\t\t\t\t\t\t\t\t</menupopup>\n\t\t\t\t\t\t\t\t</menulist>\n\t\t\t\t\t\t\t\t<button id=\"editNamedCluster\" label=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.NamedCluster.Edit}\" onclick=\"jobEntryController.editNamedCluster()\"/>\n\t\t\t\t\t\t\t\t<button id=\"newNamedCluster\" label=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.NamedCluster.New}\" onclick=\"jobEntryController.newNamedCluster()\"/>\n\t\t\t\t\t\t\t</row>\n\t\t\t\t\t\t</rows>\n\t\t\t\t\t</grid>\n\t\t\t\t</row>\n\t\t\t\t<row>\n\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.JarUrl.Label}\" />\n\t\t\t\t\t<hbox>\n\t\t\t\t\t\t<textbox 
pen:customclass=\"variabletextbox\" id=\"jar-url\" flex=\"1\" width=\"400\" multiline=\"false\" />\n\t\t\t\t\t\t<button id=\"browseJarUrl\" label=\"${JobEntryHadoopJobExecutor.JarUrl.Browse}\" onclick=\"jobEntryController.browseJar()\"/>\n\t\t\t\t\t</hbox>\n\t\t\t\t</row>\n                <row>\n                    <label value=\"${JobEntryHadoopJobExecutor.Driver.Class.Label}\" />\n                    <menulist pen:customclass=\"variablemenulist\" id=\"driver-class\" editable=\"true\" flex=\"1\">\n                        <menupopup>\n                        </menupopup>\n                    </menulist>\n                </row>\n\t\t\t</rows>\n\t\t</grid>\n\n\t\t<groupbox>\n\t\t\t<caption label=\"${JobEntryHadoopJobExecutor.Configuration.Label}\"/>\n\t\t\t<hbox>\n\t\t\t  <radio id=\"simpleRadioButton\" label=\"${JobEntryHadoopJobExecutor.ModeSimple.Label}\" command=\"jobEntryController.setSimple(true)\" selected=\"true\" />\n\t\t\t  <radio id=\"advancedRadioButton\" label=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Label}\" command=\"jobEntryController.setSimple(false)\" />\n\t\t\t</hbox>\n\t\t\t<vbox id=\"content-pane\" />\n\t\t</groupbox>\n\t</vbox>\n\n\t<vbox id=\"simple-configuration\" flex=\"1\" hidden=\"true\">\n\t <grid>\n\t   <columns>\n\t     <column />\n\t     <column flex=\"1\" />\n     </columns>\n\t   <rows>\n\t     <row>\n        <label value=\"${JobEntryHadoopJobExecutor.ModeSimple.CommandLineArguments.Label}\"/>\n        <textbox pen:customclass=\"variabletextbox\" id=\"command-line-arguments\" flex=\"1\" multiline=\"false\"/>\n\t     </row>\n\t     <row>\n        <label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Blocking.Label}\" />\n        <checkbox id=\"simple-blocking\" flex=\"1\" command=\"jobEntryController.invertSimpleBlocking()\"/>\n\t     </row>\n\t     <row>\n        <label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Logging.Interval.Label}\" />\n        <textbox pen:customclass=\"variabletextbox\" id=\"simple-logging-interval\" 
width=\"80\" flex=\"1\" multiline=\"false\"/>\n\t     </row>\n     </rows>\n   </grid>\n\t</vbox>\n\n\t<vbox id=\"advanced-configuration\" flex=\"1\" hidden=\"true\">\n\t\t<tabbox flex=\"1\">\n\t\t\t<tabs>\n\t\t\t\t<tab label=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.Label}\"/>\n\t\t\t\t<tab label=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.Label}\"/>\n\t\t\t\t<tab label=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.UserDefined.Label}\"/>\n\t\t\t</tabs>\n\t\t\t<tabpanels>\n    \t\t\t<tabpanel style=\"overflow: auto\">\n    \t\t\t\t<grid>\n    \t\t\t\t\t<columns>\n    \t\t\t\t\t\t<column />\n    \t\t\t\t\t\t<column flex=\"1\"/>\n    \t\t\t\t\t</columns>\n    \t\t\t\t\t<rows>\n    \t\t\t\t\t\t<row>\n    \t\t\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.OutputKeyClass.Label}\" />\n    \t\t\t\t\t\t\t<textbox pen:customclass=\"variabletextbox\" id=\"classes-output-key-class\" flex=\"1\" multiline=\"false\"/>\n    \t\t\t\t\t\t</row>\n    \t\t\t\t\t\t<row>\n    \t\t\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.OutputValueClass.Label}\" />\n    \t\t\t\t\t\t\t<textbox pen:customclass=\"variabletextbox\" id=\"classes-output-value-class\" flex=\"1\" multiline=\"false\"/>\n    \t\t\t\t\t\t</row>\n    \t\t\t\t\t\t<row>\n    \t\t\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.MapperClass.Label}\" />\n    \t\t\t\t\t\t\t<textbox pen:customclass=\"variabletextbox\" id=\"classes-mapper-class\" flex=\"1\" multiline=\"false\"/>\n    \t\t\t\t\t\t</row>\n    \t\t\t\t\t\t<row>\n    \t\t\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.CombinerClass.Label}\" />\n    \t\t\t\t\t\t\t<textbox pen:customclass=\"variabletextbox\" id=\"classes-combiner-class\" flex=\"1\" multiline=\"false\"/>\n    \t\t\t\t\t\t</row>\n    \t\t\t\t\t\t<row>\n    \t\t\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.ReducerClass.Label}\" />\n    
\t\t\t\t\t\t\t<textbox pen:customclass=\"variabletextbox\" id=\"classes-reducer-class\" flex=\"1\" multiline=\"false\"/>\n    \t\t\t\t\t\t</row>\n    \t\t\t\t\t\t<row>\n    \t\t\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.InputPath.Label}\" />\n    \t\t\t\t\t\t\t<textbox pen:customclass=\"variabletextbox\" id=\"input-path\" flex=\"1\" multiline=\"false\"/>\n    \t\t\t\t\t\t</row>\n    \t\t\t\t\t\t<row>\n    \t\t\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.Paths.OutputPath.Label}\" />\n    \t\t\t\t\t\t\t<textbox pen:customclass=\"variabletextbox\" id=\"output-path\" flex=\"1\" multiline=\"false\"/>\n    \t\t\t\t\t\t</row>\n    \t\t\t\t\t\t<row>\n    \t\t\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.InputFormat.Label}\" />\n    \t\t\t\t\t\t\t<textbox pen:customclass=\"variabletextbox\" id=\"classes-input-format\" flex=\"1\" multiline=\"false\"/>\n    \t\t\t\t\t\t</row>\n    \t\t\t\t\t\t<row>\n    \t\t\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.Classes.OutputFormat.Label}\" />\n    \t\t\t\t\t\t\t<textbox pen:customclass=\"variabletextbox\" id=\"classes-output-format\" flex=\"1\" multiline=\"false\"/>\n    \t\t\t\t\t\t</row>\n    \t\t\t\t\t</rows>\n    \t\t\t\t</grid>\n    \t\t\t</tabpanel>\n    \t\t\t<tabpanel>\n    \t\t\t\t<grid>\n    \t\t\t\t\t<columns>\n    \t\t\t\t\t\t<column />\n    \t\t\t\t\t\t<column flex=\"1\"/>\n    \t\t\t\t\t</columns>\n    \t\t\t\t\t<rows>\n    \t\t\t\t\t\t<row>\n    \t\t\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.NumMapTasks.Label}\" />\n    \t\t\t\t\t\t\t<textbox pen:customclass=\"variabletextbox\" id=\"num-map-tasks\" flex=\"1\" multiline=\"false\"/>\n    \t\t\t\t\t\t</row>\n    \t\t\t\t\t\t<row>\n    \t\t\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.NumReduceTasks.Label}\" />\n    \t\t\t\t\t\t\t<textbox pen:customclass=\"variabletextbox\" id=\"num-reduce-tasks\" flex=\"1\" 
multiline=\"false\"/>\n    \t\t\t\t\t\t</row>\n    \t\t\t\t\t\t<row>\n    \t\t\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Blocking.Label}\" />\n    \t\t\t\t\t\t\t<checkbox id=\"blocking\" flex=\"1\" command=\"jobEntryController.invertBlocking()\"/>\n    \t\t\t\t\t\t</row>\n    \t\t\t\t\t\t<row>\n    \t\t\t\t\t\t\t<label value=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Logging.Interval.Label}\" />\n    \t\t\t\t\t\t\t<textbox pen:customclass=\"variabletextbox\" id=\"logging-interval\" width=\"80\" flex=\"1\" multiline=\"false\"/>\n    \t\t\t\t\t\t</row>\n    \t\t\t\t\t</rows>\n    \t\t\t\t</grid>\n    \t\t\t</tabpanel>\n    \t\t\t<tabpanel>\n    \t\t\t\t<tree id=\"fields-table\" flex=\"1\" hidecolumnpicker=\"true\" autocreatenewrows=\"true\" newitembinding=\"jobEntryController.newUserDefinedItem()\">\n\t\t\t\t\t\t<treecols>\n\t\t\t\t\t\t\t<treecol id=\"name-col\" editable=\"true\" flex=\"1\" label=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.UserDefined.NameColumn.Label}\" pen:binding=\"name\"/>\n\t\t\t\t\t\t\t<treecol id=\"value-col\" editable=\"true\" flex=\"1\" label=\"${JobEntryHadoopJobExecutor.ModeAdvanced.Tab.UserDefined.ValueColumn.Label}\" pen:binding=\"value\"/>\n\t\t\t\t\t\t</treecols>\n\t\t\t\t\t\t<treechildren />\n\t\t\t\t\t</tree>\n    \t\t\t</tabpanel>\n\t\t\t</tabpanels>\n\t\t</tabbox>\n\t</vbox>\n\n\t<vbox height=\"7\"></vbox>\n\n\t<hbox>\n\t\t<hbox width=\"9\"></hbox>\n\t\t<separator padding=\"0\" flex=\"1\" orient=\"HORIZONTAL\"/>\n\t\t<hbox width=\"9\"></hbox>\n\t</hbox>\n\n\t<vbox height=\"6\"></vbox>\n\n\t<hbox padding=\"0\">\n\t\t<hbox width=\"11\"></hbox>\n\t\t<button label=\"${Dialog.Help}\" image=\"help_web.png\" onclick=\"jobEntryController.help()\"/>\n\t\t<spacer flex=\"1\"/>\n\t\t<button label=\"${Dialog.Accept}\" width=\"75\" onclick=\"jobEntryController.accept()\"/>\n\t\t<hbox width=\"1\"></hbox>\n\t\t<button label=\"${Dialog.Cancel}\" width=\"75\" onclick=\"jobEntryController.cancel()\"/>\n\t\t<hbox 
width=\"11\"></hbox>\n\t</hbox>\n\n\t<vbox padding=\"0\" height=\"11\"></vbox>\n\n</dialog>\n\n  <!--  ###############################################################################   -->\n  <!--     ERROR DIALOG: Dialog to display error text                                     -->\n  <!--  ###############################################################################   -->   \n  <dialog id=\"hadoop-error-dialog\" title=\"${Dialog.Error}\" buttonlabelaccept=\"${Dialog.Accept}\" buttons=\"accept\" ondialogaccept=\"jobEntryController.closeErrorDialog()\" width=\"600\" height=\"300\" buttonalign=\"center\">\n        <textbox id=\"hadoop-error-message\" value=\"${errorDialog.errorOccurred}\" multiline=\"true\" readonly=\"true\" flex=\"1\" />\n  </dialog>\n  \n</window>"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/resources/org/pentaho/big/data/kettle/plugins/mapreduce/ui/entry/JobEntryHadoopTransJobExecutorDialog.xul",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<?xml-stylesheet href=\"chrome://global/skin/\" type=\"text/css\"?>\n<window id=\"hadoop-window-wrapper\" xmlns=\"http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul\" xmlns:pen=\"http://www.pentaho.org/2008/xul\">\n   <dialog id=\"job-entry-dialog\" title=\"${JobEntryDialog.Title}\" resizable=\"true\" buttons=\"\" padding=\"15\"\n           ondialogaccept=\"jobEntryController.accept()\" ondialogcancel=\"jobEntryController.cancel()\" width=\"625\"\n           appicon=\"ui/images/spoon.ico\">\n      <vbox>\n         <grid>\n            <columns>\n               <column />\n               <column flex=\"1\" />\n               <column />\n            </columns>\n            <rows>\n               <row>\n                  <vbox flex=\"1\">\n                     <label value=\"${JobEntry.Name.Label}\" />\n                     <textbox id=\"jobentry-name\" width=\"265\" flex=\"1\" multiline=\"false\" />\n                  </vbox>\n                  <spacer flex=\"2\" />\n                  <image src=\"HDT.png\" width=\"32\" height=\"32\" />\n               </row>\n            </rows>\n         </grid>\n         <separator class=\"groove-thin\" height=\"20\" flex=\"1\" />\n      </vbox>\n      <vbox id=\"advanced-configuration\" flex=\"5\" hidden=\"false\">\n         <tabbox flex=\"2\">\n            <tabs>\n               <tab label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.MapReduceMapper.Label}\" />\n               <tab label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.MapReduceCombiner.Label}\" />\n               <tab label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.MapReduceReducer.Label}\" />\n               <tab label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.JobSetup.Label}\" />\n               <tab label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Cluster.Label}\" />\n               <tab 
label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.UserDefined.Label}\" />\n            </tabs>\n            <tabpanels>\n               <!-- The Mapper tab -->\n               <tabpanel padding=\"8\">\n                  <vbox flex=\"1\">\n                     <grid>\n                        <columns>\n                           <column />\n                           <column flex=\"1\" />\n                        </columns>\n                        <rows>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.MapTrans.Label}\" />\n                              <spacer />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" id=\"jobentry-map-transformation\" width=\"365\" flex=\"1\" multiline=\"false\" />\n                              <button id=\"browse\" label=\"${JobEntryHadoopTransJobExecutor.MapTrans.Browse}\" onclick=\"jobEntryController.mapTransBrowse()\" />\n                           </row>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.MapInputStepName.Label}\" />\n                              <spacer />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" id=\"jobentry-map-input-stepname\" flex=\"1\" width=\"265\" multiline=\"false\" />\n                              <spacer />\n                           </row>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.MapOutputStepName.Label}\" />\n                              <spacer />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" id=\"jobentry-map-output-stepname\" flex=\"1\" width=\"265\" multiline=\"false\" />\n                          
    <spacer />\n                           </row>\n                        </rows>\n                     </grid>\n                  </vbox>\n               </tabpanel>\n               <!-- The Combiner tab -->\n               <tabpanel padding=\"8\">\n                  <vbox flex=\"1\">\n                     <grid>\n                        <columns>\n                           <column />\n                           <column flex=\"1\" />\n                        </columns>\n                        <rows>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.CombinerTrans.Label}\" />\n                              <spacer />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" id=\"jobentry-combiner-transformation\" flex=\"1\" width=\"365\" multiline=\"false\" />\n                              <button id=\"browse\" label=\"${JobEntryHadoopTransJobExecutor.CombinerTrans.Browse}\" onclick=\"jobEntryController.combinerTransBrowse()\" />\n                           </row>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.CombinerInputStepName.Label}\" />\n                              <spacer />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" id=\"jobentry-combiner-input-stepname\" flex=\"1\" width=\"265\" multiline=\"false\" />\n                              <spacer />\n                           </row>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.CombinerOutputStepName.Label}\" />\n                              <spacer />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" 
id=\"jobentry-combiner-output-stepname\" flex=\"1\" width=\"265\" multiline=\"false\" />\n                              <spacer />\n                           </row>\n                           <row>\n                              <hbox>\n                                 <checkbox id=\"jobentry-combiner-single-threaded\" flex=\"1\" command=\"jobEntryController.invertCombiningSingleThreaded()\" label=\"${JobEntryHadoopTransJobExecutor.CombinerSingleThreaded.Label}\" />\n                              </hbox>\n                              <spacer />\n                           </row>\n                        </rows>\n                     </grid>\n                  </vbox>\n               </tabpanel>\n               <!-- The Reducer tab -->\n               <tabpanel padding=\"8\">\n                  <vbox flex=\"1\">\n                     <grid>\n                        <columns>\n                           <column />\n                           <column flex=\"1\" />\n                        </columns>\n                        <rows>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.ReduceTrans.Label}\" />\n                              <spacer />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" id=\"jobentry-reduce-transformation\" flex=\"1\" width=\"365\" multiline=\"false\" />\n                              <button id=\"browse\" label=\"${JobEntryHadoopTransJobExecutor.ReduceTrans.Browse}\" onclick=\"jobEntryController.reduceTransBrowse()\" />\n                           </row>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.ReduceInputStepName.Label}\" />\n                              <spacer />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" 
id=\"jobentry-reduce-input-stepname\" flex=\"1\" width=\"265\" multiline=\"false\" />\n                              <spacer />\n                           </row>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.ReduceOutputStepName.Label}\" />\n                              <spacer />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" id=\"jobentry-reduce-output-stepname\" flex=\"1\" width=\"265\" multiline=\"false\" />\n                              <spacer />\n                           </row>\n                           <row>\n                              <hbox>\n                                 <checkbox id=\"jobentry-reduce-single-threaded\" flex=\"1\" command=\"jobEntryController.invertReducingSingleThreaded()\" label=\"${JobEntryHadoopTransJobExecutor.ReduceSingleThreaded.Label}\" />\n                              </hbox>\n                              <spacer />\n                           </row>\n                        </rows>\n                     </grid>\n                  </vbox>\n               </tabpanel>\n               <!-- Job Setup Tab -->\n               <tabpanel padding=\"8\">\n                  <vbox flex=\"1\">\n                     <grid>\n                        <columns>\n                           <column />\n                        </columns>\n                        <rows>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.InputPath.Label}\" />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" id=\"input-path\" width=\"365\" flex=\"1\" multiline=\"false\" />\n                           </row>\n                           <row>\n                              <label 
value=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.OutputPath.Label}\" />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" id=\"output-path\" width=\"365\" flex=\"1\" multiline=\"false\" />\n                           </row>\n                           <row>\n                              <hbox>\n                                 <checkbox id=\"clean-output-path\" flex=\"1\" command=\"jobEntryController.invertCleanOutputPath()\" label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.CleanOutputPath.Label}\" />\n                              </hbox>\n                           </row>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Classes.InputFormat.Label}\" />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" id=\"classes-input-format\" width=\"365\" flex=\"1\" multiline=\"false\" />\n                           </row>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Classes.OutputFormat.Label}\" />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" id=\"classes-output-format\" width=\"365\" flex=\"1\" multiline=\"false\" />\n                           </row>\n                           <row>\n                              <hbox>\n                                 <checkbox id=\"classes-suppress-output-map-key\" flex=\"1\" command=\"jobEntryController.invertSuppressOutputOfMapKey()\" label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.SuppressMapOutputKey.Label}\" />\n                              </hbox>\n                           </row>\n                           <row>\n                           
   <hbox>\n                                 <checkbox id=\"classes-suppress-output-map-value\" flex=\"1\" command=\"jobEntryController.invertSuppressOutputOfMapValue()\" label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.SuppressMapOutputValue.Label}\" />\n                              </hbox>\n                           </row>\n                           <row>\n                              <hbox>\n                                 <checkbox id=\"classes-suppress-output-key\" flex=\"1\" command=\"jobEntryController.invertSuppressOutputOfKey()\" label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.SuppressOutputKey.Label}\" />\n                              </hbox>\n                           </row>\n                           <row>\n                              <hbox>\n                                 <checkbox id=\"classes-suppress-output-value\" flex=\"1\" command=\"jobEntryController.invertSuppressOutputOfValue()\" label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.SuppressOutputValue.Label}\" />\n                              </hbox>\n                           </row>\n                        </rows>\n                     </grid>\n                  </vbox>\n               </tabpanel>\n               <!-- Cluster Tab -->\n               <tabpanel padding=\"8\">\n                  <vbox flex=\"1\">\n                     <grid>\n                        <columns>\n                           <column />\n                        </columns>\n                        <rows>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.Name.Label}\" />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" id=\"jobentry-hadoopjob-name\" width=\"365\" flex=\"1\" multiline=\"false\" />\n                           </row>\n                           <row>\n                              <label 
value=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.NamedCluster.Label}\" width=\"200\" />\n                           </row>\n                           <row>\n                              <hbox>\n                                 <menulist id=\"named-clusters\" pen:binding=\"name\" width=\"200\">\n                                    <menupopup />\n                                 </menulist>\n                                 <button id=\"editNamedCluster\" label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.NamedCluster.Edit}\" onclick=\"jobEntryController.editNamedCluster()\" />\n                                 <button id=\"newNamedCluster\" label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.Paths.NamedCluster.New}\" onclick=\"jobEntryController.newNamedCluster()\" />\n                              </hbox>\n                           </row>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.NumMapTasks.Label}\" />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" id=\"num-map-tasks\" flex=\"1\" width=\"140\" multiline=\"false\" />\n                           </row>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.NumReduceTasks.Label}\" />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" id=\"num-reduce-tasks\" flex=\"1\" width=\"140\" multiline=\"false\" />\n                           </row>\n                           <row>\n                              <label value=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Logging.Interval.Label}\" />\n                           </row>\n                           <row>\n                              <textbox pen:customclass=\"variabletextbox\" 
id=\"logging-interval\" width=\"140\" flex=\"1\" multiline=\"false\" />\n                           </row>\n                           <row>\n                              <hbox>\n                                 <checkbox id=\"blocking\" flex=\"1\" command=\"jobEntryController.invertBlocking()\" label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Blocking.Label}\" />\n                              </hbox>\n                           </row>\n                           <row />\n                        </rows>\n                     </grid>\n                  </vbox>\n               </tabpanel>\n               <tabpanel padding=\"12\">\n                  <tree id=\"fields-table\" flex=\"1\" hidecolumnpicker=\"true\" autocreatenewrows=\"true\" newitembinding=\"jobEntryController.newUserDefinedItem()\">\n                     <treecols>\n                        <treecol id=\"name-col\" editable=\"true\" width=\"270\" label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.UserDefined.NameColumn.Label}\" pen:binding=\"name\" />\n                        <treecol id=\"value-col\" editable=\"true\" width=\"270\" label=\"${JobEntryHadoopTransJobExecutor.ModeAdvanced.Tab.UserDefined.ValueColumn.Label}\" pen:binding=\"value\" />\n                     </treecols>\n                     <treechildren />\n                  </tree>\n               </tabpanel>\n            </tabpanels>\n         </tabbox>\n      </vbox>\n      <separator class=\"groove-thin\" height=\"25\" flex=\"1\" />\n      <hbox>\n         <button label=\"${Dialog.Help}\" image=\"${HelpImage.Url}\" onclick=\"jobEntryController.help()\" />\n         <spacer flex=\"1\" />\n         <button label=\"${Dialog.Accept}\" onclick=\"jobEntryController.accept()\" width=\"80\" />\n         <button label=\"${Dialog.Cancel}\" onclick=\"jobEntryController.cancel()\" width=\"80\" />\n      </hbox>\n   </dialog>\n   <!--  ###############################################################################   -->\n   <!--     ERROR 
DIALOG: Dialog to display error text                                     -->\n   <!--  ###############################################################################   -->\n   <dialog id=\"hadoop-error-dialog\" title=\"${Dialog.Error}\" buttonlabelaccept=\"${Dialog.Accept}\" buttons=\"accept\" ondialogaccept=\"jobEntryController.closeErrorDialog()\" width=\"600\" height=\"300\" buttonalign=\"center\">\n      <textbox id=\"hadoop-error-message\" value=\"${errorDialog.errorOccurred}\" multiline=\"true\" readonly=\"true\" flex=\"1\" />\n   </dialog>\n</window>\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/resources/org/pentaho/big/data/kettle/plugins/mapreduce/ui/step/enter/dialog.xul",
    "content": "<?xml version=\"1.0\"?>\n<?xml-stylesheet href=\"chrome://global/skin/\" type=\"text/css\"?>\n<dialog id=\"hadoop-enter-dialog\"\n\tpack=\"true\"\n    xmlns=\"http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul\"\n    xmlns:pen=\"http://www.pentaho.org/2008/xul\" \n\ttitle=\"${StepConfigruationDialog.Title}\"\n\tresizable=\"true\"\n    height=\"250\" width=\"600\"\n\tappicon=\"ui/images/spoon.ico\"\n\tbuttons=\"extra1,accept,cancel\"\n\tbuttonalign=\"end\" \n\tbuttonlabelextra1=\"${Dialog.Help}\"\n\tbuttonlabelaccept=\"${Dialog.Accept}\"\n\tbuttonlabelcancel=\"${Dialog.Cancel}\"\n\tondialogextra1=\"handler.onHelp()\" \n\tondialogaccept=\"handler.onAccept()\"\n\tondialogcancel=\"handler.onCancel()\">\n\n\t<vbox>\n\t\t<hbox>\n\t\t\t<label value=\"${Step.Name.Label}\"/>\n\t\t\t<textbox id=\"step-name\" flex=\"1\" multiline=\"false\"/>\n\t\t</hbox>\n\t\t<grid>\n\t\t\t<columns>\n\t\t\t\t<column/>\n\t\t\t\t<column flex=\"1\"/>\n\t\t\t\t<column flex=\"1\"/>\n\t\t\t\t<column flex=\"1\"/>\n\t\t\t</columns>\n\t\t\t<rows>\n\t\t\t\t<row>\n\t\t\t\t\t<label value=\"\" />\n\t\t\t\t\t<label value=\"${HadoopEnter.Type.Label}\" />\n\t\t\t\t\t<label value=\"${HadoopEnter.Length.Label}\" />\n\t\t\t\t\t<label value=\"${HadoopEnter.Precision.Label}\" />\n\t\t\t\t</row>\n\t\t\t\t<row>\n\t\t\t\t\t<label value=\"${HadoopEnter.InKey.Label}\" />\n\t\t\t\t\t<menulist id=\"input-key-type\" flex=\"1\" editable=\"true\">\n\t\t\t\t\t\t<menupopup>\n\t\t\t\t\t\t</menupopup>\n\t\t\t\t\t</menulist>\n\t\t\t\t\t<textbox id=\"input-key-length\" flex=\"1\" multiline=\"false\"/>\n\t\t\t\t\t<textbox id=\"input-key-precision\" flex=\"1\" multiline=\"false\"/>\n\t\t\t\t</row>\n\t\t\t\t<row>\n\t\t\t\t\t<label value=\"${HadoopEnter.InValue.Label}\" />\n\t\t\t\t\t<menulist id=\"input-value-type\" flex=\"1\" editable=\"true\">\n\t\t\t\t\t\t<menupopup>\n\t\t\t\t\t\t</menupopup>\n\t\t\t\t\t</menulist>\n\t\t\t\t\t<textbox id=\"input-value-length\" flex=\"1\" 
multiline=\"false\"/>\n\t\t\t\t\t<textbox id=\"input-value-precision\" flex=\"1\" multiline=\"false\"/>\n\t\t\t\t</row>\n\t\t\t</rows>\n\t\t</grid>\n\t</vbox>\n</dialog>"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/main/resources/org/pentaho/big/data/kettle/plugins/mapreduce/ui/step/exit/dialog.xul",
    "content": "<?xml version=\"1.0\"?>\n<?xml-stylesheet href=\"chrome://global/skin/\" type=\"text/css\"?>\n<dialog id=\"hadoop-exit-dialog\"\n\tpack=\"true\"\n    xmlns=\"http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul\"\n    xmlns:pen=\"http://www.pentaho.org/2008/xul\" \n\ttitle=\"${StepConfigruationDialog.Title}\"\n\tresizable=\"true\"\n    height=\"250\" width=\"600\"\n\tappicon=\"ui/images/spoon.ico\"\n\tbuttons=\"extra1,accept,cancel\"\n\tbuttonalign=\"end\" \n\tbuttonlabelextra1=\"${Dialog.Help}\"\n\tbuttonlabelaccept=\"${Dialog.Accept}\"\n\tbuttonlabelcancel=\"${Dialog.Cancel}\" \n\tondialogextra1=\"handler.onHelp()\"\n\tondialogaccept=\"handler.onAccept()\"\n\tondialogcancel=\"handler.onCancel()\">\n\n\t<vbox>\n\t\t<grid>\n\t\t\t<columns>\n\t\t\t\t<column/>\n\t\t\t\t<column flex=\"1\"/>\n\t\t\t</columns>\n\t\t\t<rows>\n\t\t\t\t<row>\n\t\t\t\t\t<label value=\"${Step.Name.Label}\"/>\n\t\t\t\t\t<textbox id=\"step-name\" flex=\"1\" multiline=\"false\"/>\n\t\t\t\t</row>\n\t\t\t\t<row>\n\t\t\t\t\t<label value=\"${HadoopExit.OutKey.Label}\" />\n\t\t\t\t\t<menulist id=\"output-key-fieldname\" flex=\"1\" editable=\"true\" pen:binding=\"name\">\n\t\t\t\t\t\t<menupopup>\n\t\t\t\t\t\t</menupopup>\n\t\t\t\t\t</menulist>\n\t\t\t\t</row>\n\t\t\t\t<row>\n\t\t\t\t\t<label value=\"${HadoopExit.OutValue.Label}\" />\n\t\t\t\t\t<menulist id=\"output-value-fieldname\" flex=\"1\" editable=\"true\" pen:binding=\"name\">\n\t\t\t\t\t\t<menupopup>\n\t\t\t\t\t\t</menupopup>\n\t\t\t\t\t</menulist>\n\t\t\t\t</row>\n\t\t\t</rows>\n\t\t</grid>\n\t</vbox>\n</dialog>"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/java/org/pentaho/big/data/kettle/plugins/mapreduce/DialogClassUtilTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce;\n\nimport org.junit.Test;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.hadoop.JobEntryHadoopJobExecutor;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.pmr.JobEntryHadoopTransJobExecutor;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.step.enter.HadoopEnterMeta;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.step.exit.HadoopExitMeta;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.ui.entry.hadoop.JobEntryHadoopJobExecutorDialog;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.ui.entry.pmr.JobEntryHadoopTransJobExecutorDialog;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.ui.step.enter.HadoopEnterDialog;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.ui.step.exit.HadoopExitDialog;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertNotNull;\n\n/**\n * Created by bryan on 1/15/16.\n */\npublic class DialogClassUtilTest {\n  @Test\n  public void testConstructor() {\n    assertNotNull( new DialogClassUtil() );\n  }\n\n  @Test\n  public void testJobEntryHadoopTransJobExecutor() {\n    assertEquals( JobEntryHadoopTransJobExecutorDialog.class.getCanonicalName(), DialogClassUtil.getDialogClassName(\n      JobEntryHadoopTransJobExecutor.class ) );\n  }\n\n  @Test\n  public void testJobEntryHadoopJobExecutor() {\n    assertEquals( JobEntryHadoopJobExecutorDialog.class.getCanonicalName(), DialogClassUtil.getDialogClassName(\n      JobEntryHadoopJobExecutor.class ) );\n  }\n\n  @Test\n  public void 
testHadoopExit() {\n    assertEquals( HadoopExitDialog.class.getCanonicalName(), DialogClassUtil.getDialogClassName(\n      HadoopExitMeta.class ) );\n  }\n\n  @Test\n  public void testHadoopEnter() {\n    assertEquals( HadoopEnterDialog.class.getCanonicalName(), DialogClassUtil.getDialogClassName(\n      HadoopEnterMeta.class ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/java/org/pentaho/big/data/kettle/plugins/mapreduce/JobEntryHadoopTransJobExecutorTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce;\n\nimport org.junit.BeforeClass;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.pmr.JobEntryHadoopTransJobExecutor;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\npublic class JobEntryHadoopTransJobExecutorTest {\n  private static NamedClusterService namedClusterService;\n  private static RuntimeTestActionService runtimeTestActionService;\n  private static RuntimeTester runtimeTester;\n  private static NamedClusterServiceLocator namedClusterServiceLocator;\n  private static JobEntryHadoopTransJobExecutor exec;\n  private static Repository rep;\n  private static ObjectId oid;\n  private static IMetaStore metaStore;\n\n  @BeforeClass\n  public static final void setup() throws Throwable {\n    namedClusterService = mock( NamedClusterService.class );\n    when( namedClusterService.getClusterTemplate() ).thenReturn( mock( NamedCluster.class ) );\n    runtimeTestActionService = mock( RuntimeTestActionService.class );\n    runtimeTester = mock(\n    
  RuntimeTester.class );\n    namedClusterServiceLocator = mock( NamedClusterServiceLocator.class );\n    rep = mock( Repository.class );\n    metaStore = mock( IMetaStore.class );\n    oid = mock( ObjectId.class );\n    exec = new JobEntryHadoopTransJobExecutor( namedClusterService, runtimeTestActionService, runtimeTester,\n      namedClusterServiceLocator );\n  }\n\n  @Test\n  public void loadRep_num_map_tasks_null() throws Throwable {\n    when( rep.getJobEntryAttributeString( oid, \"num_map_tasks\" ) ).thenReturn( null );\n    exec.loadRep( rep, metaStore, oid, null, null );\n    assertEquals( null, exec.getNumMapTasks() );\n  }\n\n  @Test\n  public void loadRep_num_map_tasks_empty() throws Throwable {\n    when( rep.getJobEntryAttributeString( oid, \"num_map_tasks\" ) ).thenReturn( \"\" );\n    exec.loadRep( rep, metaStore, oid, null, null );\n    assertEquals( \"\", exec.getNumMapTasks() );\n  }\n\n  @Test\n  public void loadRep_num_map_tasks_variable() throws Throwable {\n    when( rep.getJobEntryAttributeString( oid, \"num_map_tasks\" ) ).thenReturn( \"${test}\" );\n    exec.loadRep( rep, metaStore, oid, null, null );\n    assertEquals( \"${test}\", exec.getNumMapTasks() );\n  }\n\n  @Test\n  public void loadRep_num_map_tasks_number() throws Throwable {\n    when( rep.getJobEntryAttributeString( oid, \"num_map_tasks\" ) ).thenReturn( \"5\" );\n    exec.loadRep( rep, metaStore, oid, null, null );\n    assertEquals( \"5\", exec.getNumMapTasks() );\n  }\n\n  @Test\n  public void loadRep_num_reduce_tasks_null() throws Throwable {\n    when( rep.getJobEntryAttributeString( oid, \"num_reduce_tasks\" ) ).thenReturn( null );\n    exec.loadRep( rep, metaStore, oid, null, null );\n    assertEquals( null, exec.getNumReduceTasks() );\n  }\n\n  @Test\n  public void loadRep_num_reduce_tasks_empty() throws Throwable {\n    when( rep.getJobEntryAttributeString( oid, \"num_reduce_tasks\" ) ).thenReturn( \"\" );\n    exec.loadRep( rep, metaStore, oid, null, null );\n    
assertEquals( \"\", exec.getNumReduceTasks() );\n  }\n\n  @Test\n  public void loadRep_num_reduce_tasks_variable() throws Throwable {\n    when( rep.getJobEntryAttributeString( oid, \"num_reduce_tasks\" ) ).thenReturn( \"${test}\" );\n    exec.loadRep( rep, metaStore, oid, null, null );\n    assertEquals( \"${test}\", exec.getNumReduceTasks() );\n  }\n\n  @Test\n  public void loadRep_num_reduce_tasks_number() throws Throwable {\n    when( rep.getJobEntryAttributeString( oid, \"num_reduce_tasks\" ) ).thenReturn( \"5\" );\n    exec.loadRep( rep, metaStore, oid, null, null );\n    assertEquals( \"5\", exec.getNumReduceTasks() );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/java/org/pentaho/big/data/kettle/plugins/mapreduce/entry/NamedClusterLoadSaveUtilTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.entry;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.hadoop.JobEntryHadoopJobExecutor;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.w3c.dom.Element;\nimport org.w3c.dom.Node;\nimport org.xml.sax.InputSource;\nimport org.xml.sax.SAXException;\n\nimport javax.xml.parsers.DocumentBuilderFactory;\nimport javax.xml.parsers.ParserConfigurationException;\nimport java.io.ByteArrayInputStream;\nimport java.io.IOException;\nimport java.io.StringReader;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.fail;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\nimport static org.mockito.Mockito.verify;\nimport static 
org.mockito.Mockito.never;\nimport static org.mockito.Mockito.eq;\n\n/**\n * Created by bryan on 1/13/16.\n */\npublic class NamedClusterLoadSaveUtilTest {\n  private NamedClusterService namedClusterService;\n  private RuntimeTestActionService runtimeTestActionService;\n  private RuntimeTester runtimeTester;\n  private NamedClusterServiceLocator namedClusterServiceLocator;\n  private IMetaStore metaStore;\n  private NamedCluster namedCluster;\n  private Repository repository;\n  private ObjectId objectId;\n  private NamedClusterLoadSaveUtil namedClusterLoadSaveUtil;\n  private LogChannelInterface logChannelInterface;\n  private ObjectId id_job;\n\n  @Before\n  public void setup() {\n    namedClusterService = mock( NamedClusterService.class );\n    runtimeTestActionService = mock( RuntimeTestActionService.class );\n    runtimeTester = mock( RuntimeTester.class );\n    namedClusterServiceLocator = mock( NamedClusterServiceLocator.class );\n    metaStore = mock( IMetaStore.class );\n    namedCluster = mock( NamedCluster.class );\n    repository = mock( Repository.class );\n    objectId = mock( ObjectId.class );\n    id_job = mock( ObjectId.class );\n    logChannelInterface = mock( LogChannelInterface.class );\n    namedClusterLoadSaveUtil = new NamedClusterLoadSaveUtil();\n  }\n\n  private void addTag( StringBuilder builder, String tag, String value ) {\n    builder.append( \"<\" ).append( tag ).append( \">\" );\n    builder.append( value );\n    builder.append( \"</\" ).append( tag ).append( \">\" );\n  }\n\n  private Node parseNamedClusterXml( String namedClusterXml )\n    throws ParserConfigurationException, IOException, SAXException {\n    return DocumentBuilderFactory.newInstance().newDocumentBuilder()\n      .parse( new InputSource( new StringReader( \"<nc>\" + namedClusterXml + \"</nc>\" ) ) ).getFirstChild();\n  }\n\n  @Test\n  public void testLoadClusterConfigFoundXml()\n    throws ParserConfigurationException, IOException, SAXException, MetaStoreException 
{\n    String testName = \"testName\";\n    when( namedClusterService.contains( testName, metaStore ) ).thenReturn( true );\n    when( namedClusterService.read( testName, metaStore ) ).thenReturn( namedCluster );\n    StringBuilder stringBuilder = new StringBuilder( \"<job>\" );\n    addTag( stringBuilder, JobEntryHadoopJobExecutor.CLUSTER_NAME, testName );\n    stringBuilder.append( \"</job>\" );\n    String xml = stringBuilder.toString();\n    Element documentElement =\n      DocumentBuilderFactory.newInstance().newDocumentBuilder().parse( new ByteArrayInputStream( xml.getBytes() ) )\n        .getDocumentElement();\n    assertEquals( namedCluster, namedClusterLoadSaveUtil\n      .loadClusterConfig( namedClusterService, null, null, metaStore, documentElement, logChannelInterface ) );\n  }\n\n  @Test\n  public void testLoadClusterConfigNotFoundXml()\n    throws ParserConfigurationException, IOException, SAXException, MetaStoreException {\n    String testName = \"testName\";\n    String testHost = \"testHost\";\n    String hdfsPort = \"8080\";\n    String jobTrackerHost = \"jobTrackerHost\";\n    String jobTrackerPort = \"8081\";\n    when( namedClusterService.contains( testName, metaStore ) ).thenReturn( false );\n    when( namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n    StringBuilder stringBuilder = new StringBuilder( \"<job>\" );\n    addTag( stringBuilder, JobEntryHadoopJobExecutor.CLUSTER_NAME, testName );\n    addTag( stringBuilder, JobEntryHadoopJobExecutor.HDFS_HOSTNAME, testHost );\n    addTag( stringBuilder, JobEntryHadoopJobExecutor.HDFS_PORT, hdfsPort );\n    addTag( stringBuilder, JobEntryHadoopJobExecutor.JOB_TRACKER_HOSTNAME, jobTrackerHost );\n    addTag( stringBuilder, JobEntryHadoopJobExecutor.JOB_TRACKER_PORT, jobTrackerPort );\n    stringBuilder.append( \"</job>\" );\n    String xml = stringBuilder.toString();\n    Element documentElement =\n      DocumentBuilderFactory.newInstance().newDocumentBuilder().parse( new 
ByteArrayInputStream( xml.getBytes() ) )\n        .getDocumentElement();\n    assertEquals( namedCluster, namedClusterLoadSaveUtil\n      .loadClusterConfig( namedClusterService, null, null, metaStore, documentElement, logChannelInterface ) );\n    verify( namedCluster ).setHdfsHost( testHost );\n    verify( namedCluster ).setHdfsPort( hdfsPort );\n    verify( namedCluster ).setJobTrackerHost( jobTrackerHost );\n    verify( namedCluster ).setJobTrackerPort( jobTrackerPort );\n  }\n\n  @Test\n  public void testLoadClusterConfigFoundRepo()\n    throws ParserConfigurationException, IOException, SAXException, MetaStoreException, KettleException {\n    String testName = \"testName\";\n    when( namedClusterService.contains( testName, metaStore ) ).thenReturn( true );\n    when( namedClusterService.read( testName, metaStore ) ).thenReturn( namedCluster );\n    when( repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.CLUSTER_NAME ) )\n      .thenReturn( testName );\n    assertEquals( namedCluster, namedClusterLoadSaveUtil\n      .loadClusterConfig( namedClusterService, objectId, repository, metaStore, null, logChannelInterface ) );\n  }\n\n  @Test\n  public void testLoadClusterConfigNotFoundRepo() throws ParserConfigurationException, IOException, SAXException,\n    MetaStoreException, KettleException {\n    String testName = \"testName\";\n    String testHost = \"testHost\";\n    String hdfsPort = \"8080\";\n    String jobTrackerHost = \"jobTrackerHost\";\n    String jobTrackerPort = \"8081\";\n    when( namedClusterService.contains( testName, metaStore ) ).thenReturn( false );\n    when( namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n    when( repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.CLUSTER_NAME ) ).thenReturn(\n      testName );\n    when( repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.HDFS_HOSTNAME ) ).thenReturn(\n      testHost );\n    when( 
repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.HDFS_PORT ) ).thenReturn(\n      hdfsPort );\n    when( repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.JOB_TRACKER_HOSTNAME ) )\n      .thenReturn( jobTrackerHost );\n    when( repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.JOB_TRACKER_PORT ) ).thenReturn(\n      jobTrackerPort );\n    assertEquals( namedCluster, namedClusterLoadSaveUtil.loadClusterConfig( namedClusterService, objectId, repository,\n      metaStore, null, logChannelInterface ) );\n    verify( namedCluster ).setHdfsHost( testHost );\n    verify( namedCluster ).setHdfsPort( hdfsPort );\n    verify( namedCluster ).setJobTrackerHost( jobTrackerHost );\n    verify( namedCluster ).setJobTrackerPort( jobTrackerPort );\n  }\n\n  @Test\n  public void testLoadClusterConfigNoNodeOrRep() {\n    assertNull( namedClusterLoadSaveUtil\n      .loadClusterConfig( namedClusterService, id_job, null, metaStore, null, logChannelInterface ) );\n  }\n\n  @Test\n  public void testLoadClusterConfigExceptions() throws MetaStoreException, KettleException {\n    String testName = \"testName\";\n    String testHost = \"testHost\";\n    String hdfsPort = \"8080\";\n    String jobTrackerHost = \"jobTrackerHost\";\n    String jobTrackerPort = \"8081\";\n    MetaStoreException metaStoreException = new MetaStoreException( \"msg\" );\n\n    when( namedClusterService.contains( testName, metaStore ) ).thenReturn( true );\n    when( namedClusterService.read( testName, metaStore ) ).thenThrow( metaStoreException );\n\n    when( namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n    when( repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.CLUSTER_NAME ) ).thenReturn(\n      testName );\n    when( repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.HDFS_HOSTNAME ) ).thenReturn(\n      testHost );\n    when( 
repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.HDFS_PORT ) ).thenReturn(\n      hdfsPort );\n    when( repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.JOB_TRACKER_HOSTNAME ) )\n      .thenReturn( jobTrackerHost );\n    KettleException kettleException = new KettleException( \"msg2\" );\n    when( repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.JOB_TRACKER_PORT ) ).thenThrow(\n      kettleException );\n    assertEquals( namedCluster, namedClusterLoadSaveUtil.loadClusterConfig( namedClusterService, objectId, repository,\n      metaStore, null, logChannelInterface ) );\n    verify( logChannelInterface ).logDebug( metaStoreException.getMessage(), metaStoreException );\n    verify( logChannelInterface ).logError( kettleException.getMessage(), kettleException );\n    verify( namedCluster ).setHdfsHost( testHost );\n    verify( namedCluster ).setHdfsPort( hdfsPort );\n    verify( namedCluster ).setJobTrackerHost( jobTrackerHost );\n  }\n\n  @Test\n  public void testLoadClusterConfigNoMetastore() throws MetaStoreException, KettleException {\n    String testName = \"testName\";\n    String testHost = \"testHost\";\n    String hdfsPort = \"8080\";\n    String jobTrackerHost = \"jobTrackerHost\";\n    String jobTrackerPort = \"8081\";\n    MetaStoreException metaStoreException = new MetaStoreException( \"msg\" );\n\n    when( namedClusterService.contains( testName, metaStore ) ).thenReturn( true );\n    when( namedClusterService.read( testName, metaStore ) ).thenThrow( metaStoreException );\n\n    when( namedClusterService.getClusterTemplate() ).thenReturn( namedCluster );\n    when( repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.CLUSTER_NAME ) ).thenReturn(\n      testName );\n    when( repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.HDFS_HOSTNAME ) ).thenReturn(\n      testHost );\n    when( repository.getJobEntryAttributeString( objectId, 
JobEntryHadoopJobExecutor.HDFS_PORT ) ).thenReturn(\n      hdfsPort );\n    when( repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.JOB_TRACKER_HOSTNAME ) )\n      .thenReturn( jobTrackerHost );\n    when( repository.getJobEntryAttributeString( objectId, JobEntryHadoopJobExecutor.JOB_TRACKER_PORT ) )\n      .thenReturn( jobTrackerPort );\n    assertEquals( namedCluster, namedClusterLoadSaveUtil.loadClusterConfig( namedClusterService, objectId, repository,\n      null, null, logChannelInterface ) );\n    verify( namedClusterService, never() ).read( eq( testName ), any( IMetaStore.class ) );\n    verify( namedCluster ).setHdfsHost( testHost );\n    verify( namedCluster ).setHdfsPort( hdfsPort );\n    verify( namedCluster ).setJobTrackerHost( jobTrackerHost );\n    verify( namedCluster ).setJobTrackerPort( jobTrackerPort );\n  }\n\n  @Test\n  public void testGetXmlNamedCluster_NoNPEWhenNCIsNull() {\n    StringBuilder ncString = new StringBuilder();\n    try {\n      namedClusterLoadSaveUtil.getXmlNamedCluster( null, namedClusterService, null, logChannelInterface, ncString );\n      assertEquals( \"No NamedCluster info should have been added, but it was: \" + ncString.toString(), 0, ncString\n        .length() );\n    } catch ( NullPointerException ex ) {\n      fail( \"NPE occurred but should not have: \" + ex );\n    }\n  }\n\n  @Test\n  public void testSaveNamedClusterRep_NoNPEWhenNCIsNull() throws KettleException {\n    try {\n      namedClusterLoadSaveUtil.saveNamedClusterRep( null, namedClusterService, repository, metaStore, objectId,\n        objectId, logChannelInterface );\n      verify( repository, never() ).saveJobEntryAttribute( any( ObjectId.class ), any( ObjectId.class ), anyString(),\n        anyString() );\n    } catch ( NullPointerException ex ) {\n      fail( \"NPE occurred but should not have: \" + ex );\n    }\n  }\n\n  @Test\n  public void testSaveNamedClusterRep() throws KettleException, MetaStoreException {\n    String testNcName = 
\"testNcName\";\n    String hdfsHost = \"hdfsHost\";\n    String hdfsPort = \"hdfsPort\";\n    String jobTrackerHost = \"jobTrackerHost\";\n    String jobTrackerPort = \"jobTrackerPort\";\n\n    when( namedCluster.getName() ).thenReturn( testNcName );\n\n    NamedCluster readNamedCluster = mock( NamedCluster.class );\n\n    when( readNamedCluster.getHdfsHost() ).thenReturn( hdfsHost );\n    when( readNamedCluster.getHdfsPort() ).thenReturn( hdfsPort );\n    when( readNamedCluster.getJobTrackerHost() ).thenReturn( jobTrackerHost );\n    when( readNamedCluster.getJobTrackerPort() ).thenReturn( jobTrackerPort );\n\n    when( namedClusterService.contains( testNcName, metaStore ) ).thenReturn( true );\n    when( namedClusterService.read( testNcName, metaStore ) ).thenReturn( readNamedCluster );\n\n    namedClusterLoadSaveUtil\n      .saveNamedClusterRep( namedCluster, namedClusterService, repository, metaStore, id_job, objectId,\n        logChannelInterface );\n\n    verify( repository ).saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.CLUSTER_NAME, testNcName );\n    verify( repository ).saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.HDFS_HOSTNAME, hdfsHost );\n    verify( repository ).saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.HDFS_PORT, hdfsPort );\n    verify( repository )\n      .saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.JOB_TRACKER_HOSTNAME, jobTrackerHost );\n    verify( repository )\n      .saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.JOB_TRACKER_PORT, jobTrackerPort );\n  }\n\n  @Test\n  public void testSaveNamedClusterRepNoName() throws KettleException, MetaStoreException {\n    String hdfsHost = \"hdfsHost\";\n    String hdfsPort = \"hdfsPort\";\n    String jobTrackerHost = \"jobTrackerHost\";\n    String jobTrackerPort = \"jobTrackerPort\";\n\n    when( namedCluster.getHdfsHost() ).thenReturn( hdfsHost );\n    when( namedCluster.getHdfsPort() ).thenReturn( 
hdfsPort );\n    when( namedCluster.getJobTrackerHost() ).thenReturn( jobTrackerHost );\n    when( namedCluster.getJobTrackerPort() ).thenReturn( jobTrackerPort );\n\n    namedClusterLoadSaveUtil\n      .saveNamedClusterRep( namedCluster, namedClusterService, repository, metaStore, id_job, objectId,\n        logChannelInterface );\n\n    verify( repository, never() )\n      .saveJobEntryAttribute( eq( id_job ), eq( objectId ), eq( NamedClusterLoadSaveUtil.CLUSTER_NAME ), anyString() );\n    verify( repository ).saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.HDFS_HOSTNAME, hdfsHost );\n    verify( repository ).saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.HDFS_PORT, hdfsPort );\n    verify( repository )\n      .saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.JOB_TRACKER_HOSTNAME, jobTrackerHost );\n    verify( repository )\n      .saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.JOB_TRACKER_PORT, jobTrackerPort );\n  }\n\n  @Test\n  public void testSaveNamedClusterRepExceptionReading() throws KettleException, MetaStoreException {\n    String testNcName = \"testNcName\";\n    String hdfsHost = \"hdfsHost\";\n    String hdfsPort = \"hdfsPort\";\n    String jobTrackerHost = \"jobTrackerHost\";\n    String jobTrackerPort = \"jobTrackerPort\";\n\n    when( namedCluster.getName() ).thenReturn( testNcName );\n\n    when( namedCluster.getHdfsHost() ).thenReturn( hdfsHost );\n    when( namedCluster.getHdfsPort() ).thenReturn( hdfsPort );\n    when( namedCluster.getJobTrackerHost() ).thenReturn( jobTrackerHost );\n    when( namedCluster.getJobTrackerPort() ).thenReturn( jobTrackerPort );\n\n    when( namedClusterService.contains( testNcName, metaStore ) ).thenReturn( true );\n    MetaStoreException metaStoreException = new MetaStoreException( \"msg\" );\n    when( namedClusterService.read( testNcName, metaStore ) ).thenThrow( metaStoreException );\n\n    namedClusterLoadSaveUtil\n      
.saveNamedClusterRep( namedCluster, namedClusterService, repository, metaStore, id_job, objectId,\n        logChannelInterface );\n\n    verify( logChannelInterface ).logDebug( metaStoreException.getMessage(), metaStoreException );\n    verify( repository ).saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.CLUSTER_NAME, testNcName );\n    verify( repository ).saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.HDFS_HOSTNAME, hdfsHost );\n    verify( repository ).saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.HDFS_PORT, hdfsPort );\n    verify( repository )\n      .saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.JOB_TRACKER_HOSTNAME, jobTrackerHost );\n    verify( repository )\n      .saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.JOB_TRACKER_PORT, jobTrackerPort );\n  }\n\n  @Test\n  public void testSaveNamedClusterRepNotContains() throws KettleException, MetaStoreException {\n    String testNcName = \"testNcName\";\n    String hdfsHost = \"hdfsHost\";\n    String hdfsPort = \"hdfsPort\";\n    String jobTrackerHost = \"jobTrackerHost\";\n    String jobTrackerPort = \"jobTrackerPort\";\n\n    when( namedCluster.getName() ).thenReturn( testNcName );\n\n    when( namedCluster.getHdfsHost() ).thenReturn( hdfsHost );\n    when( namedCluster.getHdfsPort() ).thenReturn( hdfsPort );\n    when( namedCluster.getJobTrackerHost() ).thenReturn( jobTrackerHost );\n    when( namedCluster.getJobTrackerPort() ).thenReturn( jobTrackerPort );\n\n    when( namedClusterService.contains( testNcName, metaStore ) ).thenReturn( false );\n\n    namedClusterLoadSaveUtil\n      .saveNamedClusterRep( namedCluster, namedClusterService, repository, metaStore, id_job, objectId,\n        logChannelInterface );\n\n    verify( namedClusterService, never() ).read( testNcName, metaStore );\n    verify( repository ).saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.CLUSTER_NAME, testNcName );\n    
verify( repository ).saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.HDFS_HOSTNAME, hdfsHost );\n    verify( repository ).saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.HDFS_PORT, hdfsPort );\n    verify( repository )\n      .saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.JOB_TRACKER_HOSTNAME, jobTrackerHost );\n    verify( repository )\n      .saveJobEntryAttribute( id_job, objectId, NamedClusterLoadSaveUtil.JOB_TRACKER_PORT, jobTrackerPort );\n  }\n\n  @Test\n  public void testGetXmlNamedCluster()\n    throws KettleException, MetaStoreException, IOException, SAXException, ParserConfigurationException {\n    String testNcName = \"testNcName\";\n    String hdfsHost = \"hdfsHost\";\n    String hdfsPort = \"hdfsPort\";\n    String jobTrackerHost = \"jobTrackerHost\";\n    String jobTrackerPort = \"jobTrackerPort\";\n\n    when( namedCluster.getName() ).thenReturn( testNcName );\n\n    NamedCluster readNamedCluster = mock( NamedCluster.class );\n\n    when( readNamedCluster.getHdfsHost() ).thenReturn( hdfsHost );\n    when( readNamedCluster.getHdfsPort() ).thenReturn( hdfsPort );\n    when( readNamedCluster.getJobTrackerHost() ).thenReturn( jobTrackerHost );\n    when( readNamedCluster.getJobTrackerPort() ).thenReturn( jobTrackerPort );\n\n    when( namedClusterService.contains( testNcName, metaStore ) ).thenReturn( true );\n    when( namedClusterService.read( testNcName, metaStore ) ).thenReturn( readNamedCluster );\n\n    StringBuilder stringBuilder = new StringBuilder();\n    namedClusterLoadSaveUtil\n      .getXmlNamedCluster( namedCluster, namedClusterService, metaStore, logChannelInterface, stringBuilder );\n\n    Node node = parseNamedClusterXml( stringBuilder.toString() );\n    assertEquals( testNcName, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.CLUSTER_NAME ) );\n    assertEquals( hdfsHost, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.HDFS_HOSTNAME ) );\n    assertEquals( hdfsPort, 
XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.HDFS_PORT ) );\n    assertEquals( jobTrackerHost, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.JOB_TRACKER_HOSTNAME ) );\n    assertEquals( jobTrackerPort, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.JOB_TRACKER_PORT ) );\n  }\n\n  @Test\n  public void testGetXmlNamedClusterEmptyName()\n    throws KettleException, MetaStoreException, IOException, SAXException, ParserConfigurationException {\n    String hdfsHost = \"hdfsHost\";\n    String hdfsPort = \"hdfsPort\";\n    String jobTrackerHost = \"jobTrackerHost\";\n    String jobTrackerPort = \"jobTrackerPort\";\n\n    when( namedCluster.getHdfsHost() ).thenReturn( hdfsHost );\n    when( namedCluster.getHdfsPort() ).thenReturn( hdfsPort );\n    when( namedCluster.getJobTrackerHost() ).thenReturn( jobTrackerHost );\n    when( namedCluster.getJobTrackerPort() ).thenReturn( jobTrackerPort );\n\n    StringBuilder stringBuilder = new StringBuilder();\n    namedClusterLoadSaveUtil\n      .getXmlNamedCluster( namedCluster, namedClusterService, metaStore, logChannelInterface, stringBuilder );\n\n    Node node = parseNamedClusterXml( stringBuilder.toString() );\n    assertEquals( null, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.CLUSTER_NAME ) );\n    assertEquals( hdfsHost, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.HDFS_HOSTNAME ) );\n    assertEquals( hdfsPort, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.HDFS_PORT ) );\n    assertEquals( jobTrackerHost, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.JOB_TRACKER_HOSTNAME ) );\n    assertEquals( jobTrackerPort, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.JOB_TRACKER_PORT ) );\n  }\n\n  @Test\n  public void testGetXmlNamedClusterNullMetastore()\n    throws KettleException, MetaStoreException, IOException, SAXException, ParserConfigurationException {\n    String testNcName = \"testNcName\";\n    String hdfsHost = \"hdfsHost\";\n    String hdfsPort = 
\"hdfsPort\";\n    String jobTrackerHost = \"jobTrackerHost\";\n    String jobTrackerPort = \"jobTrackerPort\";\n\n    when( namedCluster.getName() ).thenReturn( testNcName );\n\n    when( namedCluster.getHdfsHost() ).thenReturn( hdfsHost );\n    when( namedCluster.getHdfsPort() ).thenReturn( hdfsPort );\n    when( namedCluster.getJobTrackerHost() ).thenReturn( jobTrackerHost );\n    when( namedCluster.getJobTrackerPort() ).thenReturn( jobTrackerPort );\n\n    StringBuilder stringBuilder = new StringBuilder();\n    namedClusterLoadSaveUtil\n      .getXmlNamedCluster( namedCluster, namedClusterService, null, logChannelInterface, stringBuilder );\n\n    Node node = parseNamedClusterXml( stringBuilder.toString() );\n    assertEquals( testNcName, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.CLUSTER_NAME ) );\n    assertEquals( hdfsHost, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.HDFS_HOSTNAME ) );\n    assertEquals( hdfsPort, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.HDFS_PORT ) );\n    assertEquals( jobTrackerHost, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.JOB_TRACKER_HOSTNAME ) );\n    assertEquals( jobTrackerPort, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.JOB_TRACKER_PORT ) );\n  }\n\n  @Test\n  public void testGetXmlNamedClusterNotContains()\n    throws KettleException, MetaStoreException, IOException, SAXException, ParserConfigurationException {\n    String testNcName = \"testNcName\";\n    String hdfsHost = \"hdfsHost\";\n    String hdfsPort = \"hdfsPort\";\n    String jobTrackerHost = \"jobTrackerHost\";\n    String jobTrackerPort = \"jobTrackerPort\";\n\n    when( namedCluster.getName() ).thenReturn( testNcName );\n\n    when( namedCluster.getHdfsHost() ).thenReturn( hdfsHost );\n    when( namedCluster.getHdfsPort() ).thenReturn( hdfsPort );\n    when( namedCluster.getJobTrackerHost() ).thenReturn( jobTrackerHost );\n    when( namedCluster.getJobTrackerPort() ).thenReturn( jobTrackerPort );\n\n    
when( namedClusterService.contains( testNcName, metaStore ) ).thenReturn( false );\n\n    StringBuilder stringBuilder = new StringBuilder();\n    namedClusterLoadSaveUtil\n      .getXmlNamedCluster( namedCluster, namedClusterService, metaStore, logChannelInterface, stringBuilder );\n\n    Node node = parseNamedClusterXml( stringBuilder.toString() );\n    verify( namedClusterService, never() ).read( anyString(), eq( metaStore ) );\n    assertEquals( testNcName, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.CLUSTER_NAME ) );\n    assertEquals( hdfsHost, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.HDFS_HOSTNAME ) );\n    assertEquals( hdfsPort, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.HDFS_PORT ) );\n    assertEquals( jobTrackerHost, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.JOB_TRACKER_HOSTNAME ) );\n    assertEquals( jobTrackerPort, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.JOB_TRACKER_PORT ) );\n  }\n\n  @Test\n  public void testGetXmlNamedClusterMetastoreException()\n    throws KettleException, MetaStoreException, IOException, SAXException, ParserConfigurationException {\n    String testNcName = \"testNcName\";\n    String hdfsHost = \"hdfsHost\";\n    String hdfsPort = \"hdfsPort\";\n    String jobTrackerHost = \"jobTrackerHost\";\n    String jobTrackerPort = \"jobTrackerPort\";\n\n    when( namedCluster.getName() ).thenReturn( testNcName );\n\n    when( namedCluster.getHdfsHost() ).thenReturn( hdfsHost );\n    when( namedCluster.getHdfsPort() ).thenReturn( hdfsPort );\n    when( namedCluster.getJobTrackerHost() ).thenReturn( jobTrackerHost );\n    when( namedCluster.getJobTrackerPort() ).thenReturn( jobTrackerPort );\n\n    when( namedClusterService.contains( testNcName, metaStore ) ).thenReturn( true );\n    MetaStoreException metaStoreException = new MetaStoreException( \"msg\" );\n    when( namedClusterService.read( testNcName, metaStore ) ).thenThrow( metaStoreException );\n\n    StringBuilder 
stringBuilder = new StringBuilder();\n    namedClusterLoadSaveUtil\n      .getXmlNamedCluster( namedCluster, namedClusterService, metaStore, logChannelInterface, stringBuilder );\n\n    Node node = parseNamedClusterXml( stringBuilder.toString() );\n    verify( logChannelInterface ).logDebug( metaStoreException.getMessage(), metaStoreException );\n    assertEquals( testNcName, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.CLUSTER_NAME ) );\n    assertEquals( hdfsHost, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.HDFS_HOSTNAME ) );\n    assertEquals( hdfsPort, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.HDFS_PORT ) );\n    assertEquals( jobTrackerHost, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.JOB_TRACKER_HOSTNAME ) );\n    assertEquals( jobTrackerPort, XMLHandler.getTagValue( node, NamedClusterLoadSaveUtil.JOB_TRACKER_PORT ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/java/org/pentaho/big/data/kettle/plugins/mapreduce/entry/UserDefinedItemTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.entry;\n\nimport org.junit.Before;\nimport org.junit.Test;\n\nimport java.beans.PropertyChangeListener;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verifyNoMoreInteractions;\n\n/**\n * Created by bryan on 1/14/16.\n */\npublic class UserDefinedItemTest {\n  private UserDefinedItem userDefinedItem;\n\n  @Before\n  public void setup() {\n    userDefinedItem = new UserDefinedItem();\n  }\n\n  @Test\n  public void testGetSetName() {\n    String testName = \"testName\";\n    userDefinedItem.setName( testName );\n    assertEquals( testName, userDefinedItem.getName() );\n  }\n\n  @Test\n  public void testGetSetValue() {\n    String testValue = \"testValue\";\n    userDefinedItem.setValue( testValue );\n    assertEquals( testValue, userDefinedItem.getValue() );\n  }\n\n  @Test\n  public void testAddRemovePropertyChangeListener() {\n    PropertyChangeListener propertyChangeListener = mock( PropertyChangeListener.class );\n    userDefinedItem.addPropertyChangeListener( propertyChangeListener );\n    userDefinedItem.setName( \"test\" );\n    userDefinedItem.setValue( \"test2\" );\n    userDefinedItem.removePropertyChangeListener( propertyChangeListener );\n    verifyNoMoreInteractions( propertyChangeListener );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/java/org/pentaho/big/data/kettle/plugins/mapreduce/entry/hadoop/JobEntryHadoopJobExecutorTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.entry.hadoop;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\nimport java.net.MalformedURLException;\nimport java.net.URL;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\n/**\n * User: Dzmitry Stsiapanau Date: 9/11/14 Time: 1:36 PM\n */\npublic class JobEntryHadoopJobExecutorTest {\n  private NamedClusterService namedClusterService;\n  private RuntimeTestActionService runtimeTestActionService;\n  private RuntimeTester runtimeTester;\n  private NamedClusterServiceLocator namedClusterServiceLocator;\n  private JobEntryHadoopJobExecutor jobExecutor;\n  private IMetaStore metaStore;\n  private NamedCluster namedCluster;\n  private Repository repository;\n  private ObjectId objectId;\n\n  @Before\n  public void setup() {\n    namedClusterService = mock( NamedClusterService.class );\n    runtimeTestActionService = mock( RuntimeTestActionService.class );\n    runtimeTester = mock( RuntimeTester.class );\n    namedClusterServiceLocator = mock( NamedClusterServiceLocator.class );\n  
  metaStore = mock( IMetaStore.class );\n    namedCluster = mock( NamedCluster.class );\n    repository = mock( Repository.class );\n    objectId = mock( ObjectId.class );\n\n    jobExecutor = new JobEntryHadoopJobExecutor( namedClusterService, runtimeTestActionService, runtimeTester,\n      namedClusterServiceLocator );\n  }\n\n  @Test\n  public void testResolveJarUrl() throws MalformedURLException {\n    String variableValue = \"http://jar.net/url\";\n    String testvar = \"testvar\";\n    jobExecutor.setVariable( testvar, variableValue );\n    assertEquals( new URL( variableValue ), jobExecutor.resolveJarUrl( \"${\" + testvar + \"}\" ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/java/org/pentaho/big/data/kettle/plugins/mapreduce/entry/pmr/JobEntryHadoopTransJobExecutorTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.entry.pmr;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.junit.Assert;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.KettleClientEnvironment;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.job.Job;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.repository.RepositoryDirectoryInterface;\nimport org.pentaho.di.trans.TransMeta;\n\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\nimport static org.mockito.Mockito.spy;\n\npublic class JobEntryHadoopTransJobExecutorTest {\n\n  VariableSpace space;\n  Repository repository;\n  ObjectId objectId;\n  RepositoryDirectoryInterface directoryInterface;\n\n  @Before\n  public void setup() throws KettleException {\n    KettleClientEnvironment.init();\n    space = mock( VariableSpace.class );\n    repository = mock( Repository.class );\n    objectId = mock( ObjectId.class );\n    directoryInterface = mock( RepositoryDirectoryInterface.class );\n  }\n\n  @Test\n  public void testLoadTransMetaLocal() throws Exception {\n    String testPath = \"src/test/resources/testTrans.ktr\";\n    FileObject fileObject = KettleVFS.getInstance( DefaultBowl.getInstance() ).getFileObject( testPath,\n 
     (VariableSpace) null );\n    when( space.environmentSubstitute( testPath ) ).thenReturn( testPath );\n    TransMeta transMeta = JobEntryHadoopTransJobExecutor.loadTransMeta( DefaultBowl.getInstance(), space, null,\n      testPath, objectId, null, null );\n    Assert.assertEquals( fileObject.getURI().toString(), transMeta.getFilename() );\n  }\n\n  @Test\n  public void testLoadTransMetaRepo() throws Exception {\n    String dir = \"/repo/path\";\n    String file = \"testTrans\";\n    when( space.environmentSubstitute( dir ) ).thenReturn( dir );\n    when( space.environmentSubstitute( file ) ).thenReturn( file );\n    when( repository.loadRepositoryDirectoryTree() ).thenReturn( directoryInterface );\n    when( directoryInterface.findDirectory( dir ) ).thenReturn( directoryInterface );\n    JobEntryHadoopTransJobExecutor.loadTransMeta( DefaultBowl.getInstance(), space, repository, dir + \"/\" + file, null,\n      null, null );\n    verify( repository ).loadTransformation( file, directoryInterface, null, true, null );\n  }\n\n  //for backward compatibility\n  @Test\n  public void testLoadTransMetaRepoReference() throws Exception {\n    String dir = \"/repo/path\";\n    String file = \"testTrans\";\n    when( space.environmentSubstitute( dir ) ).thenReturn( dir );\n    when( space.environmentSubstitute( file ) ).thenReturn( file );\n    when( repository.loadRepositoryDirectoryTree() ).thenReturn( directoryInterface );\n    when( directoryInterface.findDirectory( dir ) ).thenReturn( directoryInterface );\n    JobEntryHadoopTransJobExecutor.loadTransMeta( DefaultBowl.getInstance(), space, repository, dir + \"/\" + file, null,\n      null, null );\n    verify( repository ).loadTransformation( file, directoryInterface, null, true, null );\n  }\n\n  // for backward compatibility\n  @Test\n  public void testLoadTransMetaRepoDirFile() throws Exception {\n    String dir = \"/repo/path\";\n    String file = \"testTrans\";\n    when( space.environmentSubstitute( dir ) 
).thenReturn( dir );\n    when( space.environmentSubstitute( file ) ).thenReturn( file );\n    when( repository.loadRepositoryDirectoryTree() ).thenReturn( directoryInterface );\n    when( directoryInterface.findDirectory( dir ) ).thenReturn( directoryInterface );\n    JobEntryHadoopTransJobExecutor.loadTransMeta( DefaultBowl.getInstance(), space, repository, null, null, dir, file );\n    verify( repository ).loadTransformation( file, directoryInterface, null, true, null );\n  }\n\n  @Test\n  public void testProperVariableSpaceWhenLoadTransMetaFromRepo() throws Throwable {\n    JobEntryHadoopTransJobExecutor jobEntry = spy( new JobEntryHadoopTransJobExecutor( null, null, null, null ) );\n    String dir = \"repo/path\";\n    String file = \"testName\";\n    String dirVar = \"TestVariablePath\";\n    String fileVar = \"TestVariableName\";\n    jobEntry.setVariable( dirVar, dir );\n    jobEntry.setVariable( fileVar, file );\n    when( jobEntry.getParentJob() ).thenReturn( mock( Job.class ) );\n    JobMeta parentJobMeta = mock( JobMeta.class );\n    when( parentJobMeta.getBowl() ).thenReturn( DefaultBowl.getInstance() );\n    when( jobEntry.getParentJobMeta() ).thenReturn( parentJobMeta );\n    when( repository.loadRepositoryDirectoryTree() ).thenReturn( directoryInterface );\n    when( directoryInterface.findDirectory( \"/\" + dir ) ).thenReturn( directoryInterface );\n    JobEntryHadoopTransJobExecutor.loadTransMeta( DefaultBowl.getInstance(), jobEntry, repository, null, null,\n      \"/${\" + dirVar + \"}\", \"${\" + fileVar + \"}\" );\n    verify( repository ).loadTransformation( file, directoryInterface, null, true, null );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/java/org/pentaho/big/data/kettle/plugins/mapreduce/step/HadoopExitMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.step;\n\nimport static org.junit.Assert.*;\n\nimport org.junit.Test;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.step.exit.HadoopExitMeta;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.row.RowMeta;\nimport org.pentaho.di.core.row.ValueMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.i18n.BaseMessages;\n\npublic class HadoopExitMetaTest {\n\n  @Test\n  public void getFields() throws Throwable {\n    HadoopExitMeta meta = new HadoopExitMeta();\n    meta.setOutKeyFieldname( \"key\" );\n    meta.setOutValueFieldname( \"value\" );\n\n    RowMeta rowMeta = new RowMeta();\n    ValueMeta valueMeta0 = new ValueMeta( \"key\" );\n    ValueMeta valueMeta1 = new ValueMeta( \"value\" );\n    rowMeta.addValueMeta( valueMeta0 );\n    rowMeta.addValueMeta( valueMeta1 );\n\n    meta.getFields( DefaultBowl.getInstance(), rowMeta, null, null, null, null );\n\n    assertEquals( 2, rowMeta.getValueMetaList().size() );\n    ValueMetaInterface vmi = rowMeta.getValueMeta( 0 );\n    assertEquals( \"outKey\", vmi.getName() );\n    vmi = rowMeta.getValueMeta( 1 );\n    assertEquals( \"outValue\", vmi.getName() );\n  }\n\n  @Test\n  public void getFields_invalid_key() throws Throwable {\n    HadoopExitMeta meta = new HadoopExitMeta();\n    meta.setOutKeyFieldname( \"invalid\" );\n    meta.setOutValueFieldname( \"value\" );\n\n    RowMeta rowMeta = new RowMeta();\n    ValueMeta valueMeta0 = new ValueMeta( \"key\" );\n    ValueMeta valueMeta1 = new ValueMeta( 
\"value\" );\n    rowMeta.addValueMeta( valueMeta0 );\n    rowMeta.addValueMeta( valueMeta1 );\n\n    try {\n      meta.getFields( DefaultBowl.getInstance(), rowMeta, null, null, null, null );\n      fail( \"expected exception\" );\n    } catch ( Exception ex ) {\n      assertEquals(\n          BaseMessages.getString( HadoopExitMeta.class, \"Error.InvalidKeyField\", meta.getOutKeyFieldname() ).trim(),\n          ex.getMessage().trim() );\n    }\n\n    // Check that the meta was not modified\n    assertEquals( 2, rowMeta.getValueMetaList().size() );\n    ValueMetaInterface vmi = rowMeta.getValueMeta( 0 );\n    assertEquals( \"key\", vmi.getName() );\n    vmi = rowMeta.getValueMeta( 1 );\n    assertEquals( \"value\", vmi.getName() );\n  }\n\n  @Test\n  public void getFields_invalid_value() throws Throwable {\n    HadoopExitMeta meta = new HadoopExitMeta();\n    meta.setOutKeyFieldname( \"key\" );\n    meta.setOutValueFieldname( \"invalid\" );\n\n    RowMeta rowMeta = new RowMeta();\n    ValueMeta valueMeta0 = new ValueMeta( \"key\" );\n    ValueMeta valueMeta1 = new ValueMeta( \"value\" );\n    rowMeta.addValueMeta( valueMeta0 );\n    rowMeta.addValueMeta( valueMeta1 );\n\n    try {\n      meta.getFields( DefaultBowl.getInstance(), rowMeta, null, null, null, null );\n      fail( \"expected exception\" );\n    } catch ( Exception ex ) {\n      assertEquals(\n          BaseMessages.getString( HadoopExitMeta.class, \"Error.InvalidValueField\", meta.getOutValueFieldname() ).trim(),\n          ex.getMessage().trim() );\n    }\n\n    // Check that the meta was not modified\n    assertEquals( 2, rowMeta.getValueMetaList().size() );\n    ValueMetaInterface vmi = rowMeta.getValueMeta( 0 );\n    assertEquals( \"key\", vmi.getName() );\n    vmi = rowMeta.getValueMeta( 1 );\n    assertEquals( \"value\", vmi.getName() );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/java/org/pentaho/big/data/kettle/plugins/mapreduce/step/enter/HadoopEnterMetaInjectionTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.step.enter;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.di.core.injection.BaseMetadataInjectionTest;\n\npublic class HadoopEnterMetaInjectionTest extends BaseMetadataInjectionTest<HadoopEnterMeta> {\n  @Before\n  public void setup() throws Throwable {\n    setup( new HadoopEnterMeta() );\n  }\n\n  @Test\n  public void test() throws Exception {\n    check( \"KEY_TYPE\", new IntGetter() {\n      public int get() {\n        return meta.getType()[0];\n      }\n    } );\n    check( \"KEY_LENGTH\", new IntGetter() {\n      public int get() {\n        return meta.getLength()[0];\n      }\n    } );\n    check( \"KEY_PRECISION\", new IntGetter() {\n      public int get() {\n        return meta.getPrecision()[0];\n      }\n    } );\n    check( \"VALUE_TYPE\", new IntGetter() {\n      public int get() {\n        return meta.getType()[1];\n      }\n    } );\n    check( \"VALUE_LENGTH\", new IntGetter() {\n      public int get() {\n        return meta.getLength()[1];\n      }\n    } );\n    check( \"VALUE_PRECISION\", new IntGetter() {\n      public int get() {\n        return meta.getPrecision()[1];\n      }\n    } );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/java/org/pentaho/big/data/kettle/plugins/mapreduce/step/enter/HadoopEnterMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.step.enter;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.ui.step.enter.HadoopEnterDialog;\n\nimport static org.junit.Assert.assertEquals;\n\n/**\n * Created by bryan on 1/15/16.\n */\npublic class HadoopEnterMetaTest {\n\n  private HadoopEnterMeta hadoopEnterMeta;\n\n  @Before\n  public void setup() throws Throwable {\n    hadoopEnterMeta = new HadoopEnterMeta();\n  }\n\n  @Test\n  public void testConstructor() {\n    assertEquals( HadoopEnterMeta.KEY_FIELDNAME, hadoopEnterMeta.getFieldname()[ 0 ] );\n    assertEquals( HadoopEnterMeta.VALUE_FIELDNAME, hadoopEnterMeta.getFieldname()[ 1 ] );\n  }\n\n  @Test\n  public void testSetDefault() {\n    hadoopEnterMeta.setFieldname( new String[ 0 ] );\n    hadoopEnterMeta.setDefault();\n    testConstructor();\n  }\n\n  @Test\n  public void testGetDialogClassName() {\n    assertEquals( HadoopEnterDialog.class.getCanonicalName(), hadoopEnterMeta.getDialogClassName() );\n  }\n\n  @Test\n  public void testSetters() {\n    hadoopEnterMeta.setKeyType( 1 );\n    assertEquals( 1, hadoopEnterMeta.getType()[0] );\n    hadoopEnterMeta.setKeyLength( 2 );\n    assertEquals( 2, hadoopEnterMeta.getLength()[0] );\n    hadoopEnterMeta.setKeyPrecision( 3 );\n    assertEquals( 3, hadoopEnterMeta.getPrecision()[0] );\n    hadoopEnterMeta.setValueType( 1 );\n    assertEquals( 1, hadoopEnterMeta.getType()[1] );\n    hadoopEnterMeta.setValueLength( 2 );\n    assertEquals( 2, hadoopEnterMeta.getLength()[1] );\n    
hadoopEnterMeta.setValuePrecision( 3 );\n    assertEquals( 3, hadoopEnterMeta.getPrecision()[1] );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/java/org/pentaho/big/data/kettle/plugins/mapreduce/step/exit/HadoopExitDataTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.step.exit;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.variables.VariableSpace;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.*;\n\n/**\n * Created by bryan on 1/15/16.\n */\npublic class HadoopExitDataTest {\n  private HadoopExitData hadoopExitData;\n  private RowMetaInterface rowMetaInterface;\n\n  @Before\n  public void setup() {\n    rowMetaInterface = mock( RowMetaInterface.class );\n    hadoopExitData = new HadoopExitData();\n  }\n\n  @Test\n  public void testInitNullRowMeta() throws KettleException {\n    // This would npe if the rowmeta check wasn't there\n    hadoopExitData.init( DefaultBowl.getInstance(), null, null, null );\n  }\n\n  @Test\n  public void testInit() throws KettleException {\n    String name = \"hadoopExitMeta\";\n    String outKeyFieldName = \"outKeyFieldName\";\n    String outValueFieldName = \"outValueFieldName\";\n\n    RowMetaInterface outputRowMeta = mock( RowMetaInterface.class );\n    HadoopExitMeta hadoopExitMeta = mock( HadoopExitMeta.class );\n    when( hadoopExitMeta.getName() ).thenReturn( name );\n    when( hadoopExitMeta.getOutKeyFieldname() ).thenReturn( outKeyFieldName );\n    when( hadoopExitMeta.getOutValueFieldname() ).thenReturn( outValueFieldName );\n    VariableSpace space = mock( VariableSpace.class );\n\n    when( 
rowMetaInterface.clone() ).thenReturn( outputRowMeta );\n    when( rowMetaInterface.indexOfValue( outKeyFieldName ) ).thenReturn( 5 );\n    when( rowMetaInterface.indexOfValue( outValueFieldName ) ).thenReturn( 6 );\n\n    hadoopExitData.init( DefaultBowl.getInstance(), rowMetaInterface, hadoopExitMeta, space );\n\n    assertEquals( outputRowMeta, hadoopExitData.getOutputRowMeta() );\n    verify( hadoopExitMeta ).getFields( DefaultBowl.getInstance(), outputRowMeta, name, null, null, space );\n    assertEquals( 5, hadoopExitData.getInKeyOrdinal() );\n    assertEquals( 6, hadoopExitData.getInValueOrdinal() );\n  }\n\n  @Test\n  public void testGetOutKeyOrdinal() {\n    assertEquals( HadoopExitData.outKeyOrdinal, HadoopExitData.getOutKeyOrdinal() );\n  }\n\n  @Test\n  public void testGetOutValueOrdinal() {\n    assertEquals( HadoopExitData.outValueOrdinal, HadoopExitData.getOutValueOrdinal() );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/java/org/pentaho/big/data/kettle/plugins/mapreduce/step/exit/HadoopExitMetaInjectionTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.step.exit;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.di.core.injection.BaseMetadataInjectionTest;\n\npublic class HadoopExitMetaInjectionTest extends BaseMetadataInjectionTest<HadoopExitMeta> {\n  @Before\n  public void setup() throws Throwable {\n    setup( new HadoopExitMeta() );\n  }\n\n  @Test\n  public void test() throws Exception {\n    check( \"KEY_FIELD\", new StringGetter() {\n      public String get() {\n        return meta.getOutKeyFieldname();\n      }\n    } );\n    check( \"VALUE_FIELD\", new StringGetter() {\n      public String get() {\n        return meta.getOutValueFieldname();\n      }\n    } );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/java/org/pentaho/big/data/kettle/plugins/mapreduce/step/exit/HadoopExitMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.step.exit;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.ui.step.exit.HadoopExitDialog;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.CheckResultInterface;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleStepException;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.w3c.dom.Node;\nimport org.xml.sax.InputSource;\n\nimport javax.xml.parsers.DocumentBuilderFactory;\nimport java.io.StringReader;\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static org.junit.Assert.*;\nimport static org.mockito.Mockito.*;\n\n/**\n * Created by bryan on 1/15/16.\n */\npublic class HadoopExitMetaTest {\n  private HadoopExitMeta hadoopExitMeta;\n  private IMetaStore metaStore;\n\n  @Before\n  public void setup() throws Throwable {\n    metaStore = mock( IMetaStore.class );\n    hadoopExitMeta = new HadoopExitMeta();\n  }\n\n  @Test\n  public void testLoadSaveXml() throws Throwable {\n    String outKeyField = \"outKeyField\";\n    String outValField = 
\"outValField\";\n\n    hadoopExitMeta.setOutKeyFieldname( outKeyField );\n    hadoopExitMeta.setOutValueFieldname( outValField );\n\n    Node node = DocumentBuilderFactory.newInstance().newDocumentBuilder()\n      .parse( new InputSource( new StringReader( \"<hem>\" + hadoopExitMeta.getXML() + \"</hem>\" ) ) ).getFirstChild();\n    hadoopExitMeta = new HadoopExitMeta();\n    hadoopExitMeta.loadXML( node, new ArrayList<DatabaseMeta>(), metaStore );\n\n    assertEquals( outKeyField, hadoopExitMeta.getOutKeyFieldname() );\n    assertEquals( outValField, hadoopExitMeta.getOutValueFieldname() );\n  }\n\n  @Test\n  public void testSaveRep() throws KettleException {\n    ObjectId id_transformation = mock( ObjectId.class );\n    ObjectId id_step = mock( ObjectId.class );\n    Repository repository = mock( Repository.class );\n\n    String outKeyField = \"outKeyField\";\n    String outValField = \"outValField\";\n\n    hadoopExitMeta.setOutKeyFieldname( outKeyField );\n    hadoopExitMeta.setOutValueFieldname( outValField );\n\n    hadoopExitMeta.saveRep( repository, metaStore, id_transformation, id_step );\n\n    verify( repository ).saveStepAttribute( id_transformation, id_step, HadoopExitMeta.OUT_KEY_FIELDNAME, outKeyField );\n    verify( repository )\n      .saveStepAttribute( id_transformation, id_step, HadoopExitMeta.OUT_VALUE_FIELDNAME, outValField );\n  }\n\n  @Test\n  public void testReadRep() throws KettleException {\n    ObjectId id_step = mock( ObjectId.class );\n    Repository repository = mock( Repository.class );\n\n    String outKeyField = \"outKeyField\";\n    String outValField = \"outValField\";\n\n    when( repository.getStepAttributeString( id_step, HadoopExitMeta.OUT_KEY_FIELDNAME ) ).thenReturn( outKeyField );\n    when( repository.getStepAttributeString( id_step, HadoopExitMeta.OUT_VALUE_FIELDNAME ) ).thenReturn( outValField );\n\n    hadoopExitMeta.readRep( repository, metaStore, id_step, new ArrayList<DatabaseMeta>() );\n\n    assertEquals( 
outKeyField, hadoopExitMeta.getOutKeyFieldname() );\n    assertEquals( outValField, hadoopExitMeta.getOutValueFieldname() );\n\n  }\n\n  @Test\n  public void testSetDefault() {\n    String outKeyField = \"outKeyField\";\n    String outValField = \"outValField\";\n\n    hadoopExitMeta.setOutKeyFieldname( outKeyField );\n    hadoopExitMeta.setOutValueFieldname( outValField );\n    hadoopExitMeta.setDefault();\n\n    assertNull( hadoopExitMeta.getOutKeyFieldname() );\n    assertNull( hadoopExitMeta.getOutValueFieldname() );\n  }\n\n  @Test\n  public void testGetFields() throws KettleStepException {\n    String outKeyField = \"outKeyField\";\n    String outValField = \"outValField\";\n\n    hadoopExitMeta.setOutKeyFieldname( outKeyField );\n    hadoopExitMeta.setOutValueFieldname( outValField );\n\n    RowMetaInterface rowMetaInterface = mock( RowMetaInterface.class );\n    ValueMetaInterface key = mock( ValueMetaInterface.class );\n    ValueMetaInterface keyClone = mock( ValueMetaInterface.class );\n    ValueMetaInterface value = mock( ValueMetaInterface.class );\n    ValueMetaInterface valueClone = mock( ValueMetaInterface.class );\n\n    when( rowMetaInterface.searchValueMeta( outKeyField ) ).thenReturn( key );\n    when( rowMetaInterface.searchValueMeta( outValField ) ).thenReturn( value );\n    when( key.clone() ).thenReturn( keyClone );\n    when( value.clone() ).thenReturn( valueClone );\n\n    hadoopExitMeta.getFields( DefaultBowl.getInstance(), rowMetaInterface, null, null, null, null );\n\n    verify( keyClone ).setName( HadoopExitMeta.OUT_KEY );\n    verify( valueClone ).setName( HadoopExitMeta.OUT_VALUE );\n    verify( rowMetaInterface ).clear();\n    verify( rowMetaInterface ).addValueMeta( keyClone );\n    verify( rowMetaInterface ).addValueMeta( valueClone );\n  }\n\n  @Test( expected = KettleStepException.class )\n  public void testGetFieldsNullKey() throws KettleStepException {\n    String outKeyField = \"outKeyField\";\n    String outValField = 
\"outValField\";\n\n    hadoopExitMeta.setOutKeyFieldname( outKeyField );\n    hadoopExitMeta.setOutValueFieldname( outValField );\n\n    RowMetaInterface rowMetaInterface = mock( RowMetaInterface.class );\n\n    try {\n      hadoopExitMeta.getFields( DefaultBowl.getInstance(), rowMetaInterface, null, null, null, null );\n    } catch ( KettleStepException e ) {\n      assertEquals( BaseMessages.getString( HadoopExitMeta.PKG, HadoopExitMeta.ERROR_INVALID_KEY_FIELD, outKeyField ),\n        e.getMessage().trim() );\n      throw e;\n    }\n  }\n\n  @Test( expected = KettleStepException.class )\n  public void testGetFieldsNullValue() throws KettleStepException {\n    String outKeyField = \"outKeyField\";\n    String outValField = \"outValField\";\n\n    hadoopExitMeta.setOutKeyFieldname( outKeyField );\n    hadoopExitMeta.setOutValueFieldname( outValField );\n\n    RowMetaInterface rowMetaInterface = mock( RowMetaInterface.class );\n    ValueMetaInterface key = mock( ValueMetaInterface.class );\n    when( rowMetaInterface.searchValueMeta( outKeyField ) ).thenReturn( key );\n\n    try {\n      hadoopExitMeta.getFields( DefaultBowl.getInstance(), rowMetaInterface, null, null, null, null );\n    } catch ( KettleStepException e ) {\n      assertEquals( BaseMessages.getString( HadoopExitMeta.PKG, HadoopExitMeta.ERROR_INVALID_VALUE_FIELD, outValField ),\n        e.getMessage().trim() );\n      throw e;\n    }\n  }\n\n  private void assertSingleRemark( List<CheckResultInterface> remarks, int type, String text, StepMeta stepinfo ) {\n    assertEquals( 1, remarks.size() );\n    CheckResultInterface checkResultInterface = remarks.get( 0 );\n    assertEquals( type, checkResultInterface.getType() );\n    assertEquals( text, checkResultInterface.getText() );\n    assertEquals( stepinfo, checkResultInterface.getSourceInfo() );\n  }\n\n  @Test\n  public void testCheckNoDataStream() {\n    List<CheckResultInterface> remarks = new ArrayList<>();\n    StepMeta stepinfo = mock( 
StepMeta.class );\n    hadoopExitMeta.check( remarks, null, stepinfo, null, null, null, null );\n    assertSingleRemark( remarks, CheckResultInterface.TYPE_RESULT_ERROR,\n      BaseMessages.getString( HadoopExitMeta.PKG, HadoopExitMeta.HADOOP_EXIT_META_CHECK_RESULT_NO_DATA_STREAM ),\n      stepinfo );\n  }\n\n  @Test\n  public void testCheckEmptyDataStream() {\n    List<CheckResultInterface> remarks = new ArrayList<>();\n    StepMeta stepinfo = mock( StepMeta.class );\n    RowMetaInterface prev = mock( RowMetaInterface.class );\n    when( prev.size() ).thenReturn( 0 );\n    hadoopExitMeta.check( remarks, null, stepinfo, prev, null, null, null );\n    assertSingleRemark( remarks, CheckResultInterface.TYPE_RESULT_ERROR,\n      BaseMessages.getString( HadoopExitMeta.PKG, HadoopExitMeta.HADOOP_EXIT_META_CHECK_RESULT_NO_DATA_STREAM ),\n      stepinfo );\n  }\n\n  @Test\n  public void testCheckNoSpecifiedFieldsNullOutKeyFieldName() {\n    List<CheckResultInterface> remarks = new ArrayList<>();\n    StepMeta stepinfo = mock( StepMeta.class );\n    when( stepinfo.getStepMetaInterface() ).thenReturn( hadoopExitMeta );\n    RowMetaInterface prev = mock( RowMetaInterface.class );\n    when( prev.size() ).thenReturn( 1 );\n    when( prev.getFieldNames() ).thenReturn( new String[ 0 ] );\n    hadoopExitMeta.check( remarks, null, stepinfo, prev, null, null, null );\n    assertSingleRemark( remarks, CheckResultInterface.TYPE_RESULT_ERROR,\n      BaseMessages.getString( HadoopExitMeta.PKG, HadoopExitMeta.HADOOP_EXIT_META_CHECK_RESULT_NO_SPECIFIED_FIELDS,\n        prev.size() + \"\" ), stepinfo );\n  }\n\n  @Test\n  public void testCheckNoSpecifiedFieldsNullOutValueFieldName() {\n    List<CheckResultInterface> remarks = new ArrayList<>();\n    StepMeta stepinfo = mock( StepMeta.class );\n    String outKeyField = \"outKeyField\";\n    hadoopExitMeta.setOutKeyFieldname( outKeyField );\n    when( stepinfo.getStepMetaInterface() ).thenReturn( hadoopExitMeta );\n    RowMetaInterface prev 
= mock( RowMetaInterface.class );\n    when( prev.size() ).thenReturn( 1 );\n    when( prev.getFieldNames() ).thenReturn( new String[ 0 ] );\n    hadoopExitMeta.check( remarks, null, stepinfo, prev, null, null, null );\n    assertSingleRemark( remarks, CheckResultInterface.TYPE_RESULT_ERROR,\n      BaseMessages.getString( HadoopExitMeta.PKG, HadoopExitMeta.HADOOP_EXIT_META_CHECK_RESULT_NO_SPECIFIED_FIELDS,\n        prev.size() + \"\" ), stepinfo );\n  }\n\n  @Test\n  public void testCheckNotReceivingSpecifiedKeyFields() {\n    List<CheckResultInterface> remarks = new ArrayList<>();\n    StepMeta stepinfo = mock( StepMeta.class );\n    String outKeyField = \"outKeyField\";\n    String outValField = \"outValField\";\n    hadoopExitMeta.setOutKeyFieldname( outKeyField );\n    hadoopExitMeta.setOutValueFieldname( outValField );\n    when( stepinfo.getStepMetaInterface() ).thenReturn( hadoopExitMeta );\n    RowMetaInterface prev = mock( RowMetaInterface.class );\n    when( prev.size() ).thenReturn( 1 );\n    when( prev.getFieldNames() ).thenReturn( new String[ 0 ] );\n    hadoopExitMeta.check( remarks, null, stepinfo, prev, null, null, null );\n    assertSingleRemark( remarks, CheckResultInterface.TYPE_RESULT_ERROR,\n      BaseMessages\n        .getString( HadoopExitMeta.PKG, HadoopExitMeta.HADOOP_EXIT_META_CHECK_RESULT_NOT_RECEVING_SPECIFIED_FIELDS,\n          prev.size() + \"\" ), stepinfo );\n  }\n\n  @Test\n  public void testCheckNotReceivingSpecifiedValFields() {\n    List<CheckResultInterface> remarks = new ArrayList<>();\n    StepMeta stepinfo = mock( StepMeta.class );\n    String outKeyField = \"outKeyField\";\n    String outValField = \"outValField\";\n    hadoopExitMeta.setOutKeyFieldname( outKeyField );\n    hadoopExitMeta.setOutValueFieldname( outValField );\n    when( stepinfo.getStepMetaInterface() ).thenReturn( hadoopExitMeta );\n    RowMetaInterface prev = mock( RowMetaInterface.class );\n    when( prev.size() ).thenReturn( 1 );\n    when( 
prev.getFieldNames() ).thenReturn( new String[] { outKeyField } );\n    hadoopExitMeta.check( remarks, null, stepinfo, prev, null, null, null );\n    assertSingleRemark( remarks, CheckResultInterface.TYPE_RESULT_ERROR,\n      BaseMessages\n        .getString( HadoopExitMeta.PKG, HadoopExitMeta.HADOOP_EXIT_META_CHECK_RESULT_NOT_RECEVING_SPECIFIED_FIELDS,\n          prev.size() + \"\" ), stepinfo );\n  }\n\n  @Test\n  public void testCheckOk() {\n    List<CheckResultInterface> remarks = new ArrayList<>();\n    StepMeta stepinfo = mock( StepMeta.class );\n    String outKeyField = \"outKeyField\";\n    String outValField = \"outValField\";\n    hadoopExitMeta.setOutKeyFieldname( outKeyField );\n    hadoopExitMeta.setOutValueFieldname( outValField );\n    when( stepinfo.getStepMetaInterface() ).thenReturn( hadoopExitMeta );\n    RowMetaInterface prev = mock( RowMetaInterface.class );\n    when( prev.size() ).thenReturn( 1 );\n    when( prev.getFieldNames() ).thenReturn( new String[] { outKeyField, outValField } );\n    hadoopExitMeta.check( remarks, null, stepinfo, prev, null, null, null );\n    assertSingleRemark( remarks, CheckResultInterface.TYPE_RESULT_OK,\n      BaseMessages\n        .getString( HadoopExitMeta.PKG, HadoopExitMeta.HADOOP_EXIT_META_CHECK_RESULT_STEP_RECEVING_DATA,\n          prev.size() + \"\" ), stepinfo );\n  }\n\n  @Test\n  public void testGetStep() {\n    StepMeta stepMeta = mock( StepMeta.class );\n    String testName = \"testName\";\n    when( stepMeta.getName() ).thenReturn( testName );\n    TransMeta transMeta = mock( TransMeta.class );\n    when( transMeta.findStep( testName ) ).thenReturn( stepMeta );\n    assertTrue(\n      hadoopExitMeta.getStep( stepMeta, null, 0, transMeta, mock( Trans.class ) ) instanceof HadoopExit );\n  }\n\n  @Test\n  public void testClone() {\n    assertTrue( hadoopExitMeta.clone() instanceof HadoopExitMeta );\n  }\n\n  @Test\n  public void testGetStepData() {\n    assertTrue( hadoopExitMeta.getStepData() 
instanceof HadoopExitData );\n  }\n\n  @Test\n  public void testGetDialogClassName() {\n    assertEquals( HadoopExitDialog.class.getCanonicalName(), hadoopExitMeta.getDialogClassName() );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/java/org/pentaho/big/data/kettle/plugins/mapreduce/step/exit/HadoopExitTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.step.exit;\n\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.di.core.RowSet;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.logging.LoggingObjectInterface;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.trans.steps.mock.StepMockHelper;\n\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.AdditionalMatchers.aryEq;\nimport static org.mockito.Mockito.any;\nimport static org.mockito.Mockito.eq;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 1/18/16.\n */\npublic class HadoopExitTest {\n  private StepMockHelper<HadoopExitMeta, HadoopExitData> stepMockHelper;\n  private HadoopExit hadoopExit;\n  private LogChannelInterface logChannelInterface;\n\n  @Before\n  public void setup() {\n    stepMockHelper = new StepMockHelper<>( \"hadoopExit\", HadoopExitMeta.class, HadoopExitData.class );\n    when( stepMockHelper.logChannelInterfaceFactory.create( any(), any( LoggingObjectInterface.class ) ) )\n      .thenReturn( stepMockHelper.logChannelInterface );\n    when( stepMockHelper.trans.isRunning() ).thenReturn( true );\n    hadoopExit =\n      new HadoopExit( stepMockHelper.stepMeta, stepMockHelper.stepDataInterface, 0, 
stepMockHelper.transMeta,\n        stepMockHelper.trans );\n    hadoopExit.init( stepMockHelper.initStepMetaInterface, stepMockHelper.initStepDataInterface );\n  }\n\n  @After\n  public void teardown() {\n    stepMockHelper.cleanUp();\n  }\n\n  @Test( timeout = 5000 )\n  public void testProcessRow() throws KettleException {\n    Object[] row1 = new Object[] { 0, 1 };\n    Object[] row2 = new Object[] { 1, 0 };\n    when( stepMockHelper.processRowsStepDataInterface.getInValueOrdinal() ).thenReturn( 1 );\n    RowSet mockInputRowSet = stepMockHelper.getMockInputRowSet( row1, row2 );\n    hadoopExit.addRowSetToInputRowSets( mockInputRowSet );\n    RowSet outputRowSet = mock( RowSet.class );\n    hadoopExit.addRowSetToOutputRowSets( outputRowSet );\n    RowMetaInterface rowMetaInterface = mock( RowMetaInterface.class );\n    when( rowMetaInterface.clone() ).thenReturn( rowMetaInterface );\n    when( stepMockHelper.processRowsStepDataInterface.getOutputRowMeta() ).thenReturn( rowMetaInterface );\n    when( outputRowSet.putRow( eq( rowMetaInterface ), aryEq( row1 ) ) ).thenReturn( true );\n    when( outputRowSet.putRow( eq( rowMetaInterface ), aryEq( row2 ) ) ).thenReturn( true );\n    assertTrue( hadoopExit\n      .processRow( stepMockHelper.processRowsStepMetaInterface, stepMockHelper.processRowsStepDataInterface ) );\n    assertTrue( hadoopExit\n      .processRow( stepMockHelper.processRowsStepMetaInterface, stepMockHelper.processRowsStepDataInterface ) );\n    assertFalse( hadoopExit\n      .processRow( stepMockHelper.processRowsStepMetaInterface, stepMockHelper.processRowsStepDataInterface ) );\n    verify( stepMockHelper.processRowsStepDataInterface )\n      .init( any( Bowl.class), any( RowMetaInterface.class ), eq( stepMockHelper.processRowsStepMetaInterface ),\n             eq( hadoopExit ) );\n    verify( outputRowSet ).putRow( eq( rowMetaInterface ), aryEq( row1 ) );\n    verify( outputRowSet ).putRow( eq( rowMetaInterface ), aryEq( row2 ) );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/java/org/pentaho/big/data/kettle/plugins/mapreduce/ui/entry/pmr/JobEntryHadoopTransJobExecutorControllerTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.mapreduce.ui.entry.pmr;\n\nimport org.eclipse.swt.widgets.Shell;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.InOrder;\nimport org.pentaho.big.data.kettle.plugins.mapreduce.entry.pmr.JobEntryHadoopTransJobExecutor;\nimport org.pentaho.big.data.plugins.common.ui.HadoopClusterDelegateImpl;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.containers.XulDialog;\nimport org.pentaho.ui.xul.dom.Document;\n\nimport java.util.Arrays;\nimport java.util.List;\n\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.Mockito.inOrder;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.never;\nimport static org.mockito.Mockito.spy;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * @author Tatsiana_Kasiankova\n *\n */\npublic class JobEntryHadoopTransJobExecutorControllerTest {\n\n  private static final String EDITED_NAMED_CLUSTER = \"Edited Named Cluster\";\n  private static final String SELECTED_NAMED_CLUSTER = \"Selected Named Cluster\";\n  private static final String A_NEW_NAMED_CLUSTER = \"A New Named Cluster\";\n\n  private JobEntryHadoopTransJobExecutorController 
testController;\n  private XulDomContainer containerMock = mock( XulDomContainer.class );\n  private Document rootDocMock = mock( Document.class );\n  private XulDialog xulDialogMock = mock( XulDialog.class );\n  private HadoopClusterDelegateImpl ncDelegateMock = mock( HadoopClusterDelegateImpl.class );\n  private NamedClusterService namedClusterServiceMock = mock( NamedClusterService.class );\n  private NamedCluster selectedNamedClusterMock = mock( NamedCluster.class );\n  private NamedCluster newNamedClusterMock = mock( NamedCluster.class );\n  private NamedCluster editedNamedClusterMock = mock( NamedCluster.class );\n  private IMetaStore metaStoreMock = mock( IMetaStore.class );\n  private JobMeta jobMetaMock = mock( JobMeta.class );\n  private JobEntryHadoopTransJobExecutor jobEntryHadoopTransJobExecutor = mock( JobEntryHadoopTransJobExecutor.class );\n\n  private List<NamedCluster> ncList;\n\n  @Before\n  public void setUp() throws Throwable {\n    testController = new JobEntryHadoopTransJobExecutorController( ncDelegateMock, namedClusterServiceMock );\n    testController.setXulDomContainer( containerMock );\n    testController.setSelectedNamedCluster( selectedNamedClusterMock );\n\n    when( ncDelegateMock.newNamedCluster( any(), any(), any() ) ).thenReturn( A_NEW_NAMED_CLUSTER );\n    when( ncDelegateMock.editNamedCluster( any(), any(), any() ) ).thenReturn( EDITED_NAMED_CLUSTER );\n    when( selectedNamedClusterMock.getName() ).thenReturn( SELECTED_NAMED_CLUSTER );\n    when( newNamedClusterMock.getName() ).thenReturn( A_NEW_NAMED_CLUSTER );\n    when( editedNamedClusterMock.getName() ).thenReturn( EDITED_NAMED_CLUSTER );\n\n    when( containerMock.getDocumentRoot() ).thenReturn( rootDocMock );\n    when( rootDocMock.getElementById( \"job-entry-dialog\" ) ).thenReturn( xulDialogMock );\n\n    testController.setJobMeta( jobMetaMock );\n    when( jobMetaMock.getMetaStore() ).thenReturn( metaStoreMock );\n    // wrap the controller in a spy so interactions can be verified\n    
testController = spy( testController );\n\n    ncList =\n        Arrays.asList( new NamedCluster[] { selectedNamedClusterMock, newNamedClusterMock, editedNamedClusterMock } );\n    when( namedClusterServiceMock.list( metaStoreMock ) ).thenReturn( ncList );\n  }\n\n  @Test\n  public void testEditNamedCluster_NamedClusterIsSelected() throws MetaStoreException {\n    testController.editNamedCluster();\n\n    verify( ncDelegateMock ).editNamedCluster( any(), any( NamedCluster.class ), any() );\n    // verify the times of call\n    verify( testController ).namedClustersChanged();\n    verify( testController ).selectedNamedClusterChanged( SELECTED_NAMED_CLUSTER, EDITED_NAMED_CLUSTER );\n    // Verify order of execution\n    InOrder order = inOrder( testController );\n    order.verify( testController ).namedClustersChanged();\n    order.verify( testController ).selectedNamedClusterChanged( SELECTED_NAMED_CLUSTER, EDITED_NAMED_CLUSTER );\n  }\n\n  @Test\n  public void testEditNamedCluster_NamedClusterIsNOTSelected() throws MetaStoreException {\n    testController.setSelectedNamedCluster( null );\n    testController.editNamedCluster();\n\n    verify( ncDelegateMock, never() ).editNamedCluster( any( IMetaStore.class ), any( NamedCluster.class ), any(\n        Shell.class ) );\n    // verify the times of call\n    verify( testController, never() ).namedClustersChanged();\n    verify( testController, never() ).selectedNamedClusterChanged( SELECTED_NAMED_CLUSTER, EDITED_NAMED_CLUSTER );\n  }\n\n  @Test\n  public void testNewNamedCluster_NamedClusterIsSelected() throws MetaStoreException {\n    testController.newNamedCluster();\n\n    verify( ncDelegateMock ).newNamedCluster( any( VariableSpace.class ), any(), any() );\n    // verify the times of call\n    verify( testController ).namedClustersChanged();\n    verify( testController ).selectedNamedClusterChanged( SELECTED_NAMED_CLUSTER, A_NEW_NAMED_CLUSTER );\n    // Verify order of execution\n    InOrder order = inOrder( 
testController );\n    order.verify( testController ).namedClustersChanged();\n    order.verify( testController ).selectedNamedClusterChanged( SELECTED_NAMED_CLUSTER, A_NEW_NAMED_CLUSTER );\n  }\n\n  @Test\n  public void testNewNamedCluster_NamedClusterIsNotSelected() throws MetaStoreException {\n    testController.setSelectedNamedCluster( null );\n    testController.newNamedCluster();\n\n    verify( ncDelegateMock ).newNamedCluster( any( VariableSpace.class ), any(), any() );\n    // verify the times of call\n    verify( testController ).namedClustersChanged();\n    verify( testController ).selectedNamedClusterChanged( null, A_NEW_NAMED_CLUSTER );\n    // Verify order of execution\n    InOrder order = inOrder( testController );\n    order.verify( testController ).namedClustersChanged();\n    order.verify( testController ).selectedNamedClusterChanged( null, A_NEW_NAMED_CLUSTER );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/mapreduce/core/src/test/resources/testTrans.ktr",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<transformation>\n  <info>\n    <name>testTrans</name>\n    <description />\n    <extended_description />\n    <trans_version />\n    <trans_type>Normal</trans_type>\n    <directory>/</directory>\n    <parameters>\n    </parameters>\n    <maxdate>\n      <connection />\n      <table />\n      <field />\n      <offset>0.0</offset>\n      <maxdiff>0.0</maxdiff>\n    </maxdate>\n    <size_rowset>10000</size_rowset>\n    <sleep_time_empty>50</sleep_time_empty>\n    <sleep_time_full>50</sleep_time_full>\n    <unique_connections>N</unique_connections>\n    <feedback_shown>Y</feedback_shown>\n    <feedback_size>50000</feedback_size>\n    <using_thread_priorities>Y</using_thread_priorities>\n    <shared_objects_file />\n    <capture_step_performance>N</capture_step_performance>\n    <step_performance_capturing_delay>1000</step_performance_capturing_delay>\n    <step_performance_capturing_size_limit>100</step_performance_capturing_size_limit>\n    <dependencies>\n    </dependencies>\n    <partitionschemas>\n    </partitionschemas>\n    <slaveservers>\n    </slaveservers>\n    <clusterschemas>\n    </clusterschemas>\n    <created_user>-</created_user>\n    <created_date>2017/03/14 19:56:21.118</created_date>\n    <modified_user>-</modified_user>\n    <modified_date>2017/03/14 19:56:21.118</modified_date>\n    <key_for_session_key />\n    <is_key_private>N</is_key_private>\n  </info>\n  <notepads>\n  </notepads>\n  <order>\n  </order>\n  <step_error_handling>\n  </step_error_handling>\n  <slave-step-copy-partition-distribution>\n  </slave-step-copy-partition-distribution>\n  <slave_transformation>N</slave_transformation>\n</transformation>\n"
  },
  {
    "path": "kettle-plugins/mapreduce/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pentaho-big-data-kettle-plugins-mapreduce</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n\n  <modules>\n    <module>assemblies</module>\n    <module>core</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/oozie/assemblies/plugin/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>oozie-assemblies</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pdi-oozie-plugin</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Oozie Plugin Distribution</name>\n\n  <properties>\n    <resources.directory>${project.basedir}/src/main/resources</resources.directory>\n    <assembly.dir>${project.build.directory}/assembly</assembly.dir>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-oozie-core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/oozie/assemblies/plugin/src/assembly/assembly.xml",
    "content": "<assembly xmlns=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3\"\n          xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n          xsi:schemaLocation=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd\">\n  <id>zip</id>\n  <formats>\n    <format>zip</format>\n  </formats>\n\n  <baseDirectory></baseDirectory>\n\n  <fileSets>\n    <fileSet>\n      <directory>${resources.directory}</directory>\n      <outputDirectory>.</outputDirectory>\n      <filtered>true</filtered>\n    </fileSet>\n\n    <!-- the staging dir -->\n    <fileSet>\n      <directory>${assembly.dir}</directory>\n      <outputDirectory>.</outputDirectory>\n    </fileSet>\n  </fileSets>\n\n  <dependencySets>\n    <dependencySet>\n      <outputDirectory>.</outputDirectory>\n      <includes>\n        <include>pentaho:pdi-oozie-core:jar</include>\n      </includes>\n      <useProjectArtifact>false</useProjectArtifact>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <outputDirectory>.</outputDirectory>\n      <useTransitiveDependencies>false</useTransitiveDependencies>\n      <useProjectArtifact>false</useProjectArtifact>\n      <includes>\n        <include>pentaho:pdi-oozie-core:jar</include>\n      </includes>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <useProjectArtifact>false</useProjectArtifact>\n      <useTransitiveDependencies>false</useTransitiveDependencies>\n      <outputDirectory>lib</outputDirectory>\n      <excludes>\n        <exclude>pentaho:pdi-oozie-core:*</exclude>\n      </excludes>\n    </dependencySet>\n  </dependencySets>\n</assembly>"
  },
  {
    "path": "kettle-plugins/oozie/assemblies/plugin/src/main/resources/version.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<version branch='TRUNK'>${project.version}</version>"
  },
  {
    "path": "kettle-plugins/oozie/assemblies/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-oozie</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>oozie-assemblies</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Oozie Plugin Assemblies</name>\n\n  <modules>\n    <module>plugin</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/oozie/core/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-oozie</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pdi-oozie-core</artifactId>\n  <name>PDI Oozie Core</name>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n  </properties>\n\n  <!-- VERIFY THESE IMPORTS THAT WERE IN THE BUILD SECTION WHEN THE PLUGIN WAS OSGI. ARE THEY NEEDED?\n  <Import-Package>org.eclipse.swt*;resolution:=optional,org.pentaho.di.ui.xul*;resolution:=optional,org.pentaho.ui.xul*;resolution:=optional,org.pentaho.di.osgi,org.pentaho.di.core.plugins,org.pentaho.hadoop.shim.api.cluster,*</Import-Package>\n  -->\n  <build>\n    <resources>\n      <resource>\n        <directory>src/main/resources</directory>\n        <filtering>false</filtering>\n      </resource>\n      <resource>\n        <directory>src/main/resources-filtered</directory>\n        <filtering>true</filtering>\n      </resource>\n    </resources>\n  </build>\n\n  <!--\n  Oozie depends on \"pentaho-big-data-kettle-plugins-common-job\". 
For now it has been placed in the lib folder.\n  -->\n\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-common-ui</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-ui-swt</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-cluster</artifactId>\n      <version>${project.version}</version>\n      <scope>compile</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      
<artifactId>pentaho-hadoop-shims-common-services-api</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-common-job</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n        <groupId>org.apache.oozie</groupId>\n        <artifactId>oozie-client</artifactId>\n        <version>4.3.0</version>\n        <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.hadoop.shims</groupId>\n      <artifactId>pentaho-hadoop-shims-common-base</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy-core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/oozie/core/src/main/java/org/pentaho/big/data/kettle/plugins/oozie/OozieJobExecutorConfig.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.oozie;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.kettle.plugins.job.BlockableJobConfig;\nimport org.pentaho.big.data.kettle.plugins.job.JobEntryMode;\nimport org.pentaho.big.data.kettle.plugins.job.PropertyEntry;\nimport org.pentaho.ui.xul.XulEventSource;\nimport org.pentaho.ui.xul.stereotype.Bindable;\nimport org.pentaho.ui.xul.util.AbstractModelList;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\n/**\n * Model for the Oozie Job Executor\n *\n * User: RFellows Date: 6/4/12\n */\npublic class OozieJobExecutorConfig extends BlockableJobConfig implements XulEventSource, Cloneable {\n\n  public static final String OOZIE_WORKFLOW = \"oozieWorkflow\";\n  public static final String OOZIE_URL = \"oozieUrl\";\n  public static final String OOZIE_WORKFLOW_CONFIG = \"oozieWorkflowConfig\";\n  public static final String OOZIE_WORKFLOW_PROPERTIES = \"oozieWorkflowProperties\";\n  public static final String MODE = \"mode\";\n\n  private transient AbstractModelList<NamedCluster> namedClusters;\n  private transient NamedCluster namedCluster = null; // selected\n  private String clusterName; // saved (String)\n  private String oozieUrl = null;\n  private String oozieWorkflowConfig = null;\n  private String oozieWorkflow = null;\n\n  // coded to implementation not Interface for serialization purposes\n  private ArrayList<PropertyEntry> workflowProperties = null;\n  private String mode = null;\n\n  public OozieJobExecutorConfig() {\n    namedClusters = new 
AbstractModelList<>(  );\n  }\n\n  @Bindable\n  public String getOozieUrl() {\n    return oozieUrl;\n  }\n\n  @Bindable\n  public void setOozieUrl( String oozieUrl ) {\n    String prev = this.oozieUrl;\n    this.oozieUrl = oozieUrl;\n    pcs.firePropertyChange( OOZIE_URL, prev, this.oozieUrl );\n  }\n\n  @Bindable\n  public String getClusterName() {\n    return clusterName;\n  }\n\n  @Bindable\n  public void setClusterName( String clusterName ) {\n    this.clusterName = clusterName;\n  }\n\n  @Bindable\n  public NamedCluster getNamedCluster() {\n    return namedCluster;\n  }\n\n  @Bindable\n  public void setNamedCluster( NamedCluster namedCluster ) {\n    this.namedCluster = namedCluster;\n    if ( namedCluster != null ) {\n      this.clusterName = namedCluster.getName();\n      this.oozieUrl = namedCluster.getOozieUrl();\n    }\n  }\n\n  @Bindable\n  public List<NamedCluster> getNamedClusters() {\n    return namedClusters;\n  }\n\n  @Bindable\n  public void setNamedClusters( AbstractModelList<NamedCluster> namedClusters ) {\n    this.namedClusters = namedClusters;\n  }\n\n  @Bindable\n  public String getOozieWorkflowConfig() {\n    return oozieWorkflowConfig;\n  }\n\n  @Bindable\n  public void setOozieWorkflowConfig( String oozieWorkflowConfig ) {\n    String prev = this.oozieWorkflowConfig;\n    this.oozieWorkflowConfig = oozieWorkflowConfig;\n    pcs.firePropertyChange( OOZIE_WORKFLOW_CONFIG, prev, this.oozieWorkflowConfig );\n  }\n\n  @Bindable\n  public String getOozieWorkflow() {\n    return this.oozieWorkflow;\n  }\n\n  @Bindable\n  public void setOozieWorkflow( String oozieWorkflow ) {\n    String prev = this.oozieWorkflow;\n    this.oozieWorkflow = oozieWorkflow;\n    pcs.firePropertyChange( OOZIE_WORKFLOW, prev, oozieWorkflow );\n  }\n\n  /**\n   * Workflow properties configured in the advanced mode of the Oozie Job Executor\n   *\n   * @return\n   */\n  public List<PropertyEntry> getWorkflowProperties() {\n    if ( workflowProperties == null ) {\n      
workflowProperties = new ArrayList<PropertyEntry>();\n    }\n    return workflowProperties;\n  }\n\n  public void setWorkflowProperties( List<PropertyEntry> workflowProperties ) {\n    ArrayList<PropertyEntry> prev = this.workflowProperties;\n    if ( workflowProperties instanceof ArrayList ) {\n      this.workflowProperties = (ArrayList<PropertyEntry>) workflowProperties;\n    } else {\n      this.workflowProperties = new ArrayList<PropertyEntry>( workflowProperties );\n    }\n    pcs.firePropertyChange( OOZIE_WORKFLOW_PROPERTIES, prev, workflowProperties );\n  }\n\n  public String getMode() {\n    return mode;\n  }\n\n  public JobEntryMode getModeAsEnum() {\n    try {\n      return JobEntryMode.valueOf( getMode() );\n    } catch ( Exception ex ) {\n      // Not a valid ui mode, return the default\n      return JobEntryMode.QUICK_SETUP;\n    }\n  }\n\n  /**\n   * Sets the mode based on the enum value\n   *\n   * @param mode\n   */\n  public void setMode( JobEntryMode mode ) {\n    setMode( mode.name() );\n  }\n\n  public void setMode( String mode ) {\n    String old = this.mode;\n    this.mode = mode;\n    pcs.firePropertyChange( MODE, old, this.mode );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/oozie/core/src/main/java/org/pentaho/big/data/kettle/plugins/oozie/OozieJobExecutorJobEntry.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.oozie;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.apache.commons.lang.StringUtils;\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.big.data.kettle.plugins.job.AbstractJobEntry;\nimport org.pentaho.big.data.kettle.plugins.job.JobEntryMode;\nimport org.pentaho.big.data.kettle.plugins.job.JobEntryUtils;\nimport org.pentaho.big.data.kettle.plugins.job.PropertyEntry;\nimport org.pentaho.di.core.Result;\nimport org.pentaho.di.core.annotations.JobEntry;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.hadoop.shim.api.HadoopClientServices;\nimport org.pentaho.hadoop.shim.api.HadoopClientServicesException;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.oozie.OozieJobInfo;\nimport org.pentaho.hadoop.shim.api.oozie.OozieServiceException;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport 
org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.runtime.test.action.impl.RuntimeTestActionServiceImpl;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\n\n\nimport java.io.IOException;\nimport java.io.InputStreamReader;\nimport java.net.ConnectException;\nimport java.net.MalformedURLException;\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Properties;\n/**\n * User: RFellows Date: 6/4/12\n */\n@JobEntry( id = \"OozieJobExecutor\", name = \"Oozie.JobExecutor.PluginName\",\n  description = \"Oozie.JobExecutor.PluginDescription\",\n  categoryDescription = \"i18n:org.pentaho.di.job:JobCategory.Category.BigData\", image = \"oozie-job-executor.svg\",\n  documentationUrl = \"https://pentaho-community.atlassian.net/wiki/display/EAI/Oozie+Job+Executor\",\n  i18nPackageName = \"org.pentaho.di.job.entries.oozie\", version = \"1\" )\npublic class OozieJobExecutorJobEntry extends AbstractJobEntry<OozieJobExecutorConfig> implements Cloneable,\n  JobEntryInterface {\n\n  public static final String HTTP_ERROR_CODE_404 = \"HTTP error code: 404\";\n  public static final String HTTP_ERROR_CODE_401 = \"HTTP error code: 401\";\n  public static final String HTTP_ERROR_CODE_403 = \"HTTP error code: 403\";\n  public static final String USER_NAME = \"user.name\";\n  public static final String VALIDATION_MESSAGES_MISSING_CONFIGURATION = \"ValidationMessages.Missing.Configuration\";\n  private final NamedClusterService namedClusterService;\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n  private HadoopClientServices hadoopClientServices = null;\n\n  public OozieJobExecutorJobEntry() {\n    this.namedClusterService = NamedClusterManager.getInstance();\n    this.runtimeTester = RuntimeTesterImpl.getInstance();\n    this.runtimeTestActionService = 
RuntimeTestActionServiceImpl.getInstance();\n    this.namedClusterServiceLocator = BigDataServicesHelper.getNamedClusterServiceLocator();\n  }\n  public OozieJobExecutorJobEntry(\n    NamedClusterService namedClusterService,\n    RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester,\n    NamedClusterServiceLocator namedClusterServiceLocator ) {\n    this.namedClusterService = namedClusterService;\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.runtimeTester = runtimeTester;\n  }\n\n  @Override\n  protected OozieJobExecutorConfig createJobConfig() {\n    return new OozieJobExecutorConfig();\n  }\n\n  public List<String> getValidationWarnings( OozieJobExecutorConfig config, boolean checkOozieConnection ) {\n    List<String> messages = new ArrayList<>();\n\n    // verify there is a job name\n    if ( StringUtil.isEmpty( config.getJobEntryName() ) ) {\n      messages.add( BaseMessages.getString( OozieJobExecutorJobEntry.class, \"ValidationMessages.Missing.JobName\" ) );\n    }\n    NamedCluster nc = null;\n    try {\n      nc = getNamedCluster( config );\n    } catch ( MetaStoreException e ) {\n      messages\n        .add( BaseMessages.getString( OozieJobExecutorJobEntry.class, VALIDATION_MESSAGES_MISSING_CONFIGURATION ) );\n    }\n\n    if ( null == nc || nc.getName().equals( \"\" ) || nc.getShimIdentifier().equals( \"\" ) ) {\n      messages.add(\n        BaseMessages.getString( OozieJobExecutorJobEntry.class, \"ValidationMessages.Missing.NamedCluster\", config.getClusterName() ) );\n      return messages;\n    }\n\n    verifyOozieUrl( config, checkOozieConnection, messages, nc );\n    checkOozieConnection( config, checkOozieConnection, messages );\n    verifyJobConfiguration( config, checkOozieConnection, messages );\n\n    boolean pollingIntervalValid = false;\n    try {\n      long pollingInterval = JobEntryUtils.asLong( 
config.getBlockingPollingInterval(), variables );\n      pollingIntervalValid = pollingInterval > 0;\n    } catch ( Exception ex ) {\n      // ignore, polling interval is not valid\n    }\n    if ( !pollingIntervalValid ) {\n      messages.add( BaseMessages.getString( OozieJobExecutorJobEntry.class,\n        \"ValidationMessages.Invalid.PollingInterval\" ) );\n    }\n\n    return messages;\n  }\n\n  private void verifyJobConfiguration( OozieJobExecutorConfig config, boolean checkOozieConnection,\n                                       List<String> messages ) {\n    // path to oozie workflow properties file\n    if ( config.getModeAsEnum() == JobEntryMode.QUICK_SETUP && StringUtil.isEmpty( config.getOozieWorkflowConfig() ) ) {\n      messages.add( BaseMessages.getString( OozieJobExecutorJobEntry.class,\n        \"ValidationMessages.Missing.Workflow.Properties\" ) );\n    } else {\n      // make sure the path to the properties file is valid\n      try {\n        Properties props = getProperties( config );\n\n        // make sure it has at minimum a workflow definition (need app path)\n        if ( checkOozieConnection && !hadoopClientServices.hasOozieAppPath( props ) ) {\n          messages.add( BaseMessages.getString( OozieJobExecutorJobEntry.class,\n            \"ValidationMessages.App.Path.Property.Missing\" ) );\n        }\n\n      } catch ( KettleFileException e ) {\n        // can't find the file specified as the Workflow Properties definition\n        messages.add( BaseMessages.getString( OozieJobExecutorJobEntry.class,\n          \"ValidationMessages.Workflow.Properties.FileNotFound\" ) );\n      } catch ( IOException e ) {\n        // something went wrong with the reading of the properties file\n        messages.add( BaseMessages.getString( OozieJobExecutorJobEntry.class,\n          \"ValidationMessages.Workflow.Properties.ReadError\" ) );\n      }\n    }\n  }\n\n  private void checkOozieConnection( OozieJobExecutorConfig config, boolean 
checkOozieConnection,\n                                     List<String> messages ) {\n    if ( checkOozieConnection && !StringUtils.isEmpty( getEffectiveOozieUrl( config ) ) ) {\n      try {\n        hadoopClientServices = getHadoopClientServices( config );\n        hadoopClientServices.getOozieProtocolUrl();\n        hadoopClientServices.validateOozieWSVersion();\n      } catch ( HadoopClientServicesException e ) {\n        if ( e.getErrorCode().equals( HTTP_ERROR_CODE_404 )\n          || ( e.getCause() != null\n          && ( e.getCause() instanceof MalformedURLException || e.getCause() instanceof ConnectException ) ) ) {\n          messages\n            .add( BaseMessages.getString( OozieJobExecutorJobEntry.class, \"ValidationMessages.Invalid.Oozie.URL\" ) );\n        } else if ( e.getErrorCode().equals( HTTP_ERROR_CODE_401 ) || e.getErrorCode().equals( HTTP_ERROR_CODE_403 ) ) {\n          messages.add(\n            BaseMessages.getString( OozieJobExecutorJobEntry.class, \"ValidationMessages.Unauthorized.Oozie.Access\" ) );\n        } else {\n          messages.add( BaseMessages\n            .getString( OozieJobExecutorJobEntry.class, \"ValidationMessages.Incompatible.Oozie.Versions\" ) );\n        }\n      }\n    }\n  }\n\n  private void verifyOozieUrl( OozieJobExecutorConfig config, boolean checkOozieConnection, List<String> messages,\n                               NamedCluster nc ) {\n    if ( StringUtils.isEmpty( getEffectiveOozieUrl( config ) ) ) {\n      messages\n        .add( BaseMessages.getString( OozieJobExecutorJobEntry.class, VALIDATION_MESSAGES_MISSING_CONFIGURATION ) );\n    } else {\n      try {\n        if ( !checkOozieConnection ) {\n          if ( nc == null ) {\n            messages.add(\n              BaseMessages.getString( OozieJobExecutorJobEntry.class, VALIDATION_MESSAGES_MISSING_CONFIGURATION ) );\n          } else if ( StringUtils.isEmpty( nc.getOozieUrl() ) ) {\n            messages\n              .add( BaseMessages.getString( 
OozieJobExecutorJobEntry.class, \"ValidationMessages.Missing.Oozie.URL\" ) );\n          }\n        }\n      } catch ( Throwable t ) {\n        messages\n          .add( BaseMessages.getString( OozieJobExecutorJobEntry.class, \"ValidationMessages.Missing.Oozie.URL\" ) );\n      }\n    }\n  }\n\n  private NamedCluster getNamedCluster( OozieJobExecutorConfig config ) throws MetaStoreException {\n    // load from system first, then\n    NamedCluster nc = null;\n    if ( !StringUtils.isEmpty( jobConfig.getClusterName() )\n      && namedClusterService.contains( jobConfig.getClusterName(), metaStore ) ) {\n      // pull config from NamedCluster\n      nc = namedClusterService.read( jobConfig.getClusterName(), metaStore );\n    }\n    // fall back to copy stored with job (AbstractMeta)\n    if ( nc == null ) {\n      nc = config.getNamedCluster();\n    }\n    // final fallback, construct cluster based on oozie url from job config\n    if ( nc == null && namedClusterService != null ) {\n      nc = namedClusterService.getClusterTemplate();\n      nc.setOozieUrl( config.getOozieUrl() );\n    }\n    return nc;\n  }\n\n  /**\n   * Validates the current configuration of the step.\n   * <p/>\n   * <strong>To be valid in Quick Setup mode:</strong> <ul> <li>Name is required</li> <li>Oozie URL is required and\n   * must be a valid oozie location</li> <li>Workflow Properties file path is required and must be a valid job\n   * properties file</li> </ul>\n   *\n   * @param config Configuration to validate\n   * @return\n   */\n  @Override\n  public List<String> getValidationWarnings( OozieJobExecutorConfig config ) {\n    return getValidationWarnings( config, true );\n  }\n\n  public Properties getPropertiesFromFile( OozieJobExecutorConfig config ) throws IOException, KettleFileException {\n    return getPropertiesFromFile( parentJobMeta.getBowl(), config, getVariableSpace() );\n  }\n\n  public static Properties getPropertiesFromFile( Bowl bowl, OozieJobExecutorConfig config,\n    
VariableSpace variableSpace ) throws IOException, KettleFileException {\n    InputStreamReader reader =\n      new InputStreamReader( KettleVFS.getInstance( bowl )\n        .getInputStream( variableSpace.environmentSubstitute( config.getOozieWorkflowConfig() ) ) );\n\n    Properties jobProps = new Properties();\n    jobProps.load( reader );\n    return jobProps;\n  }\n\n  public Properties getProperties( OozieJobExecutorConfig config ) throws KettleFileException, IOException {\n    return getProperties( parentJobMeta.getBowl(), config, getVariableSpace() );\n  }\n\n  public static Properties getProperties( Bowl bowl, OozieJobExecutorConfig config, VariableSpace variableSpace )\n    throws KettleFileException, IOException {\n    Properties jobProps;\n    if ( config.getModeAsEnum() == JobEntryMode.ADVANCED_LIST && config.getWorkflowProperties() != null ) {\n      jobProps = new Properties();\n      for ( PropertyEntry propertyEntry : config.getWorkflowProperties() ) {\n        if ( propertyEntry.getKey() != null ) {\n          String value = propertyEntry.getValue() == null ? 
\"\" : propertyEntry.getValue();\n          jobProps.setProperty( propertyEntry.getKey(), variableSpace.environmentSubstitute( value ) );\n        }\n      }\n    } else {\n      jobProps = getPropertiesFromFile( bowl, config, variableSpace );\n    }\n    return jobProps;\n  }\n\n  @Override\n  protected Runnable getExecutionRunnable( final Result jobResult ) {\n    return new Runnable() {\n      @Override\n      public void run() {\n\n        HadoopClientServices hadoopClientServices = getHadoopClientServices();\n\n        try {\n          hadoopClientServices.validateOozieWSVersion();\n        } catch ( HadoopClientServicesException e ) {\n\n          setJobResultFailed( jobResult );\n\n          if ( e.getErrorCode().equals( HTTP_ERROR_CODE_404 )\n            || ( e.getCause() != null\n            && ( e.getCause() instanceof MalformedURLException || e.getCause() instanceof ConnectException ) ) ) {\n            logError( BaseMessages.getString( OozieJobExecutorJobEntry.class, \"ValidationMessages.Invalid.Oozie.URL\" ),\n              e );\n          } else if ( e.getErrorCode().equals( HTTP_ERROR_CODE_401 ) || e.getErrorCode()\n            .equals( HTTP_ERROR_CODE_403 ) ) {\n            logError( BaseMessages\n              .getString( OozieJobExecutorJobEntry.class, \"ValidationMessages.Unauthorized.Oozie.Access\" ) );\n          } else {\n            logError( BaseMessages.getString( OozieJobExecutorJobEntry.class,\n              \"ValidationMessages.Incompatible.Oozie.Versions\" ), e );\n          }\n        }\n\n        try {\n          Properties jobProps = getProperties( jobConfig );\n\n          // make sure we supply the current user name\n          if ( !jobProps.containsKey( USER_NAME ) ) {\n            jobProps.setProperty( USER_NAME, getVariableSpace().environmentSubstitute( \"${\" + USER_NAME + \"}\" ) );\n          }\n\n          OozieJobInfo job = hadoopClientServices.runOozie( jobProps );\n          if ( JobEntryUtils.asBoolean( 
getJobConfig().getBlockingExecution(), variables ) ) {\n            while ( job.isRunning() ) {\n              long interval = JobEntryUtils.asLong( jobConfig.getBlockingPollingInterval(), variables );\n              Thread.sleep( interval );\n            }\n            String logDetail = job.getJobLog();\n            if ( job.didSucceed() ) {\n              jobResult.setResult( true );\n              logDetailed( logDetail );\n            } else {\n              // it failed\n              setJobResultFailed( jobResult );\n              logError( logDetail );\n            }\n          }\n\n        } catch ( KettleFileException e ) {\n          setJobResultFailed( jobResult );\n          logError(\n            BaseMessages.getString( OozieJobExecutorJobEntry.class, \"Oozie.JobExecutor.ERROR.File.Resolution\" ), e );\n        } catch ( IOException e ) {\n          setJobResultFailed( jobResult );\n          logError( BaseMessages.getString( OozieJobExecutorJobEntry.class, \"Oozie.JobExecutor.ERROR.Props.Loading\" ),\n            e );\n        } catch ( HadoopClientServicesException | OozieServiceException e ) {\n          setJobResultFailed( jobResult );\n          logError(\n            BaseMessages.getString( OozieJobExecutorJobEntry.class, \"Oozie.JobExecutor.ERROR.OozieClient\" ), e );\n        } catch ( InterruptedException e ) {\n          setJobResultFailed( jobResult );\n          logError( BaseMessages.getString( OozieJobExecutorJobEntry.class, \"Oozie.JobExecutor.ERROR.Threading\" ), e );\n        }\n      }\n    };\n  }\n\n  @Override\n  protected void handleUncaughtThreadException( Thread t, Throwable e, Result jobResult ) {\n    logError( BaseMessages.getString( OozieJobExecutorJobEntry.class, \"Oozie.JobExecutor.ERROR.Generic\" ), e );\n    setJobResultFailed( jobResult );\n  }\n\n  @VisibleForTesting\n  String getEffectiveOozieUrl( OozieJobExecutorConfig config ) {\n    String oozieUrl = config.getOozieUrl();\n    try {\n      NamedCluster nc = 
getNamedCluster( config );\n\n      if ( nc != null && !StringUtils.isEmpty( nc.getOozieUrl() ) ) {\n        oozieUrl = nc.getOozieUrl();\n      }\n    } catch ( Throwable t ) {\n      logDebug( t.getMessage(), t );\n    }\n    return getVariableSpace().environmentSubstitute( oozieUrl );\n  }\n\n  public HadoopClientServices getHadoopClientServices() {\n    return getHadoopClientServices( jobConfig );\n  }\n\n  public HadoopClientServices getHadoopClientServices( OozieJobExecutorConfig config ) {\n    try {\n      NamedCluster cluster = getNamedCluster( config ).clone();\n      cluster.setOozieUrl( getEffectiveOozieUrl( config ) );\n      return namedClusterServiceLocator.getService(\n        cluster,\n        HadoopClientServices.class );\n    } catch ( ClusterInitializationException e ) {\n      logError( \"Cluster initialization failure on service load\", e );\n    } catch ( NullPointerException | MetaStoreException e ) {\n      logError( \"Failed to read cluster from metastore\", e );\n    }\n    return null;\n  }\n\n\n  public RuntimeTestActionService getRuntimeTestActionService() {\n    return runtimeTestActionService;\n  }\n\n  public RuntimeTester getRuntimeTester() {\n    return runtimeTester;\n  }\n\n  public NamedClusterService getNamedClusterService() {\n    return namedClusterService;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/oozie/core/src/main/java/org/pentaho/big/data/kettle/plugins/oozie/OozieJobExecutorJobEntryController.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.oozie;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.apache.commons.vfs2.FileObject;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.kettle.plugins.job.AbstractJobEntryController;\nimport org.pentaho.big.data.kettle.plugins.job.BlockableJobConfig;\nimport org.pentaho.big.data.kettle.plugins.job.JobEntryMode;\nimport org.pentaho.big.data.kettle.plugins.job.PropertyEntry;\nimport org.pentaho.big.data.plugins.common.ui.HadoopClusterDelegateImpl;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.binding.Binding;\nimport org.pentaho.ui.xul.binding.BindingConvertor;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.ui.xul.components.XulMenuList;\nimport org.pentaho.ui.xul.containers.XulDialog;\nimport org.pentaho.ui.xul.containers.XulTree;\nimport org.pentaho.ui.xul.stereotype.Bindable;\nimport org.pentaho.ui.xul.util.AbstractModelList;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\nimport java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.Map;\nimport 
java.util.Properties;\n\n/**\n * User: RFellows Date: 6/4/12\n */\npublic class OozieJobExecutorJobEntryController extends\n  AbstractJobEntryController<OozieJobExecutorConfig, OozieJobExecutorJobEntry> {\n\n  public static final String OOZIE_JOB_EXECUTOR = \"oozie-job-executor\";\n  private static final String VALUE = \"value\";\n  public static final String ERROR_BROWSING_DIRECTORY = \"ErrorBrowsingDirectory\";\n  public static final String FILE_FILTER_NAMES_PROPERTIES = \"FileFilterNames.Properties\";\n  public static final String MODE_TOGGLE_LABEL = \"mode-toggle-label\";\n  public static final String ADVANCED_TABLE = \"advanced-table\";\n  public static final String CHILDREN = \"children\";\n  public static final String ELEMENTS = \"elements\";\n  private final HadoopClusterDelegateImpl hadoopClusterDelegate;\n\n  protected AbstractModelList<PropertyEntry> advancedArguments;\n  private transient boolean advancedArgumentsChanged = false;\n  protected XulTree variablesTree = null;\n\n  private Binding namedClustersBinding = null;\n\n  /**\n   * The text for the Quick Setup/Advanced Options mode toggle (label)\n   */\n  private String modeToggleLabel;\n\n\n  public OozieJobExecutorJobEntryController( JobMeta jobMeta, XulDomContainer container,\n                                             OozieJobExecutorJobEntry jobEntry, BindingFactory bindingFactory,\n                                             HadoopClusterDelegateImpl hadoopClusterDelegate ) {\n    super( jobMeta, container, jobEntry, bindingFactory );\n    advancedArguments = new AbstractModelList<PropertyEntry>();\n    this.hadoopClusterDelegate = hadoopClusterDelegate;\n\n    if ( jobEntry.getJobConfig().getWorkflowProperties().size() > 0 ) {\n      advancedArguments.addAll( jobEntry.getJobConfig().getWorkflowProperties() );\n    }\n    populateNamedClusters();\n  }\n\n  @Override\n  protected void beforeInit() {\n    setMode( jobEntry.getJobConfig().getModeAsEnum() );\n    variablesTree = (XulTree) 
container.getDocumentRoot().getElementById( ADVANCED_TABLE );\n  }\n\n  @Override\n  protected void syncModel() {\n\n    if ( !shouldUseAdvancedProperties() ) {\n      // quick setup mode: clear out any advanced properties\n      advancedArguments.clear();\n      if ( config.getWorkflowProperties() != null ) {\n        config.getWorkflowProperties().clear();\n      }\n    } else {\n      if ( advancedArguments.size() == 0 && !StringUtil.isEmpty( config.getOozieWorkflowConfig() ) ) {\n        preFillAdvancedArgs();\n      }\n      // advanced mode was used to modify/create properties\n      // save the args out...\n      ArrayList<PropertyEntry> m = new ArrayList<PropertyEntry>( advancedArguments );\n      config.setWorkflowProperties( m );\n    }\n\n    config.setMode( jobEntryMode );\n  }\n\n  private void preFillAdvancedArgs() {\n    try {\n      if ( jobEntry != null && config != null ) {\n        Properties props = jobEntry.getProperties( config );\n        for ( Map.Entry<Object, Object> prop : props.entrySet() ) {\n          if ( prop.getKey() instanceof String && prop.getValue() instanceof String ) {\n            PropertyEntry pEntry = new PropertyEntry( ( prop.getKey() ).toString(), prop.getValue().toString() );\n            advancedArguments.add( pEntry );\n          }\n        }\n      }\n    } catch ( Exception e ) {\n      // could not read in the props; leave the advanced args empty\n    }\n  }\n\n  /**\n   * Determines if the advanced properties should be used instead of the quick-setup defined workflow properties file\n   *\n   * @return true if the job entry is in advanced (list) mode\n   */\n  protected boolean shouldUseAdvancedProperties() {\n    return jobEntryMode == JobEntryMode.ADVANCED_LIST;\n  }\n\n  /**\n   * Make the mode setter available for unit testing\n   *\n   * @param mode the mode to set\n   */\n  protected void setJobEntryMode( JobEntryMode mode ) {\n    this.jobEntryMode = mode;\n  }\n\n  @Override\n  protected void createBindings( final OozieJobExecutorConfig config, XulDomContainer container,\n                                 BindingFactory 
bindingFactory, Collection<Binding> bindings ) {\n    bindingFactory.setBindingType( Binding.Type.BI_DIRECTIONAL );\n    bindings.add( bindingFactory.createBinding( config, BlockableJobConfig.JOB_ENTRY_NAME,\n      BlockableJobConfig.JOB_ENTRY_NAME, VALUE ) );\n\n    //config.setRepository( rep );\n    String clusterName = config.getClusterName();\n\n    namedClustersBinding = bindingFactory.createBinding( config.getNamedClusters(), \"children\", \"named-clusters\", \"elements\" );\n    try {\n      namedClustersBinding.fireSourceChanged();\n    } catch ( Throwable ignored ) {\n      // Ignore\n    }\n    bindings.add( namedClustersBinding );\n    Binding selectedNamedClusterBinding = bindingFactory.createBinding( \"named-clusters\", \"selectedIndex\", config,\n      \"namedCluster\", new BindingConvertor<Integer, NamedCluster>() {\n        public NamedCluster sourceToTarget( final Integer index ) {\n          List<NamedCluster> clusters = config.getNamedClusters();\n          if ( index == -1 || clusters.isEmpty() ) {\n            return null;\n          }\n          return clusters.get( index );\n        }\n\n        public Integer targetToSource( final NamedCluster value ) {\n          return config.getNamedClusters().indexOf( value );\n        }\n      } );\n    try {\n      selectedNamedClusterBinding.fireSourceChanged();\n    } catch ( Throwable ignored ) {\n      // Ignore\n    }\n    bindings.add( selectedNamedClusterBinding );\n\n    selectNamedCluster( clusterName );\n\n    bindings.add( bindingFactory.createBinding( config, OozieJobExecutorConfig.OOZIE_WORKFLOW_CONFIG,\n      OozieJobExecutorConfig.OOZIE_WORKFLOW_CONFIG, VALUE ) );\n\n    bindings.add( bindingFactory.createBinding( config, BlockableJobConfig.BLOCKING_POLLING_INTERVAL,\n      BlockableJobConfig.BLOCKING_POLLING_INTERVAL, VALUE ) );\n\n    BindingConvertor<String, Boolean> string2BooleanConvertor = new BindingConvertor<String, Boolean>() {\n      @Override\n      public String 
targetToSource( Boolean aBoolean ) {\n        String val = aBoolean.toString();\n        return val;\n      }\n\n      @Override\n      public Boolean sourceToTarget( String s ) {\n        Boolean val = Boolean.valueOf( s );\n        return val;\n      }\n    };\n    bindings.add( bindingFactory.createBinding( config, BlockableJobConfig.BLOCKING_EXECUTION,\n      BlockableJobConfig.BLOCKING_EXECUTION, \"checked\", string2BooleanConvertor ) );\n\n    bindingFactory.setBindingType( Binding.Type.ONE_WAY );\n    bindings.add( bindingFactory.createBinding( this, \"modeToggleLabel\", getModeToggleLabelElementId(), VALUE ) );\n\n    // only enable the polling interval text box if blocking is checked\n    bindings.add( bindingFactory.createBinding( config, BlockableJobConfig.BLOCKING_EXECUTION,\n      BlockableJobConfig.BLOCKING_POLLING_INTERVAL, \"!disabled\", string2BooleanConvertor ) );\n\n    BindingConvertor<AbstractModelList<PropertyEntry>, Collection<PropertyEntry>> propsChangedBindingConvertor =\n      new BindingConvertor<AbstractModelList<PropertyEntry>, Collection<PropertyEntry>>() {\n        @Override\n        public Collection<PropertyEntry> sourceToTarget( AbstractModelList<PropertyEntry> propertyEntries ) {\n          // user has modified the properties in advanced mode, set the flag...\n          advancedArgumentsChanged = true;\n          return propertyEntries;\n        }\n\n        @Override\n        public AbstractModelList<PropertyEntry> targetToSource( Collection<PropertyEntry> propertyEntries ) {\n          // one-way convertor, don't need this\n          return null;\n        }\n      };\n\n    bindings.add( bindingFactory.createBinding( advancedArguments, CHILDREN, variablesTree, ELEMENTS,\n      propsChangedBindingConvertor ) );\n\n  }\n\n  @VisibleForTesting List<NamedCluster> getNamedClusters() {\n    try {\n      return jobEntry.getNamedClusterService().list( jobMeta.getMetaStore() );\n    } catch ( MetaStoreException e ) {\n      
jobEntry.logError( e.getMessage(), e );\n      return Collections.emptyList();\n    }\n  }\n\n  public void selectNamedCluster( String configName ) {\n    @SuppressWarnings( \"unchecked\" )\n    XulMenuList<NamedCluster> namedConfigMenu =\n      (XulMenuList<NamedCluster>) container.getDocumentRoot().getElementById( \"named-clusters\" ); //$NON-NLS-1$\n    for ( NamedCluster nc : getNamedClusters() ) {\n      if ( configName != null && configName.equals( nc.getName() ) ) {\n        namedConfigMenu.setSelectedItem( nc );\n      }\n    }\n  }\n\n  public void editNamedCluster() {\n    XulDialog xulDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getElementById( \"oozie-job-executor\" );\n    Shell shell = (Shell) xulDialog.getRootObject();\n\n    String clusterName = hadoopClusterDelegate\n      .editNamedCluster( null, config.getNamedCluster(), shell );\n    if ( clusterName != null ) {\n      // a null name means the cancel button was pressed; otherwise refresh and select the edited cluster\n      populateNamedClusters();\n      selectNamedCluster( clusterName );\n    }\n  }\n\n  protected void populateNamedClusters() {\n    config.getNamedClusters().clear();\n    config.getNamedClusters().addAll( getNamedClusters() );\n  }\n\n  public void newNamedCluster() {\n    XulDialog xulDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getElementById( \"oozie-job-executor\" );\n    Shell shell = (Shell) xulDialog.getRootObject();\n    String newNamedCluster = hadoopClusterDelegate.newNamedCluster( jobMeta, null, shell );\n    if ( newNamedCluster != null ) {\n      // a null name means the cancel button was pressed; otherwise refresh and select the new cluster\n      populateNamedClusters();\n      selectNamedCluster( newNamedCluster );\n    }\n  }\n\n  @Bindable\n  public void addNewProperty() {\n    advancedArgumentsChanged = true;\n    try {\n      advancedArguments.add( new PropertyEntry( \"key\", \"value\" ) );\n    } catch ( Exception e ) {\n      // set elements manually to work around a failure when adding a new item while there is a cell in edit mode\n      variablesTree.setElements( advancedArguments );\n    }\n  }\n\n  @Bindable\n  public void removeProperty() {\n    advancedArgumentsChanged = true;\n    Collection<PropertyEntry> selected = variablesTree.getSelectedItems();\n    for ( PropertyEntry pe : selected ) {\n      try {\n        advancedArguments.remove( pe );\n      } catch ( Exception e ) {\n        // The SwtTree selection model implementation is buggy: if an item (row) is selected\n        // while a field is in edit mode, removing the item sometimes fails.\n        // Just set the children manually in this case to make sure we stay in sync.\n        variablesTree.setElements( advancedArguments );\n      }\n    }\n  }\n\n  /**\n   * Accept and apply the changes made in the dialog, then close it.\n   */\n  @Override\n  @Bindable\n  public void accept() {\n    syncModel();\n\n    List<String> warnings = jobEntry.getValidationWarnings( getConfig(), false );\n    if ( !warnings.isEmpty() ) {\n      StringBuilder sb = new StringBuilder();\n      for ( String warning : warnings ) {\n        sb.append( warning ).append( \"\\n\" );\n      }\n      showErrorDialog( BaseMessages.getString( OozieJobExecutorJobEntry.class, \"ValidationError.Dialog.Title\" ), sb\n        .toString() );\n      return;\n    }\n\n    super.accept();\n  }\n\n  public AbstractModelList<PropertyEntry> getAdvancedArguments() {\n    return advancedArguments;\n  }\n\n  public void setAdvancedArguments( AbstractModelList<PropertyEntry> advancedArguments ) {\n    advancedArgumentsChanged = true;\n    this.advancedArguments = advancedArguments;\n  }\n\n  @Bindable\n  public boolean isAdvancedArgumentsChanged() {\n    return advancedArgumentsChanged;\n  }\n\n  @Override\n  protected String getDialogElementId() {\n    return OOZIE_JOB_EXECUTOR;\n  }\n\n  /**\n   * @return the id of the element responsible for toggling between \"Quick Setup\" and \"Advanced Options\" modes\n   
*/\n  @Bindable\n  public String getModeToggleLabelElementId() {\n    return MODE_TOGGLE_LABEL;\n  }\n\n  @Bindable\n  public String getModeToggleLabel() {\n    return modeToggleLabel;\n  }\n\n  @Bindable\n  public void setModeToggleLabel( String modeToggleLabel ) {\n    String prev = this.modeToggleLabel;\n    this.modeToggleLabel = modeToggleLabel;\n    firePropertyChange( \"modeToggleLabel\", prev, modeToggleLabel );\n  }\n\n  @Override\n  protected void setModeToggleLabel( JobEntryMode mode ) {\n    switch ( mode ) {\n      case ADVANCED_LIST:\n        setModeToggleLabel( BaseMessages\n          .getString( OozieJobExecutorJobEntry.class, \"Oozie.AdvancedOptions.Button.Text\" ) );\n        break;\n      case QUICK_SETUP:\n        setModeToggleLabel(\n          BaseMessages.getString( OozieJobExecutorJobEntry.class, \"Oozie.BasicOptions.Button.Text\" ) );\n        break;\n      default:\n        throw new RuntimeException( \"unsupported JobEntryMode\" );\n    }\n  }\n\n  /**\n   * Make sure everything required is entered and valid\n   */\n  @Bindable\n  public void testSettings() {\n    syncModel();\n    try {\n      List<String> warnings = jobEntry.getValidationWarnings( getConfig() );\n      if ( !warnings.isEmpty() ) {\n        StringBuilder sb = new StringBuilder();\n        for ( String warning : warnings ) {\n          sb.append( warning ).append( \"\\n\" );\n        }\n        showErrorDialog( BaseMessages.getString( OozieJobExecutorJobEntry.class, \"ValidationError.Dialog.Title\" ), sb\n                .toString() );\n        return;\n      }\n    } catch ( RuntimeException re ) {\n      showErrorDialog( BaseMessages.getString( OozieJobExecutorJobEntry.class, \"ValidationError.Dialog.Title\" ),\n              re.getMessage() );\n      throw re;\n    }\n    showInfoDialog( BaseMessages.getString( OozieJobExecutorJobEntry.class, \"Info.Dialog.Title\" ), BaseMessages\n            .getString( OozieJobExecutorJobEntry.class, \"ValidationMsg.OK\" ) );\n  }\n\n 
 /**\n   * Open the VFS file browser to allow for selection of the workflow job properties configuration file.\n   */\n  @Bindable\n  public void browseWorkflowConfig() {\n    FileObject path = null;\n    Spoon spoon = Spoon.getInstance();\n    try {\n      path =\n        KettleVFS.getInstance( spoon.getExecutionBowl() )\n          .getFileObject( jobEntry.getVariableSpace().environmentSubstitute( getConfig().getOozieWorkflowConfig() ) );\n    } catch ( Exception e ) {\n      // Ignore, use null (default VFS browse path)\n    }\n    try {\n      FileObject exportDir =\n        browseVfs( null, path, VfsFileChooserDialog.VFS_DIALOG_OPEN_DIRECTORY, null, true, \"file\" );\n      if ( exportDir != null ) {\n        getConfig().setOozieWorkflowConfig( exportDir.getName().getURI() );\n      }\n    } catch ( KettleFileException e ) {\n      getJobEntry().logError( BaseMessages.getString( OozieJobExecutorJobEntry.class, ERROR_BROWSING_DIRECTORY ), e );\n    }\n  }\n\n  @Override\n  protected String[] getFileFilters() {\n    return new String[] { \"*.properties\" };\n  }\n\n  @Override\n  protected String[] getFileFilterNames() {\n    return new String[] { BaseMessages.getString( OozieJobExecutorJobEntry.class, FILE_FILTER_NAMES_PROPERTIES ) };\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/oozie/core/src/main/java/org/pentaho/big/data/kettle/plugins/oozie/OozieJobExecutorJobEntryDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.oozie;\n\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.big.data.plugins.common.ui.HadoopClusterDelegateImpl;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.job.entry.JobEntryDialogInterface;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.ui.core.database.dialog.tags.ExtTextbox;\nimport org.pentaho.di.ui.job.entry.JobEntryDialog;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.spoon.XulSpoonSettingsManager;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.XulRunner;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.ui.xul.binding.DefaultBindingFactory;\nimport org.pentaho.ui.xul.swt.SwtXulLoader;\nimport org.pentaho.ui.xul.swt.SwtXulRunner;\n\nimport java.util.Enumeration;\nimport java.util.ResourceBundle;\n\n/**\n * User: RFellows Date: 6/4/12\n */\n@PluginDialog( id = \"OozieJobExecutor\", image = \"oozie-job-executor.svg\", pluginType = PluginDialog.PluginType.JOBENTRY,\n  documentationUrl = \"https://pentaho-community.atlassian.net/wiki/display/EAI/Oozie+Job+Executor\" )\npublic class OozieJobExecutorJobEntryDialog extends JobEntryDialog implements JobEntryDialogInterface {\n\n  private static final String OOZIE_JOB_EXECUTOR_XUL = \"org/pentaho/big/data/kettle/plugins/oozie/xul/OozieJobExecutor.xul\";\n  public 
static final String VARIABLETEXTBOX = \"VARIABLETEXTBOX\";\n  public static final String LABEL = \"LABEL\";\n\n  private OozieJobExecutorJobEntryController controller = null;\n  private XulDomContainer container = null;\n\n  public OozieJobExecutorJobEntryDialog( Shell parent, JobEntryInterface jobEntry, Repository rep, JobMeta jobMeta )\n    throws XulException {\n    super( parent, jobEntry, rep, jobMeta );\n    init( OozieJobExecutorJobEntry.class.cast( jobEntry ) );\n  }\n\n  protected void init( OozieJobExecutorJobEntry jobEntry ) throws XulException {\n    SwtXulLoader xulLoader = new SwtXulLoader();\n    xulLoader.setSettingsManager( XulSpoonSettingsManager.getInstance() );\n    xulLoader.registerClassLoader( getClass().getClassLoader() );\n\n    // register the variable-aware text box (ExtTextbox) for use in XUL\n    xulLoader.register( VARIABLETEXTBOX, ExtTextbox.class.getName() );\n    xulLoader.setOuterContext( shell );\n\n    // Load the XUL document with the dialog defined in it\n    container = xulLoader.loadXul( getXulFile(), bundle );\n\n    BindingFactory bf = new DefaultBindingFactory();\n    bf.setDocument( container.getDocumentRoot() );\n    controller = createController( jobEntry, container, bf );\n    controller.setJobMeta( jobMeta );\n\n    String clusterName = controller.getConfig().getClusterName();\n\n    container.addEventHandler( controller );\n\n    // Load up the SWT-XUL runtime and initialize it with our container\n    final XulRunner runner = new SwtXulRunner();\n    runner.addContainer( container );\n    runner.initialize();\n\n    controller.selectNamedCluster( clusterName );\n  }\n\n  protected OozieJobExecutorJobEntryController createController( OozieJobExecutorJobEntry jobEntry,\n      XulDomContainer container, BindingFactory bindingFactory ) {\n    return new OozieJobExecutorJobEntryController( jobMeta, container, jobEntry, bindingFactory,\n      new HadoopClusterDelegateImpl( Spoon.getInstance(), 
jobEntry.getNamedClusterService(),\n        jobEntry.getRuntimeTestActionService(), jobEntry.getRuntimeTester() ) );\n  }\n\n  private String getMessage( String key ) {\n    return BaseMessages.getString( OozieJobExecutorJobEntry.class, key );\n  }\n\n  @Override\n  public JobEntryInterface open() {\n    return controller.open();\n  }\n\n  protected String getXulFile() {\n    return OOZIE_JOB_EXECUTOR_XUL;\n  }\n\n  protected ResourceBundle bundle = new ResourceBundle() {\n    @Override\n    protected Object handleGetObject( String key ) {\n      return BaseMessages.getString( OozieJobExecutorJobEntry.class, key );\n    }\n\n    @Override\n    public Enumeration<String> getKeys() {\n      return null;\n    }\n  };\n}\n"
  },
  {
    "path": "kettle-plugins/oozie/core/src/main/resources/org/pentaho/big/data/kettle/plugins/oozie/messages/messages_en_US.properties",
    "content": "Oozie.JobExecutor.Dialog.Title=Oozie job executor\nOozie.JobExecutor.Name.Label=Name:\nOozie.JobExecutor.Source.Group=Source:\nOozie.JobExecutor.Workflow.Label=Workflow:\nOozie.JobExecutor.Workflow.Properties.Label=Workflow Properties:\nOozie.JobExecutor.AdvancedTable.Column.Name.Label=Argument\nOozie.JobExecutor.AdvancedTable.Column.Value.Label=Value\nOozie.JobExecutor.OozieUrl.Label=Oozie URL:\nOozie.JobExecutor.NamedCluster.Label=Hadoop Cluster:\nOozie.JobExecutor.NamedCluster.Edit=Edit\nOozie.JobExecutor.NamedCluster.New=New\nOozie.JobExecutor.Browse.Workflow.Config.Label=Browse\nOozie.JobExecutor.Enable.Blocking.Label=Enable Blocking:\nOozie.JobExecutor.Enable.Polling.Interval.Label=Polling Interval (ms):\n\nOozie.JobExecutor.PluginName=Oozie job executor\nOozie.JobExecutor.PluginDescription=Execute an existing Oozie oozieWorkflow\nBigData.Category.Description=Big Data\n\nDialog.Accept=OK\nDialog.Cancel=Cancel\nDialog.Error=Error\nDialog.Test=Test\nDialog.Help=Help\nOozie.AdvancedOptions.Button.Text=Advanced Options\nOozie.BasicOptions.Button.Text=Quick Setup\n\nFileFilterNames.Properties=Properties Files\nErrorBrowsingDirectory=Error browsing for directory\n\nOozie.JobExecutor.ERROR.File.Resolution=Could not find workflow properties file\nOozie.JobExecutor.ERROR.Props.Loading=Could not load workflow properties file\nOozie.JobExecutor.ERROR.OozieClient=Error while running Oozie workflow\nOozie.JobExecutor.ERROR.Threading=Threading error\nOozie.JobExecutor.ERROR.Generic=Error occurred while executing the Oozie Job Executor step\nOozie.JobExecutor.ERROR.InvalidWSVersion=Oozie Client [version {0}] and Oozie Web Service are not compatible\n\nValidationMessages.Missing.JobName=Job name is required.\nValidationMessages.Missing.Oozie.URL=Oozie URL not provided by Hadoop cluster.\nValidationMessages.Missing.Configuration=Hadoop Cluster is required.\nValidationMessages.Incompatible.Oozie.Versions=Oozie Client and Oozie Web Service are not 
compatible\nValidationMessages.Unauthorized.Oozie.Access=Unauthorized or forbidden access to Oozie URL.\nValidationMessages.Missing.Workflow.Properties=Workflow job properties file is required.\nValidationMessages.Missing.NamedCluster=Could not find Named Cluster {0}\nValidationError.Dialog.Title=Configuration Error\nValidationMessages.Invalid.Oozie.URL=Invalid Oozie URL.\nValidationMessages.Invalid.PollingInterval=Polling interval must be a number greater than 0. \nInfo.Dialog.Title=Job Configuration Info\nValidationMsg.OK=Configuration is valid.\nValidationMessages.Workflow.Properties.FileNotFound=Can not resolve Workflow Properties file\nValidationMessages.Workflow.Properties.ReadError=Encountered an error while reading the Workflow Properties file\nValidationMessages.App.Path.Property.Missing=App Path setting not found in Workflow Properties\n\nOozie.JobExecutor.Add.Property.Label=+\nOozie.JobExecutor.Remove.Property.Label=-\n\nJobExecutor.Confirm.Toggle.Quick.Mode.Title=Confirm leaving Advanced Mode\nJobExecutor.Confirm.Toggle.Quick.Mode.Message=Any changes made in \"Advanced\" mode will be lost by switching to \"Quick Setup\" mode.\\nAre you sure you want to proceed?\n\nErrorLoadingClusters.Title=Could not load clusters\nErrorLoadingClusters.Message=Failed to load named cluster list from the metastore."
  },
  {
    "path": "kettle-plugins/oozie/core/src/main/resources/org/pentaho/big/data/kettle/plugins/oozie/xul/OozieJobExecutor.xul",
    "content": "<?xml version=\"1.0\"?>\n<?xml-stylesheet href=\"chrome://global/skin/\" type=\"text/css\"?>\n<window id=\"hadoop-window-wrapper\" onload=\"controller.init()\">\n<dialog id=\"oozie-job-executor\"\n        xmlns=\"http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul\"\n        xmlns:pen=\"http://www.pentaho.org/2008/xul\"\n        title=\"${Oozie.JobExecutor.Dialog.Title}\"\n        resizable=\"true\"\n        appicon=\"ui/images/spoon.ico\"\n        width=\"650\"\n        height=\"600\"\n        buttons=\"\"\n        buttonalign=\"center\">\n    <vbox>\n        <grid>\n            <columns>\n                <column/>\n                <column/>\n            </columns>\n            <rows>\n                <row>\n                    <label value=\"${Oozie.JobExecutor.Name.Label}\"/>\n                    <textbox id=\"jobEntryName\" flex=\"1\" multiline=\"false\"/>\n                </row>\n            </rows>\n        </grid>\n    </vbox>\n    <vbox flex=\"1\">\n        <grid>\n            <columns>\n                <column/>\n                <column flex=\"1\"/>\n            </columns>\n            <rows>\n                <row>\n                    <hbox padding=\"5\">\n                        <label value=\"${Oozie.JobExecutor.NamedCluster.Label}\"/>\n                    </hbox>\n                    <!-- Wrap with an hbox so all components align -->\n                    <hbox flex=\"1\" padding=\"5\">\n\t\t\t\t\t\t<grid>\n\t\t\t\t\t\t\t<columns>\n\t\t\t\t\t\t\t\t<column/>\n\t\t\t\t\t\t\t\t<column/>\n\t\t\t\t\t\t\t\t<column/>\n\t\t\t\t\t\t\t</columns>\n\t\t\t\t\t\t\t<rows>\n\t\t\t\t\t\t\t\t<row>\n\t\t\t\t\t\t\t\t\t<menulist id=\"named-clusters\" pen:binding=\"name\">\n\t\t\t\t\t\t\t\t\t    <menupopup>\n\t\t\t\t\t\t\t\t\t    </menupopup>\n\t\t\t\t\t\t\t\t\t</menulist>\n\t\t\t\t\t\t\t\t\t<button id=\"editNamedCluster\" label=\"${Oozie.JobExecutor.NamedCluster.Edit}\" onclick=\"controller.editNamedCluster()\"/>\n\t\t\t\t\t\t\t\t\t<button 
id=\"newNamedCluster\" label=\"${Oozie.JobExecutor.NamedCluster.New}\" onclick=\"controller.newNamedCluster()\"/>\n\t\t\t\t\t\t\t\t</row>\n\t\t\t\t\t\t\t</rows>\n\t\t\t\t\t\t</grid>\n                    </hbox>\n                </row>\n                <row>\n                    <hbox padding=\"5\">\n                        <label value=\"${Oozie.JobExecutor.Enable.Blocking.Label}\" />\n                    </hbox>\n                    <hbox padding=\"5\">\n                        <checkbox id=\"blockingExecution\" />\n                    </hbox>\n                </row>\n                <row>\n                    <hbox padding=\"5\">\n                        <label value=\"${Oozie.JobExecutor.Enable.Polling.Interval.Label}\" />\n                    </hbox>\n                    <hbox padding=\"5\" flex=\"1\">\n                        <textbox pen:customclass=\"variabletextbox\" id=\"blockingPollingInterval\" width=\"100\"/>\n                    </hbox>\n                </row>\n            </rows>\n        </grid>\n        <deck id=\"modeDeck\" flex=\"1\">\n            <vbox id=\"quickSetupPanel\" flex=\"1\">\n                <grid flex=\"1\">\n                    <columns>\n                        <column/>\n                        <column flex=\"1\"/>\n                    </columns>\n                    <rows>\n                        <row>\n                            <hbox padding=\"5\">\n                                <label value=\"${Oozie.JobExecutor.Workflow.Properties.Label}\"/>\n                            </hbox>\n                            <!-- Wrap with an hbox so all components align -->\n                            <hbox flex=\"1\" padding=\"5\">\n                                <textbox pen:customclass=\"variabletextbox\" id=\"oozieWorkflowConfig\" flex=\"1\"/>\n                                <button label=\"${Oozie.JobExecutor.Browse.Workflow.Config.Label}\" onclick=\"controller.browseWorkflowConfig()\"/>\n                            </hbox>\n       
                 </row>\n                    </rows>\n                </grid>\n            </vbox>\n            <vbox id=\"advancedOptionsPanel\" flex=\"1\">\n                <vbox padding=\"5\" flex=\"1\">\n                    <hbox padding=\"0\" spacing=\"0\">\n                        <label value=\"${Oozie.JobExecutor.Workflow.Properties.Label}\" />\n                        <spacer flex=\"1\"/>\n                        <hbox padding=\"2\">\n                            <button image=\"ui/images/Add.png\" onclick=\"controller.addNewProperty()\" />\n                            <button image=\"ui/images/generic-delete.png\" onclick=\"controller.removeProperty()\" />\n                        </hbox>\n                    </hbox>\n                    <vbox padding=\"0\" spacing=\"0\" flex=\"1\">\n                        <tree id=\"advanced-table\" hidecolumnpicker=\"true\" autocreatenewrows=\"false\" flex=\"1\">\n                            <treecols>\n                                <treecol id=\"name-col\" editable=\"true\" flex=\"1\" label=\"${Oozie.JobExecutor.AdvancedTable.Column.Name.Label}\" pen:binding=\"key\"/>\n                                <treecol id=\"value-col\" editable=\"true\" flex=\"1\" label=\"${Oozie.JobExecutor.AdvancedTable.Column.Value.Label}\" pen:binding=\"value\"/>\n                            </treecols>\n                            <treechildren />\n                        </tree>\n                    </vbox>\n                </vbox>\n            </vbox>\n        </deck>\n        <pen:include src=\"button-bar.xul\"/>\n    </vbox>\n</dialog>\n</window>"
  },
  {
    "path": "kettle-plugins/oozie/core/src/main/resources/org/pentaho/big/data/kettle/plugins/oozie/xul/button-bar.xul",
    "content": "<?xml version=\"1.0\"?>\n<!--\n  ~ *******************************************************************************\n  ~ Pentaho Big Data\n  ~\n  ~ Copyright (C) 2002-2012 by Pentaho : http://www.pentaho.com\n  ~ *******************************************************************************\n  ~\n  ~ Licensed under the Apache License, Version 2.0 (the \"License\");\n  ~ you may not use this file except in compliance with\n  ~ the License. You may obtain a copy of the License at\n  ~    http://www.apache.org/licenses/LICENSE-2.0\n  ~\n  ~ Unless required by applicable law or agreed to in writing, software\n  ~ distributed under the License is distributed on an \"AS IS\" BASIS,\n  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n  ~ See the License for the specific language governing permissions and\n  ~ limitations under the License.\n  ~ ******************************************************************************\n  -->\n\n<!-- A button bar that contains a Quick Setup/Advanced Options toggle, test, ok, and cancel buttons. -->\n<grid>\n  <columns>\n    <column align=\"center\"/>\n    <column flex=\"1\"/>\n    <column/>\n    <column/>\n    <column/>\n  </columns>\n  <rows>\n    <row>\n\n      <label id=\"mode-toggle-label\" value=\"${Oozie.AdvancedOptions.Button.Text}\" onclick=\"controller.toggleMode();\"/>\n\n      <hbox flex=\"1\">\n        <spacer flex=\"1\"/>\n        <button label=\"${Dialog.Test}\" onclick=\"controller.testSettings();\"/>\n        <spacer flex=\"1\"/>\n      </hbox>\n      <button label=\"${Dialog.Help}\" onclick=\"controller.help();\"/>\n      <button label=\"${Dialog.Accept}\" onclick=\"controller.accept();\"/>\n      <button label=\"${Dialog.Cancel}\" onclick=\"controller.cancel();\"/>\n    </row>\n  </rows>\n</grid>\n"
  },
  {
    "path": "kettle-plugins/oozie/core/src/test/java/org/pentaho/big/data/kettle/plugins/oozie/OozieJobExecutorConfigTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.oozie;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.ArgumentCaptor;\nimport org.mockito.Captor;\nimport org.mockito.Mock;\nimport org.mockito.MockitoAnnotations;\n\nimport java.beans.PropertyChangeEvent;\nimport java.beans.PropertyChangeListener;\n\nimport static junit.framework.Assert.assertEquals;\nimport static junit.framework.Assert.assertNull;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\n\npublic class OozieJobExecutorConfigTest {\n\n  @Mock PropertyChangeListener listener;\n  @Captor ArgumentCaptor<PropertyChangeEvent> event;\n\n  @Before\n  public void init() {\n    MockitoAnnotations.initMocks( this );\n  }\n\n  @Test\n  public void testAddPropertyChangeListener() throws Exception {\n    OozieJobExecutorConfig config = new OozieJobExecutorConfig();\n\n    // make sure it is capturing property change events\n    config.addPropertyChangeListener( listener );\n    config.setOozieWorkflow( \"workflow1.xml\" );\n\n    verify( listener, times( 1 ) ).propertyChange( any( PropertyChangeEvent.class ) );\n    verify( listener ).propertyChange( event.capture() );\n    assertEquals( config.getOozieWorkflow(), event.getValue().getNewValue() );\n\n    // remove the listener & verify that it isn't receiving events anymore\n    config.removePropertyChangeListener( listener );\n    config.setOozieWorkflow( \"workflow2.xml\" );\n    // still 1, from the previous call\n    verify( listener, times( 1 ) 
).propertyChange( any( PropertyChangeEvent.class ) );\n  }\n\n  @Test\n  public void testAddPropertyChangeListener_propertyName() throws Exception {\n    OozieJobExecutorConfig config = new OozieJobExecutorConfig();\n\n    // dummy property name, should not indicate any captured prop change\n    config.addPropertyChangeListener( \"dummy\", listener );\n    config.setOozieWorkflowConfig( \"job0.properties\" );\n\n\n    verify( listener, times( 0 ) ).propertyChange( any( PropertyChangeEvent.class ) );\n    config.removePropertyChangeListener( \"dummy\", listener );\n\n    // make sure it is capturing property change events\n    config.addPropertyChangeListener( OozieJobExecutorConfig.OOZIE_WORKFLOW_CONFIG, listener );\n    config.setOozieWorkflowConfig( \"job1.properties\" );\n\n    verify( listener, times( 1 ) ).propertyChange( any( PropertyChangeEvent.class ) );\n    verify( listener ).propertyChange( event.capture() );\n    assertEquals( config.getOozieWorkflowConfig(), event.getValue().getNewValue() );\n\n    // remove the listener & verify that it isn't receiving events anymore\n    config.removePropertyChangeListener( OozieJobExecutorConfig.OOZIE_WORKFLOW_CONFIG, listener );\n    config.setOozieWorkflowConfig( \"job2.properties\" );\n    verify( listener, times( 1 ) ).propertyChange( any( PropertyChangeEvent.class ) );\n  }\n\n  @Test\n  public void testGettersAndSetters() throws Exception {\n    OozieJobExecutorConfig config = new OozieJobExecutorConfig();\n\n    // everything should be null initially\n    assertNull( config.getOozieUrl() );\n    assertNull( config.getOozieWorkflow() );\n    assertNull( config.getOozieWorkflowConfig() );\n\n    config.setOozieUrl( \"http://localhost:11000\" );\n    assertEquals( \"http://localhost:11000\", config.getOozieUrl() );\n\n    config.setOozieWorkflow( \"hdfs://localhost:9000/user/test-user/workflowFolder\" );\n    assertEquals( \"hdfs://localhost:9000/user/test-user/workflowFolder\", config.getOozieWorkflow() );\n\n    config.setOozieWorkflowConfig( \"job.properties\" );\n    assertEquals( \"job.properties\", config.getOozieWorkflowConfig() );\n\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/oozie/core/src/test/java/org/pentaho/big/data/kettle/plugins/oozie/OozieJobExecutorControllerTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.oozie;\n\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Ignore;\nimport org.junit.Test;\nimport org.mockito.Mock;\nimport org.mockito.MockitoAnnotations;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.big.data.kettle.plugins.job.JobEntryMode;\nimport org.pentaho.big.data.kettle.plugins.job.PropertyEntry;\nimport org.pentaho.big.data.plugins.common.ui.HadoopClusterDelegateImpl;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.KettleEnvironment;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.binding.DefaultBindingFactory;\nimport org.pentaho.ui.xul.containers.XulDeck;\nimport org.pentaho.ui.xul.containers.XulTree;\nimport org.pentaho.ui.xul.impl.XulFragmentContainer;\nimport org.pentaho.ui.xul.util.AbstractModelList;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.Properties;\n\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.assertFalse;\nimport static 
org.junit.Assert.assertEquals;\nimport static org.junit.Assert.fail;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.doThrow;\n\n/**\n * User: RFellows Date: 6/4/12\n */\npublic class OozieJobExecutorControllerTest {\n\n  @Mock HadoopClusterDelegateImpl delegate;\n  @Mock NamedCluster cluster;\n  @Mock NamedCluster cluster2;\n\n  OozieJobExecutorConfig jobConfig = null;\n  OozieJobExecutorJobEntryController controller = null;\n\n  @BeforeClass\n  public static void init() throws KettleException {\n    KettleEnvironment.init();\n  }\n\n  @Before\n  public void before() throws XulException, MetaStoreException {\n    MockitoAnnotations.initMocks( this );\n\n    jobConfig = new OozieJobExecutorConfig();\n    jobConfig.setOozieWorkflow( \"hdfs://localhost:9000/user/\" + System.getProperty( \"user.name\" )\n      + \"/examples/apps/map-reduce\" );\n\n    NamedClusterService namedClusterService = mock( NamedClusterService.class );\n    when( namedClusterService.list( any() ) ).thenReturn( Arrays.asList( cluster ) );\n    OozieJobExecutorJobEntry jobEntry = new OozieJobExecutorJobEntry(\n      namedClusterService,\n      mock( RuntimeTestActionService.class ),\n      mock( RuntimeTester.class ),\n      mock( NamedClusterServiceLocator.class ) );\n    jobEntry.setParentJobMeta( new JobMeta() );\n\n    controller =\n      new OozieJobExecutorJobEntryController( new JobMeta(), new XulFragmentContainer( null ),\n        jobEntry, new DefaultBindingFactory(), delegate );\n  }\n\n  @Test\n  public void testSetModeToggleLabel_JobEntryMode() throws Exception {\n    assertEquals( controller.getModeToggleLabel(), null );\n\n    controller.setModeToggleLabel( JobEntryMode.QUICK_SETUP );\n    assertEquals( controller.getModeToggleLabel(), \"Quick Setup\" );\n\n    controller.setModeToggleLabel( JobEntryMode.ADVANCED_LIST );\n  
  assertEquals( controller.getModeToggleLabel(), \"Advanced Options\" );\n\n  }\n\n  @Test\n  public void testGetNamedClusterOnChangedDataInClusterNamedService() throws Exception {\n    NamedClusterService namedClusterService = mock( NamedClusterService.class );\n    when( namedClusterService.list( any() ) ).thenReturn( Arrays.asList( cluster ) );\n    OozieJobExecutorJobEntry jobEntry = new OozieJobExecutorJobEntry(\n      namedClusterService,\n      mock( RuntimeTestActionService.class ),\n      mock( RuntimeTester.class ),\n      mock( NamedClusterServiceLocator.class ) );\n    OozieJobExecutorJobEntryController controller =\n      new OozieJobExecutorJobEntryController( new JobMeta(), new XulFragmentContainer( null ),\n        jobEntry, new DefaultBindingFactory(),\n        delegate );\n\n    assertEquals( controller.getNamedClusters().size(), 1 );\n    assertEquals( controller.getNamedClusters().get( 0 ), cluster );\n\n    when( namedClusterService.list( any() ) ).thenReturn( Arrays.asList( cluster, cluster2 ) );\n    List<NamedCluster> namedClusters = controller.getNamedClusters();\n    assertEquals( namedClusters.size(), 2 );\n    assertEquals( namedClusters.get( 1 ), cluster2 );\n  }\n\n  @Test\n  public void testReturnEmptyCollectionOnNamedClusterServiceThrowMetaStoreException() throws Exception {\n    NamedClusterService namedClusterService = mock( NamedClusterService.class );\n    OozieJobExecutorJobEntry jobEntry = new OozieJobExecutorJobEntry(\n      namedClusterService,\n      mock( RuntimeTestActionService.class ),\n      mock( RuntimeTester.class ),\n      mock( NamedClusterServiceLocator.class ) );\n    OozieJobExecutorJobEntryController controller =\n      new OozieJobExecutorJobEntryController( new JobMeta(), new XulFragmentContainer( null ),\n        jobEntry, new DefaultBindingFactory(), delegate );\n    when( jobEntry.getNamedClusterService().list( any() ) ).thenThrow( new MetaStoreException() );\n    List<NamedCluster> namedClusters = 
controller.getNamedClusters();\n    assertEquals( namedClusters.size(), 0 );\n  }\n\n  @Test\n  public void testConfigNamedClustersChangedOnPopulateNamedClusters() throws Exception {\n    NamedClusterService namedClusterService = mock( NamedClusterService.class );\n    when( namedClusterService.list( any() ) ).thenReturn( Arrays.asList( cluster ) );\n    OozieJobExecutorJobEntry jobEntry = new OozieJobExecutorJobEntry(\n      namedClusterService,\n      mock( RuntimeTestActionService.class ),\n      mock( RuntimeTester.class ),\n      mock( NamedClusterServiceLocator.class ) );\n    OozieJobExecutorJobEntryController controller =\n      new OozieJobExecutorJobEntryController( new JobMeta(), new XulFragmentContainer( null ),\n        jobEntry, new DefaultBindingFactory(),\n        delegate );\n\n    controller.populateNamedClusters();\n    assertEquals( controller.getConfig().getNamedClusters().size(), 1 );\n\n    when( namedClusterService.list( any() ) ).thenReturn( Arrays.asList( cluster, cluster2 ) );\n    controller.populateNamedClusters();\n    assertEquals( controller.getConfig().getNamedClusters().size(), 2 );\n\n    when( namedClusterService.list( any() ) ).thenReturn( Collections.emptyList() );\n    controller.populateNamedClusters();\n    assertEquals( controller.getConfig().getNamedClusters().size(), 0 );\n  }\n\n  @Test( expected = RuntimeException.class )\n  public void testSetModeToggleLabel_UnsupportedJobEntryMode() {\n    controller.setModeToggleLabel( JobEntryMode.ADVANCED_COMMAND_LINE );\n    fail( \"JobEntryMode.ADVANCED_COMMAND_LINE is not supported, should have gotten a RuntimeException\" );\n  }\n\n  @Test\n  public void testSyncModel_quickSetupMode() throws Exception {\n    assertEquals( 0, controller.getAdvancedArguments().size() );\n\n    // set the props file, sync the model... 
should have equal amounts of elements\n    OozieJobExecutorConfig config = getGoodConfig();\n    controller.setConfig( config );\n    controller.setJobEntryMode( JobEntryMode.QUICK_SETUP );\n    assertEquals( 0, controller.getAdvancedArguments().size() );\n    controller.syncModel();\n    assertEquals( 0, controller.getAdvancedArguments().size() );\n  }\n\n  @Test\n  public void testSyncModel_advancedMode() throws Exception {\n    assertEquals( 0, controller.getAdvancedArguments().size() );\n\n    // set the props file, sync the model... should have equal amounts of elements\n    OozieJobExecutorConfig config = getGoodConfig();\n    controller.setConfig( config );\n    controller.setJobEntryMode( JobEntryMode.ADVANCED_LIST );\n    Properties props = OozieJobExecutorJobEntry.getProperties( DefaultBowl.getInstance(), config, new Variables() );\n\n    assertFalse( props.size() == controller.getAdvancedArguments().size() );\n    controller.syncModel();\n    assertEquals( props.size(), controller.getAdvancedArguments().size() );\n  }\n\n  @Test\n  public void testSyncModel_advanced_addedProp() throws Exception {\n    OozieJobExecutorConfig config = getGoodConfig();\n    controller.setConfig( config );\n    Properties props = OozieJobExecutorJobEntry.getProperties( DefaultBowl.getInstance(), config, new Variables() );\n    controller.setJobEntryMode( JobEntryMode.ADVANCED_LIST );\n\n    controller.syncModel();\n\n    controller.addNewProperty();\n    controller.syncModel();\n    assertTrue( controller.isAdvancedArgumentsChanged() );\n    assertEquals( props.size() + 1, controller.getAdvancedArguments().size() );\n  }\n\n  // ignoring for now, remove depends on the tree having selected items...\n  @Ignore\n  @Test\n  public void testSyncModel_advanced_removedProp() throws Exception {\n    OozieJobExecutorConfig config = getGoodConfig();\n\n    controller.setConfig( config );\n    Properties props = OozieJobExecutorJobEntry.getProperties( DefaultBowl.getInstance(), config, 
new Variables() );\n    controller.syncModel();\n    controller.setJobEntryMode( JobEntryMode.ADVANCED_LIST );\n\n    controller.variablesTree.setSelectedRows( new int[] { 0 } );\n    controller.removeProperty();\n    controller.syncModel();\n    assertTrue( controller.isAdvancedArgumentsChanged() );\n    assertEquals( props.size() - 1, controller.getAdvancedArguments().size() );\n  }\n\n  @Test\n  public void testSyncModel_advanced_editProp() throws Exception {\n    OozieJobExecutorConfig config = getGoodConfig();\n    controller.setConfig( config );\n    Properties props = OozieJobExecutorJobEntry.getProperties( DefaultBowl.getInstance(), config, new Variables() );\n    controller.setJobEntryMode( JobEntryMode.ADVANCED_LIST );\n\n    controller.syncModel();\n\n    String key = controller.getAdvancedArguments().get( 0 ).getKey();\n    AbstractModelList<PropertyEntry> advanced = controller.getAdvancedArguments();\n    advanced.get( 0 ).setValue( \"new value\" );\n    controller.setAdvancedArguments( advanced );\n    controller.syncModel();\n    assertEquals( props.size(), controller.getAdvancedArguments().size() );\n    Properties updatedProps = OozieJobExecutorJobEntry.getProperties( DefaultBowl.getInstance(),\n      controller.getConfig(), new Variables() );\n    assertEquals( \"new value\", updatedProps.get( key ) );\n  }\n\n  @Test\n  public void testToggleMode() throws Exception {\n    // get into advanced mode\n    TestOozieJobExecutorController ctr = new TestOozieJobExecutorController();\n    ctr.getJobEntry().setParentJobMeta( new JobMeta() );\n\n    OozieJobExecutorConfig config = getGoodConfig();\n    ctr.setConfig( config );\n    Properties props = OozieJobExecutorJobEntry.getProperties( DefaultBowl.getInstance(), config, new Variables() );\n    ctr.syncModel();\n\n    ctr.setJobEntryMode( JobEntryMode.ADVANCED_LIST );\n\n    ctr.syncModel();\n\n    String key = ctr.getAdvancedArguments().get( 0 ).getKey();\n    AbstractModelList<PropertyEntry> advanced 
= ctr.getAdvancedArguments();\n    advanced.get( 0 ).setValue( \"new value\" );\n    ctr.setAdvancedArguments( advanced );\n    ctr.syncModel();\n    assertEquals( props.size(), ctr.getAdvancedArguments().size() );\n    Properties updatedProps = OozieJobExecutorJobEntry.getProperties( DefaultBowl.getInstance(), ctr.getConfig(),\n      new Variables() );\n    assertEquals( \"new value\", updatedProps.get( key ) );\n    assertEquals( props.size(), ctr.getConfig().getWorkflowProperties().size() );\n\n    // make sure if set to QUICK_SETUP that we clear out any custom props from advanced mode\n    ctr.toggleMode();\n    assertEquals( 0, ctr.getConfig().getWorkflowProperties().size() );\n\n  }\n\n  @Test\n  public void testShouldUseAdvancedProperties_basicMode() throws Exception {\n    OozieJobExecutorConfig config = getGoodConfig();\n    controller.setConfig( config );\n    assertFalse( controller.shouldUseAdvancedProperties() );\n  }\n\n  @Test\n  public void testAddProperty_exception() {\n    AbstractModelList<PropertyEntry> argumentsMock = mock( AbstractModelList.class );\n    doThrow( RuntimeException.class ).when( argumentsMock ).add( any() );\n    controller.advancedArguments = argumentsMock;\n\n    XulTree treeMock = mock( XulTree.class );\n    controller.variablesTree = treeMock;\n\n    controller.addNewProperty();\n    verify( treeMock ).setElements( argumentsMock );\n  }\n\n  private OozieJobExecutorConfig getGoodConfig() {\n    OozieJobExecutorConfig config = new OozieJobExecutorConfig();\n    config.setOozieUrl( \"http://localhost:11000/oozie\" ); // don't worry if it isn't running, we fake out our test\n    // connection to it anyway\n    config.setOozieWorkflowConfig( \"src/test/resources/job.properties\" );\n    config.setJobEntryName( \"name\" );\n    return config;\n  }\n\n  // stub classes\n  class TestOozieJobExecutorController extends OozieJobExecutorJobEntryController {\n    @SuppressWarnings( \"unused\" )\n    private XulDeck modeDeck;\n\n    
private List<Object[]> shownErrors = new ArrayList<Object[]>();\n    private boolean infoShown = false;\n\n    TestOozieJobExecutorController() {\n      this( null );\n    }\n\n    public TestOozieJobExecutorController( XulDeck modeDeck ) {\n      super( new JobMeta(), new XulFragmentContainer( null ), new OozieJobExecutorJobEntry(\n          mock( NamedClusterService.class ),\n          mock( RuntimeTestActionService.class ),\n          mock( RuntimeTester.class ),\n          mock( NamedClusterServiceLocator.class ) ),\n        new DefaultBindingFactory(), delegate );\n\n      this.modeDeck = modeDeck;\n      syncModel();\n    }\n\n    @Override\n    protected void showErrorDialog( String title, String message ) {\n      shownErrors.add( new Object[] { title, message, null } );\n    }\n\n    @Override\n    protected void showErrorDialog( String title, String message, Throwable t ) {\n      shownErrors.add( new Object[] { title, message, t } );\n    }\n\n    @Override\n    protected void showInfoDialog( String title, String message ) {\n      infoShown = true;\n    }\n\n    public List<Object[]> getShownErrors() {\n      return shownErrors;\n    }\n\n    public boolean wasInfoShown() {\n      return infoShown;\n    }\n\n    public void setJobEntry( OozieJobExecutorJobEntry je ) {\n      jobEntry = je;\n    }\n\n    @Override\n    protected boolean showConfirmationDialog( String title, String message ) {\n      return true;\n    }\n\n    @Override\n    public void toggleMode() {\n      JobEntryMode mode =\n        ( jobEntryMode == JobEntryMode.ADVANCED_LIST ? JobEntryMode.QUICK_SETUP : JobEntryMode.ADVANCED_LIST );\n      this.setJobEntryMode( mode );\n      this.syncModel();\n    }\n  }\n\n  public class TestOozieJobExecutorJobEntry extends OozieJobExecutorJobEntry {\n    @Override\n    public List<String> getValidationWarnings( OozieJobExecutorConfig config ) {\n      return new ArrayList<String>();\n    }\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/oozie/core/src/test/java/org/pentaho/big/data/kettle/plugins/oozie/OozieJobExecutorJobEntryTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.oozie;\n\nimport static junit.framework.Assert.assertEquals;\nimport static org.hamcrest.CoreMatchers.is;\nimport static org.hamcrest.MatcherAssert.assertThat;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\nimport static org.mockito.Mockito.spy;\n\nimport java.util.ArrayList;\nimport java.util.Date;\nimport java.util.List;\nimport java.util.Properties;\n\nimport org.apache.oozie.client.OozieClient;\nimport org.apache.oozie.client.OozieClientException;\nimport org.apache.oozie.client.WorkflowAction;\nimport org.apache.oozie.client.WorkflowJob;\nimport org.junit.BeforeClass;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.InjectMocks;\nimport org.mockito.Mock;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.big.data.kettle.plugins.job.JobEntryMode;\nimport org.pentaho.big.data.kettle.plugins.job.PropertyEntry;\nimport org.pentaho.di.core.bowl.DefaultBowl;\n\nimport org.pentaho.di.core.KettleEnvironment;\n\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.job.entry.JobEntryCopy;\nimport org.pentaho.di.job.JobMeta;\nimport 
org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.w3c.dom.Document;\n\n/**\n * User: RFellows Date: 6/5/12\n */\n@RunWith(MockitoJUnitRunner.class)\npublic class OozieJobExecutorJobEntryTest {\n\n  @Mock\n  NamedClusterService namedClusterService;\n  @Mock\n  NamedCluster namedCluster;\n  @Mock\n  OozieJobExecutorConfig config;\n  @Mock\n  RuntimeTestActionService runtimeTestActionService;\n  @Mock\n  NamedClusterServiceLocator namedClusterServiceLocator;\n  @Mock\n  RuntimeTester runtimeTester;\n  @Mock\n  IMetaStore metaStore;\n  @InjectMocks\n  OozieJobExecutorJobEntry oozieJobEntry;\n\n  final String OOZIE_URL = \"http://the.url\";\n  final String CLUSTER_NAME = \"cluster name\";\n\n  @BeforeClass\n  public static void init() throws Exception {\n    KettleEnvironment.init();\n  }\n\n  @Test\n  public void testLoadXml() throws Exception {\n\n    OozieJobExecutorJobEntry jobEntry = new OozieJobExecutorJobEntry();\n    OozieJobExecutorConfig jobConfig = new OozieJobExecutorConfig();\n\n    jobConfig.setOozieWorkflow( \"hdfs://localhost:9000/user/test-user/oozie/workflow.xml\" );\n    jobConfig.setOozieWorkflowConfig( \"file:///User/test-user/oozie/job.properties\" );\n    jobConfig.setOozieUrl( \"http://localhost:11000/oozie\" );\n\n    jobEntry.setJobConfig( jobConfig );\n\n    JobEntryCopy jec = new JobEntryCopy( jobEntry );\n    jec.setLocation( 0, 0 );\n    String xml = jec.getXML();\n\n    Document d = XMLHandler.loadXMLString( xml );\n\n    OozieJobExecutorJobEntry jobEntry2 = new OozieJobExecutorJobEntry();\n    JobMeta jobMeta = new JobMeta();\n    jobMeta.setBowl( DefaultBowl.getInstance() );\n    jobEntry2.setParentJobMeta( jobMeta );\n    jobEntry2.loadXML( d.getDocumentElement(), null, null, null );\n\n    OozieJobExecutorConfig jobConfig2 = jobEntry2.getJobConfig();\n    assertEquals( jobConfig.getOozieWorkflow(), 
jobConfig2.getOozieWorkflow() );\n    assertEquals( jobConfig.getOozieWorkflowConfig(), jobConfig2.getOozieWorkflowConfig() );\n    assertEquals( jobConfig.getOozieUrl(), jobConfig2.getOozieUrl() );\n  }\n\n  @Test\n  public void testLoadXml_customProps() throws Exception {\n\n    OozieJobExecutorJobEntry jobEntry = new OozieJobExecutorJobEntry();\n    OozieJobExecutorConfig jobConfig = new OozieJobExecutorConfig();\n\n    jobConfig.setOozieWorkflow( \"hdfs://localhost:9000/user/test-user/oozie/workflow.xml\" );\n    jobConfig.setOozieWorkflowConfig( \"file:///User/test-user/oozie/job.properties\" );\n    jobConfig.setOozieUrl( \"http://localhost:11000/oozie\" );\n\n    ArrayList<PropertyEntry> props = new ArrayList<>();\n    props.add( new PropertyEntry( \"testProp\", \"testValue\" ) );\n    jobConfig.setWorkflowProperties( props );\n\n    jobEntry.setJobConfig( jobConfig );\n\n    JobEntryCopy jec = new JobEntryCopy( jobEntry );\n    jec.setLocation( 0, 0 );\n    String xml = jec.getXML();\n\n    Document d = XMLHandler.loadXMLString( xml );\n\n    OozieJobExecutorJobEntry jobEntry2 = new OozieJobExecutorJobEntry();\n    JobMeta jobMeta = new JobMeta();\n    jobMeta.setBowl( DefaultBowl.getInstance() );\n    jobEntry2.setParentJobMeta( jobMeta );\n    jobEntry2.loadXML( d.getDocumentElement(), null, null, null );\n\n    OozieJobExecutorConfig jobConfig2 = jobEntry2.getJobConfig();\n    assertEquals( jobConfig.getOozieWorkflow(), jobConfig2.getOozieWorkflow() );\n    assertEquals( jobConfig.getOozieWorkflowConfig(), jobConfig2.getOozieWorkflowConfig() );\n    assertEquals( jobConfig.getOozieUrl(), jobConfig2.getOozieUrl() );\n\n    assertNotNull( jobConfig2.getWorkflowProperties() );\n    assertEquals( \"testValue\", jobConfig2.getWorkflowProperties().get( 0 ).getValue() );\n  }\n\n  @Test\n  public void testGetValidationWarnings_emptyConfig() throws Exception {\n    OozieJobExecutorConfig config = new OozieJobExecutorConfig();\n\n    OozieJobExecutorJobEntry je = 
new OozieJobExecutorJobEntry();\n    JobMeta jobMeta = new JobMeta();\n    jobMeta.setBowl( DefaultBowl.getInstance() );\n    je.setParentJobMeta( jobMeta );\n    List<String> warnings = je.getValidationWarnings( config );\n\n    assertEquals( 2, warnings.size() );\n  }\n\n  @Test\n  public void testGetProperties() throws Exception {\n    OozieJobExecutorConfig config = new OozieJobExecutorConfig();\n    config.setOozieWorkflowConfig( \"src/test/resources/job.properties\" );\n    Properties props = OozieJobExecutorJobEntry.getProperties( DefaultBowl.getInstance(), config, new Variables() );\n\n    assertEquals( 6, props.size() );\n  }\n\n  @Test\n  public void testGetProperties_VariableizedWorkflowPath() throws Exception {\n    OozieJobExecutorConfig config = new OozieJobExecutorConfig();\n    config.setOozieWorkflowConfig( \"${propertiesFile}\" );\n    OozieJobExecutorJobEntry je = new OozieJobExecutorJobEntry();\n    JobMeta jobMeta = new JobMeta();\n    jobMeta.setBowl( DefaultBowl.getInstance() );\n    je.setParentJobMeta( jobMeta );\n    je.setVariable( \"propertiesFile\", \"src/test/resources/job.properties\" );\n\n    Properties props = je.getProperties( config );\n    assertEquals( 6, props.size() );\n  }\n\n  @Test\n  public void testGetProperties_fromAdvancedProperties() throws Exception {\n    OozieJobExecutorConfig config = new OozieJobExecutorConfig();\n\n    ArrayList<PropertyEntry> advancedProps = new ArrayList<>();\n    advancedProps.add( new PropertyEntry( \"prop1\", \"value1\" ) );\n    advancedProps.add( new PropertyEntry( \"prop2\", \"value2\" ) );\n    advancedProps.add( new PropertyEntry( \"prop3\", \"value3\" ) );\n\n    config.setOozieWorkflowConfig( \"src/test/resources/job.properties\" );\n    config.setWorkflowProperties( advancedProps );\n    config.setMode( JobEntryMode.ADVANCED_LIST );\n\n    // make sure our properties are the advanced ones, not read in from the workflow config file\n    Properties props = 
OozieJobExecutorJobEntry.getProperties( DefaultBowl.getInstance(), config, new Variables() );\n\n    assertTrue( \"Advanced properties were not used\", props.containsKey( \"prop1\" ) );\n    assertEquals( 3, props.size() );\n  }\n\n\n  @Test\n  public void getEffectiveOozieUrlFromCluster() {\n    when( config.getNamedCluster() ).thenReturn( namedCluster );\n    when( namedCluster.getOozieUrl() ).thenReturn( OOZIE_URL );\n\n    assertThat( oozieJobEntry.getEffectiveOozieUrl( config ),\n      is( OOZIE_URL ) );\n  }\n\n  @Test\n  public void oozieUrlSubstitutedInVariableSpace() {\n    OozieJobExecutorJobEntry jobEntry = getStubbedOozieJobExecutorJobEntry();\n    VariableSpace variableSpace = mock( VariableSpace.class );\n    String OOZIE_VAR = \"${oozie_url}\";\n    when( jobEntry.getVariableSpace() ).thenReturn( variableSpace );\n    when( config.getOozieUrl() ).thenReturn( OOZIE_VAR );\n    String SUBSTITUTED_URL = \"http://my.url\";\n    when( variableSpace.environmentSubstitute( OOZIE_VAR ) )\n      .thenReturn( SUBSTITUTED_URL );\n\n    assertThat( jobEntry.getEffectiveOozieUrl( config ), is( SUBSTITUTED_URL ) );\n  }\n\n  private OozieJobExecutorJobEntry getStubbedOozieJobExecutorJobEntry() {\n    OozieJobExecutorJobEntry jobEntry = spy( oozieJobEntry );\n    jobEntry.setMetaStore( metaStore );\n    jobEntry.setJobConfig( config );\n    when( config.getClusterName() ).thenReturn( CLUSTER_NAME );\n    return jobEntry;\n  }\n\n  private TestOozieClient getFailingTestOozieClient() {\n    // return status = FAILED\n    // isValidWS = true\n    // isValidProtocol = true\n    return new TestOozieClient( WorkflowJob.Status.FAILED, true, true );\n  }\n\n  private TestOozieClient getSucceedingTestOozieClient() {\n    // return status = SUCCEEDED\n    // isValidWS = true\n    // isValidProtocol = true\n    return new TestOozieClient( WorkflowJob.Status.SUCCEEDED, true, true );\n  }\n\n  private TestOozieClient getBadConfigTestOozieClient() {\n    // return status = 
SUCCEEDED\n    // isValidWS = false\n    // isValidProtocol = false\n    return new TestOozieClient( WorkflowJob.Status.SUCCEEDED, false, false );\n  }\n\n  // //////////////////////////////////////////////////////////\n  // Stub classes to help in testing.\n  // Oozie doesn't provide much in the way of interfaces,\n  // so this is our best solution\n  // //////////////////////////////////////////////////////////\n  class TestOozieClient extends OozieClient {\n    TestWorkflowJob wj = null;\n    WorkflowJob.Status returnStatus = null;\n    boolean isValidWS = true;\n    boolean isValidProtocol = true;\n\n    TestOozieClient( WorkflowJob.Status returnStatus, boolean isValidWS, boolean isValidProtocol ) {\n      this.returnStatus = returnStatus;\n      this.isValidWS = isValidWS;\n      this.isValidProtocol = isValidProtocol;\n    }\n\n    @Override\n    public synchronized void validateWSVersion() throws OozieClientException {\n      if ( isValidWS ) {\n        return;\n      }\n      throw new OozieClientException( \"Error\", new Exception( \"Not compatible\" ) );\n    }\n\n    @Override\n    public String getProtocolUrl() throws OozieClientException {\n      if ( isValidProtocol ) {\n        return \"HTTP\";\n      }\n      return null;\n    }\n\n    @Override\n    public String run( Properties conf ) throws OozieClientException {\n      wj = new TestWorkflowJob( WorkflowJob.Status.RUNNING );\n      Thread t = new Thread( new Runnable() {\n        @Override\n        public void run() {\n          // block for a second\n          try {\n            Thread.sleep( 1000 );\n            wj.setStatus( returnStatus );\n          } catch ( InterruptedException e ) {\n            //expected\n          }\n        }\n      } );\n      t.start();\n      return \"test-job-id\";\n    }\n\n    @Override\n    public String getJobLog( String jobId ) throws OozieClientException {\n      return \"nothing to log\";\n    }\n\n    @Override\n    public WorkflowJob getJobInfo( String 
jobId ) throws OozieClientException {\n      return wj;\n    }\n  }\n\n\n  class TestWorkflowJob implements WorkflowJob {\n    private Status status;\n\n    TestWorkflowJob( Status status ) {\n      this.status = status;\n    }\n\n    public void setStatus( Status status ) {\n      this.status = status;\n    }\n\n    @Override\n    public String getAppPath() {\n      return null;\n    }\n\n    @Override\n    public String getAppName() {\n      return null;\n    }\n\n    @Override\n    public String getId() {\n      return null;\n    }\n\n    @Override\n    public String getConf() {\n      return null;\n    }\n\n    @Override\n    public Status getStatus() {\n      return status;\n    }\n\n    @Override\n    public Date getLastModifiedTime() {\n      return null;\n    }\n\n    @Override\n    public Date getCreatedTime() {\n      return null;\n    }\n\n    @Override\n    public Date getStartTime() {\n      return null;\n    }\n\n    @Override\n    public Date getEndTime() {\n      return null;\n    }\n\n    @Override\n    public String getUser() {\n      return null;\n    }\n\n    @Override\n    public String getGroup() {\n      return null;\n    }\n\n    @Override\n    public String getAcl() {\n      return null;\n    }\n\n    @Override\n    public int getRun() {\n      return 0;\n    }\n\n    @Override\n    public String getConsoleUrl() {\n      return null;\n    }\n\n    @Override\n    public String getParentId() {\n      return null;\n    }\n\n    @Override\n    public List<WorkflowAction> getActions() {\n      return null;\n    }\n\n    @Override\n    public String getExternalId() {\n      return null;\n    }\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/oozie/core/src/test/resources/badJob.properties",
    "content": "#\n# *******************************************************************************\n# Pentaho Big Data\n#\n# Copyright (C) 2002-2017 by Hitachi Vantara : http://www.pentaho.com\n# *******************************************************************************\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with\n# the License. You may obtain a copy of the License at\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ******************************************************************************\n#\nnameNode=hdfs://localhost:9000\njobTracker=localhost:9001\nqueueName=default\nexamplesRoot=examples\n\n# comment this guy out so it is a bad property config\n#oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce\n\noutputDir=map-reduce\n"
  },
  {
    "path": "kettle-plugins/oozie/core/src/test/resources/job.properties",
    "content": "#\n# *******************************************************************************\n# Pentaho Big Data\n#\n# Copyright (C) 2002-2017 by Hitachi Vantara : http://www.pentaho.com\n# *******************************************************************************\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with\n# the License. You may obtain a copy of the License at\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ******************************************************************************\n#\n\nnameNode=hdfs://localhost:9000\njobTracker=localhost:9001\nqueueName=default\nexamplesRoot=examples\n\noozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce\noutputDir=map-reduce"
  },
  {
    "path": "kettle-plugins/oozie/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pentaho-big-data-kettle-plugins-oozie</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n\n  <modules>\n    <module>assemblies</module>\n    <module>core</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/pig/assemblies/plugin/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>pig-assemblies</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pdi-pig-plugin</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Pig Plugin Distribution</name>\n\n  <properties>\n    <resources.directory>${project.basedir}/src/main/resources</resources.directory>\n    <assembly.dir>${project.build.directory}/assembly</assembly.dir>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-pig-core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/pig/assemblies/plugin/src/assembly/assembly.xml",
    "content": "<assembly xmlns=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3\"\n          xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n          xsi:schemaLocation=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd\">\n  <id>zip</id>\n  <formats>\n    <format>zip</format>\n  </formats>\n\n  <baseDirectory></baseDirectory>\n\n  <fileSets>\n    <fileSet>\n      <directory>${resources.directory}</directory>\n      <outputDirectory>.</outputDirectory>\n      <filtered>true</filtered>\n    </fileSet>\n\n    <!-- the staging dir -->\n    <fileSet>\n      <directory>${assembly.dir}</directory>\n      <outputDirectory>.</outputDirectory>\n    </fileSet>\n  </fileSets>\n\n  <dependencySets>\n    <dependencySet>\n      <outputDirectory>.</outputDirectory>\n      <includes>\n        <include>pentaho:pdi-pig-core:jar</include>\n      </includes>\n      <useProjectArtifact>false</useProjectArtifact>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <outputDirectory>.</outputDirectory>\n      <useTransitiveDependencies>false</useTransitiveDependencies>\n      <useProjectArtifact>false</useProjectArtifact>\n      <includes>\n        <include>pentaho:pdi-pig-core:jar</include>\n      </includes>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <useProjectArtifact>false</useProjectArtifact>\n      <outputDirectory>lib</outputDirectory>\n      <excludes>\n        <exclude>pentaho:pdi-pig-core:*</exclude>\n      </excludes>\n      <includes>\n        <include>org.apache.pig:pig</include>\n      </includes>\n    </dependencySet>\n  </dependencySets>\n</assembly>"
  },
  {
    "path": "kettle-plugins/pig/assemblies/plugin/src/main/resources/version.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<version branch='TRUNK'>${project.version}</version>"
  },
  {
    "path": "kettle-plugins/pig/assemblies/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-pig</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pig-assemblies</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Pig Plugin Assemblies</name>\n\n  <modules>\n    <module>plugin</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/pig/core/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-pig</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pdi-pig-core</artifactId>\n  <name>PDI Pig Core</name>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n  </properties>\n\n  <!-- VERIFY THESE IMPORTS THAT WERE IN THE BUILD SECTION WHEN THE PLUGIN WAS OSGI. ARE THEY NEEDED?\n  <Import-Package>org.eclipse.swt*;resolution:=optional,org.pentaho.di.ui.xul*;resolution:=optional,org.pentaho.ui.xul*;resolution:=optional,org.pentaho.di.osgi,org.pentaho.di.core.plugins,org.pentaho.hadoop.shim.api.cluster,*</Import-Package>\n  -->\n  <build>\n    <resources>\n      <resource>\n        <directory>src/main/resources</directory>\n        <filtering>false</filtering>\n      </resource>\n      <resource>\n        <directory>src/main/resources-filtered</directory>\n        <filtering>true</filtering>\n      </resource>\n    </resources>\n  </build>\n\n\n  <dependencies>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-common-ui</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n  
    <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-ui-swt</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy-core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy</artifactId>\n      <version>${project.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho.hadoop.shims</groupId>\n      <artifactId>pentaho-hadoop-shims-common-base</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.hadoop</groupId>\n      <artifactId>hadoop-core</artifactId>\n      <version>0.20.2</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-cluster</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      
<groupId>org.pentaho</groupId>\n      <artifactId>pentaho-hadoop-shims-common-services-api</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/pig/core/src/main/java/org/pentaho/big/data/kettle/plugins/pig/JobEntryPigScriptExecutor.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.pig;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.apache.commons.lang.StringUtils;\nimport org.apache.commons.vfs2.FileObject;\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.hadoop.shim.api.HadoopClientServices;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.di.cluster.SlaveServer;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.Result;\nimport org.pentaho.di.core.ResultFile;\nimport org.pentaho.di.core.annotations.JobEntry;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.Job;\nimport org.pentaho.di.job.JobListener;\nimport org.pentaho.di.job.entry.JobEntryBase;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.hadoop.shim.api.pig.PigResult;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport 
org.pentaho.runtime.test.action.impl.RuntimeTestActionServiceImpl;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.w3c.dom.Node;\n\n\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\n/**\n * Job entry that executes a Pig script either on a hadoop cluster or locally.\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n * @version $Revision$\n */\n@JobEntry( id = \"HadoopPigScriptExecutorPlugin\", image = \"ui/images/deprecated.svg\", name = \"HadoopPigScriptExecutorPlugin.Name\",\n  description = \"HadoopPigScriptExecutorPlugin.Description\",\n  categoryDescription = \"i18n:org.pentaho.di.job:JobCategory.Category.Deprecated\",\n  i18nPackageName = \"org.pentaho.di.job.entries.pig\",\n  documentationUrl = \"https://pentaho-community.atlassian.net/wiki/display/EAI/Pig+Script+Executor\" )\npublic class JobEntryPigScriptExecutor extends JobEntryBase implements Cloneable, JobEntryInterface {\n  public static final Class<?> PKG = JobEntryPigScriptExecutor.class; // for i18n purposes, needed by Translator2!!\n\n  public static final String CLUSTER_NAME = \"cluster_name\";\n  public static final String HDFS_HOSTNAME = \"hdfs_hostname\";\n  public static final String HDFS_PORT = \"hdfs_port\";\n  public static final String JOBTRACKER_HOSTNAME = \"jobtracker_hostname\";\n  public static final String JOBTRACKER_PORT = \"jobtracker_port\";\n  public static final String SCRIPT_FILE = \"script_file\";\n  public static final String ENABLE_BLOCKING = \"enable_blocking\";\n\n  public static final String LOCAL_EXECUTION = \"local_execution\";\n  public static final String JOB_ENTRY_PIG_SCRIPT_EXECUTOR_ERROR_NO_PIG_SCRIPT_SPECIFIED =\n    \"JobEntryPigScriptExecutor.Error.NoPigScriptSpecified\";\n  public static final String JOB_ENTRY_PIG_SCRIPT_EXECUTOR_WARNING_LOCAL_EXECUTION =\n    \"JobEntryPigScriptExecutor.Warning.LocalExecution\";\n  
// $NON-NLS-1$\n  private final NamedClusterService namedClusterService;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n  /**\n   * Hostname of the job tracker\n   */\n  protected NamedCluster namedCluster;\n  /**\n   * URL to the pig script to execute\n   */\n  protected String m_scriptFile = \"\";\n  /**\n   * True if the job entry should block until the script has executed\n   */\n  protected boolean m_enableBlocking;\n  /**\n   * True if the script should execute locally, rather than on a hadoop cluster\n   */\n  protected boolean m_localExecution;\n  /**\n   * Parameters for the script\n   */\n  protected Map<String, String> m_params = new HashMap<String, String>();\n\n  public JobEntryPigScriptExecutor() {\n    this.namedClusterService = NamedClusterManager.getInstance();\n    this.runtimeTester = RuntimeTesterImpl.getInstance();\n    this.runtimeTestActionService = RuntimeTestActionServiceImpl.getInstance();\n    this.namedClusterServiceLocator = BigDataServicesHelper.getNamedClusterServiceLocator();\n  }\n\n  public JobEntryPigScriptExecutor( NamedClusterService namedClusterService,\n                                    RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester,\n                                    NamedClusterServiceLocator namedClusterServiceLocator ) {\n    this.namedClusterService = namedClusterService;\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.runtimeTester = runtimeTester;\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n  }\n\n  private void loadClusterConfig( ObjectId id_jobentry, Repository rep, Node entrynode, IMetaStore metaStore ) {\n    boolean configLoaded = false;\n    try {\n      // attempt to load from named cluster\n      String clusterName = null;\n      if ( entrynode != null ) {\n        clusterName = 
XMLHandler.getTagValue( entrynode, CLUSTER_NAME ); //$NON-NLS-1$\n      } else if ( rep != null ) {\n        clusterName = rep.getJobEntryAttributeString( id_jobentry, CLUSTER_NAME ); //$NON-NLS-1$ //$NON-NLS-2$\n      }\n\n      // load from system first, then fall back to copy stored with job (AbstractMeta)\n      if ( !StringUtils.isEmpty( clusterName ) && namedClusterService.contains( clusterName, metaStore ) ) {\n        // pull config from NamedCluster\n        namedCluster = namedClusterService.read( clusterName, metaStore );\n      }\n      if ( namedCluster != null ) {\n        configLoaded = true;\n      }\n    } catch ( Throwable t ) {\n      logDebug( t.getMessage(), t );\n    }\n\n    if ( !configLoaded ) {\n      namedCluster = namedClusterService.getClusterTemplate();\n      if ( entrynode != null ) {\n        // load default values for cluster & legacy fallback\n        namedCluster.setName( XMLHandler.getTagValue( entrynode, CLUSTER_NAME ) );\n        namedCluster.setHdfsHost( XMLHandler.getTagValue( entrynode, HDFS_HOSTNAME ) ); //$NON-NLS-1$\n        namedCluster.setHdfsPort( XMLHandler.getTagValue( entrynode, HDFS_PORT ) ); //$NON-NLS-1$\n        namedCluster.setJobTrackerHost( XMLHandler.getTagValue( entrynode, JOBTRACKER_HOSTNAME ) ); //$NON-NLS-1$\n        namedCluster.setJobTrackerPort( XMLHandler.getTagValue( entrynode, JOBTRACKER_PORT ) ); //$NON-NLS-1$\n      } else if ( rep != null ) {\n        // load default values for cluster & legacy fallback\n        try {\n          namedCluster.setName( rep.getJobEntryAttributeString( id_jobentry, CLUSTER_NAME ) );\n          namedCluster.setHdfsHost( rep.getJobEntryAttributeString( id_jobentry, HDFS_HOSTNAME ) );\n          namedCluster.setHdfsPort( rep.getJobEntryAttributeString( id_jobentry, HDFS_PORT ) ); //$NON-NLS-1$\n          namedCluster\n            .setJobTrackerHost( rep.getJobEntryAttributeString( id_jobentry, JOBTRACKER_HOSTNAME ) ); //$NON-NLS-1$\n          namedCluster\n            
.setJobTrackerPort( rep.getJobEntryAttributeString( id_jobentry, JOBTRACKER_PORT ) ); //$NON-NLS-1$\n        } catch ( KettleException ke ) {\n          logError( ke.getMessage(), ke );\n        }\n      }\n    }\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.job.entry.JobEntryBase#getXML()\n   */\n  public String getXML() {\n    StringBuffer retval = new StringBuffer();\n    retval.append( super.getXML() );\n\n    if ( namedCluster != null ) {\n      String namedClusterName = namedCluster.getName();\n      if ( !StringUtils.isEmpty( namedClusterName ) ) {\n        retval.append( \"      \" )\n          .append( XMLHandler.addTagValue( CLUSTER_NAME, namedClusterName ) ); //$NON-NLS-1$ //$NON-NLS-2$\n      }\n      retval.append( \"    \" ).append( XMLHandler.addTagValue( HDFS_HOSTNAME, namedCluster.getHdfsHost() ) );\n      retval.append( \"    \" ).append( XMLHandler.addTagValue( HDFS_PORT, namedCluster.getHdfsPort() ) );\n      retval.append( \"    \" ).append(\n        XMLHandler.addTagValue( JOBTRACKER_HOSTNAME, namedCluster.getJobTrackerHost() ) );\n      retval.append( \"    \" ).append( XMLHandler.addTagValue( JOBTRACKER_PORT, namedCluster.getJobTrackerPort() ) );\n    }\n\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( SCRIPT_FILE, m_scriptFile ) );\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( ENABLE_BLOCKING, m_enableBlocking ) );\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( LOCAL_EXECUTION, m_localExecution ) );\n\n    retval.append( \"    <script_parameters>\" ).append( Const.CR );\n    if ( m_params != null ) {\n      for ( String name : m_params.keySet() ) {\n        String value = m_params.get( name );\n        if ( !Utils.isEmpty( name ) && !Utils.isEmpty( value ) ) {\n          retval.append( \"      <parameter>\" ).append( Const.CR );\n          retval.append( \"        \" ).append( XMLHandler.addTagValue( \"name\", name ) );\n          retval.append( \"        \" ).append( 
XMLHandler.addTagValue( \"value\", value ) );\n          retval.append( \"      </parameter>\" ).append( Const.CR );\n        }\n      }\n    }\n    retval.append( \"    </script_parameters>\" ).append( Const.CR );\n\n    return retval.toString();\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.job.entry.JobEntryInterface#loadXML(org.w3c.dom.Node, java.util.List, java.util.List,\n   * org.pentaho.di.repository.Repository)\n   */\n  @Override\n  public void loadXML( Node entrynode, List<DatabaseMeta> databases, List<SlaveServer> slaveServers,\n                       Repository repository, IMetaStore metaStore ) throws KettleXMLException {\n    super.loadXML( entrynode, databases, slaveServers );\n\n    loadClusterConfig( null, rep, entrynode, metaStore );\n    setRepository( repository );\n\n    m_scriptFile = XMLHandler.getTagValue( entrynode, \"script_file\" );\n    m_enableBlocking = XMLHandler.getTagValue( entrynode, \"enable_blocking\" ).equalsIgnoreCase( \"Y\" );\n    m_localExecution = XMLHandler.getTagValue( entrynode, \"local_execution\" ).equalsIgnoreCase( \"Y\" );\n\n    // Script parameters\n    m_params = new HashMap<String, String>();\n    Node paramList = XMLHandler.getSubNode( entrynode, \"script_parameters\" );\n    if ( paramList != null ) {\n      int numParams = XMLHandler.countNodes( paramList, \"parameter\" );\n      for ( int i = 0; i < numParams; i++ ) {\n        Node paramNode = XMLHandler.getSubNodeByNr( paramList, \"parameter\", i );\n        String name = XMLHandler.getTagValue( paramNode, \"name\" );\n        String value = XMLHandler.getTagValue( paramNode, \"value\" );\n        m_params.put( name, value );\n      }\n    }\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.job.entry.JobEntryBase#loadRep(org.pentaho.di.repository.Repository,\n   * org.pentaho.di.repository.ObjectId, java.util.List, java.util.List)\n   */\n  @Override\n  public void loadRep( Repository rep, IMetaStore metaStore, ObjectId 
id_jobentry, List<DatabaseMeta> databases,\n                       List<SlaveServer> slaveServers ) throws KettleException {\n    if ( rep != null ) {\n      super.loadRep( rep, metaStore, id_jobentry, databases, slaveServers );\n\n      loadClusterConfig( id_jobentry, rep, null, metaStore );\n      setRepository( rep );\n\n      setScriptFilename( rep.getJobEntryAttributeString( id_jobentry, \"script_file\" ) );\n      setEnableBlocking( rep.getJobEntryAttributeBoolean( id_jobentry, \"enable_blocking\" ) );\n      setLocalExecution( rep.getJobEntryAttributeBoolean( id_jobentry, \"local_execution\" ) );\n\n      // Script parameters\n      m_params = new HashMap<String, String>();\n      int numParams = rep.countNrJobEntryAttributes( id_jobentry, \"param_name\" );\n      if ( numParams > 0 ) {\n        for ( int i = 0; i < numParams; i++ ) {\n          String name = rep.getJobEntryAttributeString( id_jobentry, i, \"param_name\" );\n          String value = rep.getJobEntryAttributeString( id_jobentry, i, \"param_value\" );\n          m_params.put( name, value );\n        }\n      }\n    } else {\n      throw new KettleException( \"Unable to load from a repository. 
The repository is null.\" );\n    }\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.job.entry.JobEntryBase#saveRep(org.pentaho.di.repository.Repository,\n   * org.pentaho.di.repository.ObjectId)\n   */\n  @Override\n  public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_job ) throws KettleException {\n    if ( rep != null ) {\n      super.saveRep( rep, metaStore, id_job );\n\n      if ( namedCluster != null ) {\n        String namedClusterName = namedCluster.getName();\n        if ( !StringUtils.isEmpty( namedClusterName ) ) {\n          rep.saveJobEntryAttribute( id_job, getObjectId(), \"cluster_name\", namedClusterName ); //$NON-NLS-1$\n        }\n        rep.saveJobEntryAttribute( id_job, getObjectId(), \"hdfs_hostname\", namedCluster.getHdfsHost() );\n        rep.saveJobEntryAttribute( id_job, getObjectId(), \"hdfs_port\", namedCluster.getHdfsPort() );\n        rep.saveJobEntryAttribute( id_job, getObjectId(), \"jobtracker_hostname\", namedCluster.getJobTrackerHost() );\n        rep.saveJobEntryAttribute( id_job, getObjectId(), \"jobtracker_port\", namedCluster.getJobTrackerPort() );\n      }\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"script_file\", m_scriptFile );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"enable_blocking\", m_enableBlocking );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"local_execution\", m_localExecution );\n\n      if ( m_params != null ) {\n        int i = 0;\n        for ( String name : m_params.keySet() ) {\n          String value = m_params.get( name );\n          if ( !Utils.isEmpty( name ) && !Utils.isEmpty( value ) ) {\n            rep.saveJobEntryAttribute( id_job, getObjectId(), i, \"param_name\", name );\n            rep.saveJobEntryAttribute( id_job, getObjectId(), i, \"param_value\", value );\n            i++;\n          }\n        }\n      }\n    } else {\n      throw new KettleException( \"Unable to save to a repository. 
The repository is null.\" );\n    }\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.job.entry.JobEntryBase#evaluates()\n   */\n  public boolean evaluates() {\n    return true;\n  }\n\n  /**\n   * Get whether the job entry will block until the script finishes\n   *\n   * @return true if the job entry will block until the script finishes\n   */\n  public boolean getEnableBlocking() {\n    return m_enableBlocking;\n  }\n\n  /**\n   * Set whether the job will block until the script finishes\n   *\n   * @param block true if the job entry is to block until the script finishes\n   */\n  public void setEnableBlocking( boolean block ) {\n    m_enableBlocking = block;\n  }\n\n  /**\n   * Get whether the script is to run locally rather than on a hadoop cluster\n   *\n   * @return true if the script is to run locally\n   */\n  public boolean getLocalExecution() {\n    return m_localExecution;\n  }\n\n  /**\n   * Set whether the script is to be run locally rather than on a hadoop cluster\n   *\n   * @param l true if the script is to run locally\n   */\n  public void setLocalExecution( boolean l ) {\n    m_localExecution = l;\n  }\n\n  /**\n   * Get the URL to the pig script to run\n   *\n   * @return the URL to the pig script to run\n   */\n  public String getScriptFilename() {\n    return m_scriptFile;\n  }\n\n  /**\n   * Set the URL to the pig script to run\n   *\n   * @param filename the URL to the pig script\n   */\n  public void setScriptFilename( String filename ) {\n    m_scriptFile = filename;\n  }\n\n  /**\n   * Get the values of parameters to replace in the script\n   *\n   * @return a HashMap mapping parameter names to values\n   */\n  public Map<String, String> getScriptParameters() {\n    return m_params;\n  }\n\n  /**\n   * Set the values of parameters to replace in the script\n   *\n   * @param params a HashMap mapping parameter names to values\n   */\n  public void setScriptParameters( Map<String, String> params ) {\n    m_params = params;\n  
}\n\n  public NamedCluster getNamedCluster() {\n    return namedCluster;\n  }\n\n  public void setNamedCluster( NamedCluster namedCluster ) {\n    this.namedCluster = namedCluster;\n  }\n\n  public NamedClusterService getNamedClusterService() {\n    return namedClusterService;\n  }\n\n  public RuntimeTestActionService getRuntimeTestActionService() {\n    return runtimeTestActionService;\n  }\n\n  public RuntimeTester getRuntimeTester() {\n    return runtimeTester;\n  }\n\n  /*\n           * (non-Javadoc)\n           *\n           * @see org.pentaho.di.job.entry.JobEntryInterface#execute(org.pentaho.di.core.Result, int)\n           */\n  public Result execute( final Result result, int arg1 ) throws KettleException {\n    result.setNrErrors( 0 );\n    if ( Utils.isEmpty( m_scriptFile ) ) {\n      throw new KettleException( BaseMessages.getString( PKG, JOB_ENTRY_PIG_SCRIPT_EXECUTOR_ERROR_NO_PIG_SCRIPT_SPECIFIED ) );\n    }\n    try {\n      String scriptFileS = m_scriptFile;\n      scriptFileS = environmentSubstitute( scriptFileS );\n\n      HadoopClientServices hadoopClientServices = namedClusterServiceLocator.getService( namedCluster, HadoopClientServices.class );\n\n      // transform the map type to list type which can been accepted by ParameterSubstitutionPreprocessor\n      final List<String> paramList = new ArrayList<String>();\n      if ( m_params != null ) {\n        for ( Map.Entry<String, String> entry : m_params.entrySet() ) {\n          String name = entry.getKey();\n          name = environmentSubstitute( name ); // do environment variable substitution\n          String value = entry.getValue();\n          value = environmentSubstitute( value ); // do environment variable substitution\n          paramList.add( name + \"=\" + value );\n        }\n      }\n\n      final HadoopClientServices.PigExecutionMode execMode = ( m_localExecution ? 
HadoopClientServices.PigExecutionMode.LOCAL : HadoopClientServices.PigExecutionMode.MAPREDUCE );\n\n      if ( m_enableBlocking ) {\n        PigResult pigResult = hadoopClientServices.runPig( scriptFileS, execMode, paramList, getName(), getLogChannel(), this, parentJob.getLogLevel() );\n        processScriptExecutionResult( pigResult, result );\n      } else {\n        final String finalScriptFileS = scriptFileS;\n        final Thread runThread = new Thread() {\n          public void run() {\n            PigResult pigResult =\n                    hadoopClientServices.runPig( finalScriptFileS, execMode, paramList, getName(), getLogChannel(),\n                JobEntryPigScriptExecutor.this, parentJob.getLogLevel() );\n            processScriptExecutionResult( pigResult, result );\n          }\n        };\n\n        runThread.start();\n        parentJob.addJobListener( new JobListener() {\n\n          @Override\n          public void jobStarted( Job job ) throws KettleException {\n          }\n\n          @Override\n          public void jobFinished( Job job ) throws KettleException {\n            if ( runThread.isAlive() ) {\n              logMinimal( BaseMessages.getString( PKG, \"JobEntryPigScriptExecutor.Warning.AsynctaskStillRunning\", getName(), job.getJobname() ) );\n            }\n          }\n        } );\n      }\n    } catch ( Exception ex ) {\n      ex.printStackTrace();\n      result.setStopped( true );\n      result.setNrErrors( 1 );\n      result.setResult( false );\n      logError( ex.getMessage(), ex );\n    }\n\n    return result;\n  }\n\n  protected void processScriptExecutionResult( PigResult pigResult, Result result ) {\n    int[] executionStatus = pigResult.getResult();\n    Exception pigResultException = pigResult.getException();\n    //we have several execution status\n    if ( executionStatus != null && executionStatus.length > 0 ) {\n      int countFailedJob = 0;\n      if ( executionStatus.length > 1 ) {\n        countFailedJob = 
executionStatus[ 1 ];\n      }\n      logBasic( BaseMessages.getString( PKG, \"JobEntryPigScriptExecutor.JobCompletionStatus\",\n        String.valueOf( executionStatus[ 0 ] ), String.valueOf( countFailedJob ) ) );\n\n      if ( countFailedJob > 0 ) {\n        result.setStopped( true );\n        result.setNrErrors( countFailedJob );\n        result.setResult( false );\n      }\n    } else if ( pigResultException != null ) {\n      logError( pigResultException.getMessage(), pigResultException );\n      result.setStopped( true );\n      result.setNrErrors( 1 );\n      result.setResult( false );\n    }\n    FileObject logFile = pigResult.getLogFile();\n    if ( logFile != null ) {\n      ResultFile resultFile = new ResultFile( ResultFile.FILE_TYPE_LOG, logFile, parentJob.getJobname(), getName() );\n      result.getResultFiles().put( resultFile.getFile().toString(), resultFile );\n    }\n  }\n\n  @VisibleForTesting\n  void setLog( LogChannelInterface log ) {\n    this.log = log;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/pig/core/src/main/java/org/pentaho/big/data/kettle/plugins/pig/JobEntryPigScriptExecutorDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.pig;\n\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.events.ModifyEvent;\nimport org.eclipse.swt.events.ModifyListener;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.events.ShellAdapter;\nimport org.eclipse.swt.events.ShellEvent;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Event;\nimport org.eclipse.swt.widgets.FileDialog;\nimport org.eclipse.swt.widgets.Group;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.TableItem;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.big.data.plugins.common.ui.NamedClusterWidgetImpl;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.job.entry.JobEntryDialogInterface;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.ui.core.gui.WindowProperty;\nimport 
org.pentaho.di.ui.core.widget.ColumnInfo;\nimport org.pentaho.di.ui.core.widget.TableView;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.job.dialog.JobDialog;\nimport org.pentaho.di.ui.job.entry.JobEntryDialog;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.trans.step.BaseStepDialog;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\nimport java.util.HashMap;\nimport java.util.Map;\n\n/**\n * Job entry dialog for the PigScriptExecutor, a job entry that executes a Pig script either on a Hadoop cluster or\n * locally.\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n */\n@PluginDialog( id = \"HadoopPigScriptExecutorPlugin\", image = \"PIG.svg\", pluginType = PluginDialog.PluginType.JOBENTRY,\n  documentationUrl = \"https://pentaho-community.atlassian.net/wiki/display/EAI/Pig+Script+Executor\" )\npublic class JobEntryPigScriptExecutorDialog extends JobEntryDialog implements JobEntryDialogInterface {\n\n  public static final String PIG_FILE_EXT = \".pig\";\n  private static final Class<?> PKG = JobEntryPigScriptExecutor.class;\n  private final NamedClusterService namedClusterService;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n  protected JobEntryPigScriptExecutor m_jobEntry;\n  NamedClusterWidgetImpl namedClusterWidgetImpl;\n  private Display m_display;\n  private boolean m_backupChanged;\n  private Text m_wName;\n  private TextVar m_pigScriptText;\n  private Button m_pigScriptBrowseBut;\n  private Button m_enableBlockingBut;\n  private Button m_localExecutionBut;\n  private TableView m_scriptParams;\n  private boolean m_isMapR = false;\n\n  /**\n   * Constructor.\n   *\n   * @param parent      parent shell\n   * @param jobEntryInt the job entry that this dialog edits\n   * @param rep         a repository\n   * @param jobMeta     job meta data\n   */\n  
public JobEntryPigScriptExecutorDialog(\n    Shell parent, JobEntryInterface jobEntryInt, Repository rep, JobMeta jobMeta ) {\n    super( parent, jobEntryInt, rep, jobMeta );\n    m_jobEntry = (JobEntryPigScriptExecutor) jobEntryInt;\n    namedClusterService = m_jobEntry.getNamedClusterService();\n    runtimeTestActionService = m_jobEntry.getRuntimeTestActionService();\n    runtimeTester = m_jobEntry.getRuntimeTester();\n  }\n\n  public JobEntryInterface open() {\n\n    Shell parent = getParent();\n    m_display = parent.getDisplay();\n\n    shell = new Shell( parent, props.getJobsDialogStyle() );\n    props.setLook( shell );\n    JobDialog.setShellImage( shell, m_jobEntry );\n\n    ModifyListener lsMod = new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_jobEntry.setChanged();\n      }\n    };\n\n    m_backupChanged = m_jobEntry.hasChanged();\n\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = Const.FORM_MARGIN;\n    formLayout.marginHeight = Const.FORM_MARGIN;\n\n    shell.setLayout( formLayout );\n    shell.setText( \"Pig script executor\" );\n\n    int middle = props.getMiddlePct();\n    int margin = Const.MARGIN;\n\n    // Name line\n    Label nameLineL = new Label( shell, SWT.RIGHT );\n    nameLineL.setText( BaseMessages.getString( PKG, \"JobEntryDialog.Title\" ) );\n    props.setLook( nameLineL );\n    FormData fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( middle, -margin );\n    nameLineL.setLayoutData( fd );\n\n    m_wName = new Text( shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( m_wName );\n    m_wName.addModifyListener( lsMod );\n    fd = new FormData();\n    fd.top = new FormAttachment( 0, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    m_wName.setLayoutData( fd );\n\n    // named config line\n    Label namedClusterLabel = new 
Label( shell, SWT.RIGHT );\n    props.setLook( namedClusterLabel );\n    namedClusterLabel.setText( BaseMessages.getString( PKG, \"JobEntryPigScriptExecutor.NamedCluster.Label\" ) );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_wName, 10 );\n    fd.right = new FormAttachment( middle, -margin );\n    namedClusterLabel.setLayoutData( fd );\n\n    namedClusterWidgetImpl = new NamedClusterWidgetImpl( shell, false, namedClusterService, runtimeTestActionService,\n      runtimeTester, false );\n    namedClusterWidgetImpl.initiate();\n    props.setLook( namedClusterWidgetImpl );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( m_wName, margin );\n    fd.left = new FormAttachment( middle, 0 );\n    namedClusterWidgetImpl.setLayoutData( fd );\n\n    // script file line\n    Label scriptFileLab = new Label( shell, SWT.RIGHT );\n    props.setLook( scriptFileLab );\n    scriptFileLab.setText( BaseMessages.getString( PKG, \"JobEntryPigScriptExecutor.PigScript.Label\" ) );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( namedClusterWidgetImpl, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    scriptFileLab.setLayoutData( fd );\n\n    m_pigScriptBrowseBut = new Button( shell, SWT.PUSH | SWT.CENTER );\n    props.setLook( m_pigScriptBrowseBut );\n    m_pigScriptBrowseBut.setText( BaseMessages.getString( PKG, \"System.Button.Browse\" ) );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( namedClusterWidgetImpl, 0 );\n    m_pigScriptBrowseBut.setLayoutData( fd );\n    m_pigScriptBrowseBut.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        openDialog();\n      }\n    } );\n\n    m_pigScriptText = new TextVar( jobMeta, shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( 
m_pigScriptText );\n    m_pigScriptText.addModifyListener( lsMod );\n    m_pigScriptText.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_pigScriptText.setToolTipText( jobMeta.environmentSubstitute( m_pigScriptText.getText() ) );\n      }\n    } );\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( namedClusterWidgetImpl, margin );\n    fd.right = new FormAttachment( m_pigScriptBrowseBut, -margin );\n    m_pigScriptText.setLayoutData( fd );\n\n    // blocking line\n    Label enableBlockingLab = new Label( shell, SWT.RIGHT );\n    props.setLook( enableBlockingLab );\n    enableBlockingLab.setText( BaseMessages.getString( PKG, \"JobEntryPigScriptExecutor.EnableBlocking.Label\" ) );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_pigScriptText, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    enableBlockingLab.setLayoutData( fd );\n\n    m_enableBlockingBut = new Button( shell, SWT.CHECK );\n    props.setLook( m_enableBlockingBut );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_pigScriptText, margin );\n    m_enableBlockingBut.setLayoutData( fd );\n    m_enableBlockingBut.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        m_jobEntry.setChanged();\n      }\n    } );\n\n    // local execution line\n    Label localExecutionLab = new Label( shell, SWT.RIGHT );\n    props.setLook( localExecutionLab );\n    localExecutionLab.setText( BaseMessages.getString( PKG, \"JobEntryPigScriptExecutor.LocalExecution.Label\" ) );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_enableBlockingBut, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    
localExecutionLab.setLayoutData( fd );\n\n    m_localExecutionBut = new Button( shell, SWT.CHECK );\n    props.setLook( m_localExecutionBut );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_enableBlockingBut, margin );\n    m_localExecutionBut.setLayoutData( fd );\n    m_localExecutionBut.addSelectionListener( new SelectionAdapter() {\n      public void widgetSelected( SelectionEvent e ) {\n        m_jobEntry.setChanged();\n        setEnabledStatus();\n      }\n    } );\n    if ( m_isMapR ) {\n      m_localExecutionBut.setEnabled( false );\n      m_localExecutionBut.setSelection( false );\n      m_localExecutionBut.setToolTipText( BaseMessages.getString( PKG,\n        \"JobEntryPigScriptExecutor.Warning.MapRLocalExecution\" ) );\n      localExecutionLab.setToolTipText( m_localExecutionBut.getToolTipText() );\n    }\n\n    // script parameters -----------------\n    Group paramsGroup = new Group( shell, SWT.SHADOW_ETCHED_IN );\n    paramsGroup.setText( BaseMessages.getString( PKG, \"JobEntryPigScriptExecutor.ScriptParameters.Label\" ) );\n    FormLayout paramsLayout = new FormLayout();\n    paramsGroup.setLayout( paramsLayout );\n    props.setLook( paramsGroup );\n\n    fd = new FormData();\n    fd.top = new FormAttachment( m_localExecutionBut, margin );\n    fd.right = new FormAttachment( 100, -margin );\n    fd.left = new FormAttachment( 0, 0 );\n    fd.bottom = new FormAttachment( 100, -margin * 10 );\n    paramsGroup.setLayoutData( fd );\n\n    ColumnInfo[] colinf =\n      new ColumnInfo[] {\n        new ColumnInfo(\n            BaseMessages.getString( PKG, \"JobEntryPigScriptExecutor.ScriptParameters.ParamterName.Label\" ),\n            ColumnInfo.COLUMN_TYPE_TEXT, false ),\n        new ColumnInfo( BaseMessages\n          .getString( PKG, \"JobEntryPigScriptExecutor.ScriptParameters.ParamterValue.Label\" ),\n            ColumnInfo.COLUMN_TYPE_TEXT, false 
) };\n\n    m_scriptParams = new TableView( jobMeta, paramsGroup, SWT.FULL_SELECTION | SWT.MULTI, colinf, 1, lsMod, props );\n\n    fd = new FormData();\n    fd.top = new FormAttachment( 0, margin );\n    fd.right = new FormAttachment( 100, -margin );\n    fd.left = new FormAttachment( 0, 0 );\n    fd.bottom = new FormAttachment( 100, -margin );\n    m_scriptParams.setLayoutData( fd );\n\n    // ---- buttons ------------------------\n    Button wOK = new Button( shell, SWT.PUSH );\n    wOK.setText( BaseMessages.getString( PKG, \"System.Button.OK\" ) );\n    Button wCancel = new Button( shell, SWT.PUSH );\n    wCancel.setText( BaseMessages.getString( PKG, \"System.Button.Cancel\" ) );\n\n    BaseStepDialog.positionBottomButtons( shell, new Button[] { wOK, wCancel }, margin, paramsGroup );\n\n    // Add listeners\n    Listener lsCancel = new Listener() {\n      public void handleEvent( Event e ) {\n        cancel();\n      }\n    };\n    Listener lsOK = new Listener() {\n      public void handleEvent( Event e ) {\n        ok();\n      }\n    };\n\n    wOK.addListener( SWT.Selection, lsOK );\n    wCancel.addListener( SWT.Selection, lsCancel );\n\n    SelectionAdapter lsDef = new SelectionAdapter() {\n      public void widgetDefaultSelected( SelectionEvent e ) {\n        ok();\n      }\n    };\n    m_wName.addSelectionListener( lsDef );\n\n    // Detect [X] or ALT-F4 or something that kills this window...\n    shell.addShellListener( new ShellAdapter() {\n      public void shellClosed( ShellEvent e ) {\n        // cancel();\n      }\n    } );\n\n    getData();\n\n    BaseStepDialog.setSize( shell );\n\n    shell.open();\n    props.setDialogSize( shell, \"JobTransDialogSize\" );\n    while ( !shell.isDisposed() ) {\n      if ( !m_display.readAndDispatch() ) {\n        m_display.sleep();\n      }\n    }\n\n    return m_jobEntry;\n  }\n\n  private void cancel() {\n    m_jobEntry.setChanged( m_backupChanged );\n\n    m_jobEntry = null;\n    dispose();\n  }\n\n  /**\n   * 
Dispose this dialog\n   */\n  public void dispose() {\n    WindowProperty winprop = new WindowProperty( shell );\n    props.setScreen( winprop );\n    shell.dispose();\n  }\n\n  protected void setEnabledStatus() {\n    if ( m_isMapR ) {\n      m_localExecutionBut.setEnabled( false );\n      m_localExecutionBut.setSelection( false );\n    }\n\n    boolean local = m_localExecutionBut.getSelection();\n\n    namedClusterWidgetImpl.setEnabled( !local );\n  }\n\n  protected void openDialog() {\n    FileDialog openDialog = new FileDialog( shell, SWT.OPEN );\n    openDialog.setFilterExtensions( new String[] { \"*\" + PIG_FILE_EXT, \"*\" } );\n    openDialog.setFilterNames( new String[] { \"Pig script files\", \"All files\" } );\n\n    // String prevName = jobMeta.environmentSubstitute(m_pigScriptText.getText());\n    String parentFolder = null;\n\n    Spoon spoon = Spoon.getInstance();\n    try {\n      parentFolder =\n        KettleVFS.getFilename( KettleVFS.getInstance( spoon.getExecutionBowl() )\n          .getFileObject( jobMeta.environmentSubstitute( jobMeta.getFilename() ) ) );\n\n      if ( !Const.isEmpty( parentFolder ) ) {\n        openDialog.setFileName( parentFolder );\n      }\n    } catch ( Exception ex ) {\n      // Ignore for now, should log this!\n    }\n\n    if ( openDialog.open() != null ) {\n      m_pigScriptText.setText( openDialog.getFilterPath() + System.getProperty( \"file.separator\" )\n        + openDialog.getFileName() );\n    }\n  }\n\n  protected void getData() {\n    m_wName.setText( Const.NVL( m_jobEntry.getName(), \"\" ) );\n\n    // need setSelectItem\n    NamedCluster namedCluster = m_jobEntry.getNamedCluster();\n    String namedClusterName = null;\n    if ( namedCluster != null ) {\n      namedClusterName = namedCluster.getName();\n    }\n    namedClusterWidgetImpl.setSelectedNamedCluster( namedClusterName == null ? 
\"\" : namedClusterName );\n\n    m_pigScriptText.setText( Const.NVL( m_jobEntry.getScriptFilename(), \"\" ) );\n    m_enableBlockingBut.setSelection( m_jobEntry.getEnableBlocking() );\n    m_localExecutionBut.setSelection( m_jobEntry.getLocalExecution() );\n\n    Map<String, String> params = m_jobEntry.getScriptParameters();\n    if ( params.size() > 0 ) {\n      for ( String name : params.keySet() ) {\n        String value = params.get( name );\n        TableItem item = new TableItem( m_scriptParams.table, SWT.NONE );\n        item.setText( 1, name );\n        item.setText( 2, value );\n      }\n    }\n\n    m_scriptParams.removeEmptyRows();\n    m_scriptParams.setRowNums();\n    m_scriptParams.optWidth( true );\n\n    setEnabledStatus();\n  }\n\n  protected void ok() {\n    if ( Const.isEmpty( m_wName.getText() ) ) {\n      MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n      mb.setText( BaseMessages.getString( PKG, \"System.StepJobEntryNameMissing.Title\" ) );\n      mb.setMessage( BaseMessages.getString( PKG, \"System.JobEntryNameMissing.Msg\" ) );\n      mb.open();\n      return;\n    }\n\n    m_jobEntry.setName( m_wName.getText() );\n\n    NamedCluster nc = namedClusterWidgetImpl.getSelectedNamedCluster();\n    if ( nc != null ) {\n      m_jobEntry.setNamedCluster( nc );\n    } else {\n      MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n      mb.setText( BaseMessages.getString( PKG, \"Dialog.Error\" ) );\n      mb.setMessage( BaseMessages.getString( PKG, \"JobEntryPigScriptExecutor.NamedClusterMissing.Msg\" ) );\n      mb.open();\n      return;\n    }\n\n    m_jobEntry.setScriptFilename( m_pigScriptText.getText() );\n    m_jobEntry.setEnableBlocking( m_enableBlockingBut.getSelection() );\n    m_jobEntry.setLocalExecution( m_localExecutionBut.getSelection() );\n\n    int numNonEmpty = m_scriptParams.nrNonEmpty();\n    HashMap<String, String> params = new HashMap<String, String>();\n    if ( numNonEmpty > 0 ) {\n      
for ( int i = 0; i < numNonEmpty; i++ ) {\n        TableItem item = m_scriptParams.getNonEmpty( i );\n        String name = item.getText( 1 ).trim();\n        String value = item.getText( 2 ).trim();\n\n        params.put( name, value );\n      }\n    }\n\n    m_jobEntry.setScriptParameters( params );\n\n    m_jobEntry.setChanged();\n    dispose();\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/pig/core/src/main/resources/org/pentaho/big/data/kettle/plugins/pig/messages/messages_en_US.properties",
    "content": "\nHadoopPigScriptExecutorPlugin.Name=Pig script executor\nHadoopPigScriptExecutorPlugin.Description=Execute Pig Scripts in Hadoop \n\nJobEntryDialog.Title=Job Entry Name\nJobEntry.Name.Label=Name:\n\nJobEntryPigScriptExecutor.FailedToOpenLogFile=Unable to open file appender for file [{0}], {1}\nJobEntryPigScriptExecutor.Error.NoHDFSHostSpecified=No HDFS host specified!\nJobEntryPigScriptExecutor.Error.NoJobTrackerHostSpecified=No job tracker host specified!\nJobEntryPigScriptExecutor.Error.NoPigScriptSpecified=No Pig script specified!\n\nJobEntryPigScriptExecutor.JobCompletionStatus=Num successful jobs: {0} num failed jobs: {1}\n\nJobEntryPigScriptExecutor.NamedCluster.Label=Hadoop Cluster\nJobEntryPigScriptExecutor.HDFSHostname.Label=HDFS hostname\nJobEntryPigScriptExecutor.HDFSPort.Label=HDFS port\nJobEntryPigScriptExecutor.JobtrackerHostname.Label=Job tracker hostname\nJobEntryPigScriptExecutor.JobtrackerPort.Label=Job tracker port\nJobEntryPigScriptExecutor.PigScript.Label=Pig script\nJobEntryPigScriptExecutor.EnableBlocking.Label=Enable blocking\nJobEntryPigScriptExecutor.LocalExecution.Label=Local execution\nJobEntryPigScriptExecutor.ScriptParameters.Label=Script parameters\nJobEntryPigScriptExecutor.ScriptParameters.ParamterName.Label=Parameter name\nJobEntryPigScriptExecutor.ScriptParameters.ParamterValue.Label=Value\n\nJobEntryPigScriptExecutor.Warning.LocalExecution=Local execution is not supported for this Hadoop configuration\nJobEntryPigScriptExecutor.Warning.AsynctaskStillRunning={0} in {1} has been started asynchronously. {1} has been finished and logs from {0} can be lost\n\nDialog.Accept=OK\nDialog.Cancel=Cancel\nDialog.Error=Error\nDialog.Help=Help\n\nJobEntryPigScriptExecutor.NamedClusterMissing.Msg=You must select a Hadoop cluster.\nJobEntryPigScriptExecutor.NamedClusterMissingValues.Msg=The selected Hadoop cluster is missing required values."
  },
  {
    "path": "kettle-plugins/pig/core/src/test/java/org/pentaho/big/data/kettle/plugins/pig/JobEntryPigScriptExecutorTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.pig;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.hadoop.shim.api.HadoopClientServices;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.cluster.ClusterInitializationException;\nimport org.pentaho.big.data.impl.cluster.NamedClusterImpl;\n\nimport org.pentaho.di.core.Result;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.logging.LogLevel;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.Job;\nimport org.pentaho.di.job.entry.loadSave.LoadSaveTester;\nimport org.pentaho.di.trans.steps.loadsave.validator.FieldLoadSaveValidator;\nimport org.pentaho.di.trans.steps.loadsave.validator.MapLoadSaveValidator;\nimport org.pentaho.di.trans.steps.loadsave.validator.StringLoadSaveValidator;\nimport org.pentaho.hadoop.shim.api.pig.PigResult;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.Mockito.mock;\nimport static 
org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 7/10/15.\n */\npublic class JobEntryPigScriptExecutorTest {\n  private NamedClusterService namedClusterService;\n  private RuntimeTestActionService runtimeTestActionService;\n  private NamedClusterServiceLocator namedClusterServiceLocator;\n\n  private HadoopClientServices hadoopClientServices;\n  private JobEntryPigScriptExecutor jobEntryPigScriptExecutor;\n  private NamedCluster namedCluster;\n  private String jobEntryName;\n  private String namedClusterName;\n  private String namedClusterHdfsHost;\n  private String namedClusterHdfsPort;\n  private String namedClusterJobTrackerPort;\n  private String namedClusterJobTrackerHost;\n  private RuntimeTester runtimeTester;\n  private Result result;\n  private LogChannelInterface logChannelInterface;\n  private PigResult pigResult;\n  private Job job;\n\n  @Before\n  public void setup() throws ClusterInitializationException {\n    namedClusterService = mock( NamedClusterService.class );\n    when( namedClusterService.getClusterTemplate() ).thenReturn( new NamedClusterImpl() );\n    namedClusterServiceLocator = mock( NamedClusterServiceLocator.class );\n    pigResult = mock( PigResult.class );\n    runtimeTester = mock( RuntimeTester.class );\n    runtimeTestActionService = mock( RuntimeTestActionService.class );\n    result = mock( Result.class );\n    job = mock( Job.class );\n    logChannelInterface = mock( LogChannelInterface.class );\n    jobEntryPigScriptExecutor =\n      new JobEntryPigScriptExecutor( namedClusterService, runtimeTestActionService, runtimeTester,\n        namedClusterServiceLocator );\n    jobEntryPigScriptExecutor.setScriptFilename(\n      getClass().getClassLoader().getResource( \"org/pentaho/big/data/kettle/plugins/pig/pig.script\" ).toString() );\n    jobEntryPigScriptExecutor.setParentJob( job );\n    jobEntryPigScriptExecutor.setLog( logChannelInterface );\n\n    jobEntryName = 
\"jobEntryName\";\n    namedClusterName = \"namedClusterName\";\n    namedClusterHdfsHost = \"namedClusterHdfsHost\";\n    namedClusterHdfsPort = \"namedClusterHdfsPort\";\n    namedClusterJobTrackerHost = \"namedClusterJobTrackerHost\";\n    namedClusterJobTrackerPort = \"namedClusterJobTrackerPort\";\n\n    hadoopClientServices = mock( HadoopClientServices.class );\n    namedCluster = mock( NamedCluster.class );\n    when( namedClusterServiceLocator.getService( namedCluster, HadoopClientServices.class ) ).thenReturn( hadoopClientServices );\n    when( namedCluster.getName() ).thenReturn( namedClusterName );\n    when( namedCluster.getHdfsHost() ).thenReturn( namedClusterHdfsHost );\n    when( namedCluster.getHdfsPort() ).thenReturn( namedClusterHdfsPort );\n    when( namedCluster.getJobTrackerHost() ).thenReturn( namedClusterJobTrackerHost );\n    when( namedCluster.getJobTrackerPort() ).thenReturn( namedClusterJobTrackerPort );\n    jobEntryPigScriptExecutor.setNamedCluster( namedCluster );\n  }\n\n  @Test\n  public void testLoadSave() throws KettleException {\n    List<String> commonAttributes = new ArrayList<>();\n    commonAttributes.add( \"namedCluster\" );\n    commonAttributes.add( \"enableBlocking\" );\n    commonAttributes.add( \"localExecution\" );\n    commonAttributes.add( \"scriptFilename\" );\n    commonAttributes.add( \"scriptParameters\" );\n\n    Map<String, FieldLoadSaveValidator<?>> fieldLoadSaveValidatorTypeMap = new HashMap<>();\n    fieldLoadSaveValidatorTypeMap.put( NamedCluster.class.getCanonicalName(), new PigNamedClusterValidator() );\n\n    Map<String, FieldLoadSaveValidator<?>> fieldLoadSaveValidatorAttributeMap = new HashMap<>();\n    fieldLoadSaveValidatorAttributeMap.put( \"scriptParameters\",\n      new MapLoadSaveValidator<>( new StringLoadSaveValidator(), new StringLoadSaveValidator() ) );\n\n    LoadSaveTester<JobEntryPigScriptExecutor> jobEntryPigScriptExecutorLoadSaveTester =\n      new 
LoadSaveTester<JobEntryPigScriptExecutor>( JobEntryPigScriptExecutor.class, commonAttributes,\n        new HashMap<String, String>(), new HashMap<String, String>(), fieldLoadSaveValidatorAttributeMap,\n        fieldLoadSaveValidatorTypeMap ) {\n        @Override public JobEntryPigScriptExecutor createMeta() {\n          return new JobEntryPigScriptExecutor( namedClusterService, runtimeTestActionService, runtimeTester,\n            namedClusterServiceLocator );\n        }\n      };\n\n    jobEntryPigScriptExecutorLoadSaveTester.testSerialization();\n  }\n\n  @Test\n  public void testGetNamedClusterService() {\n    assertEquals( namedClusterService, jobEntryPigScriptExecutor.getNamedClusterService() );\n  }\n\n  @Test\n  public void testGetRuntimeTestActionService() {\n    assertEquals( runtimeTestActionService, jobEntryPigScriptExecutor.getRuntimeTestActionService() );\n  }\n\n  @Test\n  public void testGetRuntimeTester() {\n    assertEquals( runtimeTester, jobEntryPigScriptExecutor.getRuntimeTester() );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testExecuteNoScriptFile() throws KettleException {\n    jobEntryPigScriptExecutor.setScriptFilename( \"\" );\n    try {\n      jobEntryPigScriptExecutor.execute( result, 0 );\n    } catch ( KettleException e ) {\n      assertEquals( BaseMessages.getString( JobEntryPigScriptExecutor.PKG,\n        JobEntryPigScriptExecutor.JOB_ENTRY_PIG_SCRIPT_EXECUTOR_ERROR_NO_PIG_SCRIPT_SPECIFIED ), e.getSuperMessage() );\n      throw e;\n    }\n  }\n\n  @Test\n  public void testNonzeroBlocking() throws KettleException {\n    jobEntryPigScriptExecutor.setEnableBlocking( true );\n    when( pigResult.getResult() ).thenReturn( new int[] { 0, 10 } );\n    when( hadoopClientServices\n      .runPig( eq( jobEntryPigScriptExecutor.getScriptFilename() ), eq( HadoopClientServices.PigExecutionMode.MAPREDUCE ),\n        eq( new ArrayList<String>() ), eq( (String) null ), eq( logChannelInterface ), eq( jobEntryPigScriptExecutor 
),\n        eq( (LogLevel) null ) ) )\n      .thenReturn( pigResult );\n    assertEquals( result, jobEntryPigScriptExecutor.execute( result, 0 ) );\n    verify( result ).setStopped( true );\n    verify( result ).setNrErrors( 10 );\n    verify( result ).setResult( false );\n  }\n\n  @Test\n  public void testGettingFailedStatusIfExecutionException() {\n    pigResult = mock( PigResult.class );\n    result = new Result(  );\n    when( pigResult.getResult() ).thenReturn( null );\n    when( pigResult.getException() ).thenReturn( new Exception(  ) );\n    jobEntryPigScriptExecutor.processScriptExecutionResult( pigResult, result );\n    assertFalse( result.getResult() );\n    assertTrue( result.isStopped() );\n    assertEquals( 1L, result.getNrErrors() );\n  }\n\n  @Test\n  public void testGettingFailedStatusIfNrErrors() {\n    pigResult = mock( PigResult.class );\n    result = new Result(  );\n    when( pigResult.getResult() ).thenReturn( new int[] { 0, 1} ).thenReturn( null );\n    when( pigResult.getException() ).thenReturn( null );\n    jobEntryPigScriptExecutor.processScriptExecutionResult(  pigResult, result );\n    assertFalse( result.getResult() );\n    assertTrue( result.isStopped() );\n    assertEquals( 1L, result.getNrErrors() );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/pig/core/src/test/java/org/pentaho/big/data/kettle/plugins/pig/PigNamedClusterValidator.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.pig;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.impl.cluster.NamedClusterImpl;\nimport org.pentaho.di.trans.steps.loadsave.validator.FieldLoadSaveValidator;\nimport org.pentaho.di.trans.steps.loadsave.validator.StringLoadSaveValidator;\n\n/**\n * Created by bryan on 10/19/15.\n */\npublic class PigNamedClusterValidator implements FieldLoadSaveValidator<NamedCluster> {\n  private final StringLoadSaveValidator stringLoadSaveValidator = new StringLoadSaveValidator();\n\n  @Override public NamedCluster getTestObject() {\n    NamedClusterImpl namedCluster = new NamedClusterImpl();\n    namedCluster.setHdfsHost( stringLoadSaveValidator.getTestObject() );\n    namedCluster.setHdfsPort( stringLoadSaveValidator.getTestObject() );\n    namedCluster.setJobTrackerHost( stringLoadSaveValidator.getTestObject() );\n    namedCluster.setJobTrackerPort( stringLoadSaveValidator.getTestObject() );\n    return namedCluster;\n  }\n\n  /**\n   * Only cares about hdfs host, port, jobtracker host, port\n   *\n   * @param namedCluster\n   * @param o\n   * @return\n   */\n  @Override public boolean validateTestObject( NamedCluster namedCluster, Object o ) {\n    if ( o instanceof NamedCluster ) {\n      NamedCluster namedCluster2 = (NamedCluster) o;\n      return stringLoadSaveValidator.validateTestObject( namedCluster.getHdfsHost(), namedCluster2.getHdfsHost() )\n        && stringLoadSaveValidator.validateTestObject( namedCluster.getHdfsPort(), namedCluster2.getHdfsPort() )\n        && 
stringLoadSaveValidator.validateTestObject( namedCluster.getJobTrackerHost(),\n        namedCluster2.getJobTrackerHost() )\n        && stringLoadSaveValidator.validateTestObject( namedCluster.getJobTrackerPort(),\n        namedCluster2.getJobTrackerPort() );\n    }\n    return false;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/pig/core/src/test/resources/org/pentaho/big/data/kettle/plugins/pig/pig.script",
    "content": "testPigScript"
  },
  {
    "path": "kettle-plugins/pig/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pentaho-big-data-kettle-plugins-pig</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n\n  <modules>\n    <module>assemblies</module>\n    <module>core</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-parent</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <modules>\n    <module>common</module>\n    <module>hdfs</module>\n    <module>mapreduce</module>\n    <module>pig</module>\n    <module>guiTestActionHandlers</module>\n    <module>oozie</module>\n    <module>hbase-meta</module>\n    <module>hbase</module>\n    <module>sqoop</module>\n    <module>hive</module>\n    <module>spark</module>\n    <module>formats-meta</module>\n    <module>formats</module>\n    <module>hadoop-cluster</module>\n    <module>browse</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/spark/README.md",
    "content": "# pdi-spark-plugin - Pentaho Spark plugin\nThe \"Spark submit\" job entry allows submitting Spark jobs from PDI jobs.\n\n## Building\nThe project is built with Maven; run the following command to compile and package it:\n\n\tmvn package"
  },
  {
    "path": "kettle-plugins/spark/assemblies/plugin/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>spark-assemblies</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pdi-spark-plugin</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Spark Plugin Distribution</name>\n\n  <properties>\n    <resources.directory>${project.basedir}/src/main/resources</resources.directory>\n    <assembly.dir>${project.build.directory}/assembly</assembly.dir>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-spark-core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/spark/assemblies/plugin/src/assembly/assembly.xml",
    "content": "<assembly xmlns=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3\"\n          xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n          xsi:schemaLocation=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd\">\n  <id>zip</id>\n  <formats>\n    <format>zip</format>\n  </formats>\n\n  <baseDirectory></baseDirectory>\n\n  <fileSets>\n    <fileSet>\n      <directory>${resources.directory}</directory>\n      <outputDirectory>.</outputDirectory>\n      <filtered>true</filtered>\n    </fileSet>\n\n    <!-- the staging dir -->\n    <fileSet>\n      <directory>${assembly.dir}</directory>\n      <outputDirectory>.</outputDirectory>\n    </fileSet>\n  </fileSets>\n\n  <dependencySets>\n    <dependencySet>\n      <outputDirectory>.</outputDirectory>\n      <includes>\n        <include>pentaho:pdi-spark-core:jar</include>\n      </includes>\n      <useProjectArtifact>false</useProjectArtifact>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <outputDirectory>.</outputDirectory>\n      <useTransitiveDependencies>false</useTransitiveDependencies>\n      <useProjectArtifact>false</useProjectArtifact>\n      <includes>\n        <include>pentaho:pdi-spark-core:jar</include>\n      </includes>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <useProjectArtifact>false</useProjectArtifact>\n      <outputDirectory>lib</outputDirectory>\n      <excludes>\n        <exclude>pentaho:pdi-spark-core:*</exclude>\n      </excludes>\n    </dependencySet>\n  </dependencySets>\n</assembly>"
  },
  {
    "path": "kettle-plugins/spark/assemblies/plugin/src/main/resources/version.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<version branch='TRUNK'>${project.version}</version>"
  },
  {
    "path": "kettle-plugins/spark/assemblies/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-spark</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>spark-assemblies</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Spark Plugin Assemblies</name>\n\n  <modules>\n    <module>plugin</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/spark/core/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"\n         xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-spark</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pdi-spark-core</artifactId>\n  <name>PDI Spark Core</name>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n    <dependency.commons-validator.version>1.3.1</dependency.commons-validator.version>\n    <net.java.dev.jna.version>5.12.0</net.java.dev.jna.version>\n  </properties>\n\n  <!-- VERIFY THESE IMPORTS THAT WERE IN THE BUILD SECTION WHEN THE PLUGIN WAS OSGI. ARE THEY NEEDED?\n  <Import-Package>org.eclipse.swt*;resolution:=optional,\n              org.pentaho.di.ui.xul*;resolution:=optional,\n              org.pentaho.ui.xul*;resolution:=optional,\n              org.pentaho.di.osgi,\n              org.pentaho.di.core.plugins,\n              *\n            </Import-Package>\n  -->\n  <build>\n    <resources>\n      <resource>\n        <directory>src/main/resources</directory>\n        <filtering>false</filtering>\n      </resource>\n      <resource>\n        <directory>src/main/resources-filtered</directory>\n        <filtering>true</filtering>\n      </resource>\n    </resources>\n  </build>\n\n  <dependencies>\n    <dependency>\n      <groupId>net.java.dev.jna</groupId>\n      <artifactId>jna</artifactId>\n      <version>${net.java.dev.jna.version}</version>\n      <scope>provided</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>net.java.dev.jna</groupId>\n      
<artifactId>jna-platform</artifactId>\n      <version>${net.java.dev.jna.version}</version>\n      <scope>provided</scope>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-ui-swt</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>metastore</artifactId>\n      <version>${metastore.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>com.google.guava</groupId>\n      <artifactId>guava</artifactId>\n      <version>${guava.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>commons-validator</groupId>\n      
<artifactId>commons-validator</artifactId>\n      <version>${dependency.commons-validator.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-metaverse-api</artifactId>\n      <version>${pentaho-metaverse.version}</version>\n      <scope>provided</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/spark/core/src/main/java/org/pentaho/di/job/entries/spark/JobEntrySparkSubmit.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.job.entries.spark;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport com.google.common.base.Joiner;\nimport com.sun.jna.Platform;\nimport org.apache.commons.vfs2.FileObject;\nimport org.pentaho.di.cluster.SlaveServer;\nimport org.pentaho.di.core.CheckResultInterface;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.Result;\nimport org.pentaho.di.core.annotations.JobEntry;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleDatabaseException;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.vfs.AliasedFileObject;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.Job;\nimport org.pentaho.di.job.JobEntryListener;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.job.entry.JobEntryBase;\nimport org.pentaho.di.job.entry.JobEntryCopy;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.dictionary.MetaverseAnalyzers;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.w3c.dom.Node;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.io.StreamTokenizer;\nimport java.io.StringReader;\nimport java.util.ArrayList;\nimport 
java.util.LinkedHashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.concurrent.atomic.AtomicBoolean;\n\nimport static org.pentaho.di.job.entry.validator.AndValidator.putValidators;\nimport static org.pentaho.di.job.entry.validator.JobEntryValidatorUtils.andValidator;\nimport static org.pentaho.di.job.entry.validator.JobEntryValidatorUtils.fileExistsValidator;\nimport static org.pentaho.di.job.entry.validator.JobEntryValidatorUtils.notBlankValidator;\n\n/**\n * This job entry submits a JAR to Spark and executes a class. It uses the spark-submit script to submit a command like\n * this: spark-submit --class org.pentaho.spark.SparkExecTest --master yarn-cluster my-spark-job.jar arg1 arg2\n * <p>\n * More information on the options is here: http://spark.apache.org/docs/1.2.0/submitting-applications.html\n */\n\n@JobEntry( image = \"org/pentaho/di/ui/job/entries/spark/img/spark.svg\",\n  id = MetaverseAnalyzers.JobEntrySparkSubmitAnalyzer.ID,\n  name = \"JobEntrySparkSubmit.Title\", description = \"JobEntrySparkSubmit.Description\",\n  categoryDescription = \"i18n:org.pentaho.di.job:JobCategory.Category.BigData\",\n  i18nPackageName = \"org.pentaho.di.job.entries.spark\" )\npublic class JobEntrySparkSubmit extends JobEntryBase implements Cloneable, JobEntryInterface, JobEntryListener {\n  public static final String JOB_TYPE_JAVA_SCALA = \"Java or Scala\";\n  public static final String JOB_TYPE_PYTHON = \"Python\";\n  public static final String HADOOP_CLUSTER_PREFIX = \"hc://\";\n\n  private static Class<?> PKG = JobEntrySparkSubmit.class; // for i18n purposes, needed by Translator2!!\n\n  private String jobType = JOB_TYPE_JAVA_SCALA;\n  private String scriptPath; // the path for the spark-submit utility\n  private String master = \"yarn-cluster\"; // the URL for the Spark master\n  private Map<String, String> libs = new LinkedHashMap<>();\n  // supporting documents options, \"path->environment\"\n  private List<String> configParams = new 
ArrayList<String>(); // configuration options, \"key=value\"\n  private String jar; // the path for the jar containing the Spark code to run\n  private String pyFile; // path to python file for python jobs\n  private String className; // the name of the class to run\n  private String args; // arguments for the Spark code\n  private boolean blockExecution = true; // wait for job to complete\n  private String executorMemory; // memory allocation config param for the executor\n  private String driverMemory; // memory allocation config param for the driver\n\n  protected Process proc; // the process for the spark-submit command\n\n  public JobEntrySparkSubmit( String n ) {\n    super( n, \"\" );\n  }\n\n  public JobEntrySparkSubmit() {\n    this( \"\" );\n  }\n\n  public Object clone() {\n    JobEntrySparkSubmit je = (JobEntrySparkSubmit) super.clone();\n    return je;\n  }\n\n  /**\n   * Converts the state into XML and returns it\n   *\n   * @return The XML for the current state\n   */\n  public String getXML() {\n    StringBuffer retval = new StringBuffer( 200 );\n\n    retval.append( super.getXML() );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"scriptPath\", scriptPath ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"jobType\", jobType ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"master\", master ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"jar\", jar ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"pyFile\", pyFile ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"className\", className ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"args\", args ) );\n    retval.append( \"      \" ).append( XMLHandler.openTag( \"configParams\" ) ).append( Const.CR );\n    for ( String param : configParams ) {\n      retval.append( \"            \" ).append( XMLHandler.addTagValue( \"param\", param ) );\n    }\n    
retval.append( \"      \" ).append( XMLHandler.closeTag( \"configParams\" ) ).append( Const.CR );\n    retval.append( \"      \" ).append( XMLHandler.openTag( \"libs\" ) ).append( Const.CR );\n\n    for ( String key : libs.keySet() ) {\n      retval.append( \"            \" ).append( XMLHandler.addTagValue( \"env\", libs.get( key ) ) );\n      retval.append( \"            \" ).append( XMLHandler.addTagValue( \"path\", key ) );\n    }\n    retval.append( \"      \" ).append( XMLHandler.closeTag( \"libs\" ) ).append( Const.CR );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"driverMemory\", driverMemory ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"executorMemory\", executorMemory ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"blockExecution\", blockExecution ) );\n    return retval.toString();\n  }\n\n  /**\n   * Parses XML and recreates the state\n   */\n  public void loadXML( Node entrynode, List<DatabaseMeta> databases, List<SlaveServer> slaveServers, Repository rep,\n                       IMetaStore metaStore ) throws KettleXMLException {\n    try {\n      super.loadXML( entrynode, databases, slaveServers );\n\n      scriptPath = XMLHandler.getTagValue( entrynode, \"scriptPath\" );\n      master = XMLHandler.getTagValue( entrynode, \"master\" );\n      jobType = XMLHandler.getTagValue( entrynode, \"jobType\" );\n      if ( jobType == null ) {\n        this.jobType = JOB_TYPE_JAVA_SCALA;\n      }\n      jar = XMLHandler.getTagValue( entrynode, \"jar\" );\n      pyFile = XMLHandler.getTagValue( entrynode, \"pyFile\" );\n      className = XMLHandler.getTagValue( entrynode, \"className\" );\n      args = XMLHandler.getTagValue( entrynode, \"args\" );\n      Node configParamsNode = XMLHandler.getSubNode( entrynode, \"configParams\" );\n      List<Node> paramNodes = XMLHandler.getNodes( configParamsNode, \"param\" );\n      for ( Node paramNode : paramNodes ) {\n        configParams.add( 
paramNode.getTextContent() );\n      }\n      Node libsNode = XMLHandler.getSubNode( entrynode, \"libs\" );\n      if ( libsNode != null ) {\n        List<Node> envNodes = XMLHandler.getNodes( libsNode, \"env\" );\n        List<Node> pathNodes = XMLHandler.getNodes( libsNode, \"path\" );\n        for ( int i = 0; i < envNodes.size(); i++ ) {\n          libs.put( pathNodes.get( i ).getTextContent(), envNodes.get( i ).getTextContent() );\n        }\n      }\n      driverMemory = XMLHandler.getTagValue( entrynode, \"driverMemory\" );\n      executorMemory = XMLHandler.getTagValue( entrynode, \"executorMemory\" );\n      blockExecution = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( entrynode, \"blockExecution\" ) );\n    } catch ( KettleXMLException xe ) {\n      throw new KettleXMLException( \"Unable to load job entry of type 'SparkSubmit' from XML node\", xe );\n    }\n  }\n\n  /**\n   * Reads the state from the repository\n   */\n  public void loadRep( Repository rep, IMetaStore metaStore, ObjectId id_jobentry, List<DatabaseMeta> databases,\n                       List<SlaveServer> slaveServers ) throws KettleException {\n    try {\n      scriptPath = rep.getJobEntryAttributeString( id_jobentry, \"scriptPath\" );\n      master = rep.getJobEntryAttributeString( id_jobentry, \"master\" );\n      jobType = rep.getJobEntryAttributeString( id_jobentry, \"jobType\" );\n      if ( jobType == null ) {\n        this.jobType = JOB_TYPE_JAVA_SCALA;\n      }\n      jar = rep.getJobEntryAttributeString( id_jobentry, \"jar\" );\n      pyFile = rep.getJobEntryAttributeString( id_jobentry, \"pyFile\" );\n      className = rep.getJobEntryAttributeString( id_jobentry, \"className\" );\n      args = rep.getJobEntryAttributeString( id_jobentry, \"args\" );\n      for ( int i = 0; i < rep.countNrJobEntryAttributes( id_jobentry, \"param\" ); i++ ) {\n        configParams.add( rep.getJobEntryAttributeString( id_jobentry, i, \"param\" ) );\n      }\n      for ( int i = 0; i < 
rep.countNrJobEntryAttributes( id_jobentry, \"libsEnv\" ); i++ ) {\n        libs.put( rep.getJobEntryAttributeString( id_jobentry, i, \"libsPath\" ),\n          rep.getJobEntryAttributeString( id_jobentry, i, \"libsEnv\" ) );\n      }\n      driverMemory = rep.getJobEntryAttributeString( id_jobentry, \"driverMemory\" );\n      executorMemory = rep.getJobEntryAttributeString( id_jobentry, \"executorMemory\" );\n      blockExecution = rep.getJobEntryAttributeBoolean( id_jobentry, \"blockExecution\" );\n    } catch ( KettleException dbe ) {\n      throw new KettleException( \"Unable to load job entry of type 'SparkSubmit' from the repository for id_jobentry=\"\n        + id_jobentry, dbe );\n    }\n  }\n\n  /**\n   * Saves the current state into the repository\n   */\n  public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_job ) throws KettleException {\n    try {\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"scriptPath\", scriptPath );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"master\", master );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"jobType\", jobType );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"jar\", jar );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"pyFile\", pyFile );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"className\", className );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"args\", args );\n      for ( int i = 0; i < configParams.size(); i++ ) {\n        rep.saveJobEntryAttribute( id_job, getObjectId(), i, \"param\", configParams.get( i ) );\n      }\n\n      int i = 0;\n      for ( String key : libs.keySet() ) {\n        rep.saveJobEntryAttribute( id_job, getObjectId(), i, \"libsEnv\", libs.get( key ) );\n        rep.saveJobEntryAttribute( id_job, getObjectId(), i++, \"libsPath\", key );\n      }\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"driverMemory\", driverMemory );\n      rep.saveJobEntryAttribute( id_job, 
getObjectId(), \"executorMemory\", executorMemory );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"blockExecution\", blockExecution );\n    } catch ( KettleDatabaseException dbe ) {\n      throw new KettleException( \"Unable to save job entry of type 'SparkSubmit' to the repository for id_job=\"\n        + id_job, dbe );\n    }\n  }\n\n  /**\n   * Returns the path for the spark-submit utility\n   *\n   * @return The script path\n   */\n  public String getScriptPath() {\n    return scriptPath;\n  }\n\n  /**\n   * Sets the path for the spark-submit utility\n   *\n   * @param scriptPath path to spark-submit utility\n   */\n  public void setScriptPath( String scriptPath ) {\n    this.scriptPath = scriptPath;\n  }\n\n  /**\n   * Returns the URL for the Spark master node\n   *\n   * @return The URL for the Spark master node\n   */\n  public String getMaster() {\n    return master;\n  }\n\n  /**\n   * Sets the URL for the Spark master node\n   *\n   * @param master URL for the Spark master node\n   */\n  public void setMaster( String master ) {\n    this.master = master;\n  }\n\n  /**\n   * Returns map of configuration params\n   *\n   * @return map of configuration params\n   */\n  public List<String> getConfigParams() {\n    return configParams;\n  }\n\n  /**\n   * Sets configuration params\n   */\n  public void setConfigParams( List<String> configParams ) {\n    this.configParams = configParams;\n  }\n\n  /**\n   * Returns list of library-env pairs.\n   *\n   * @return list of libs.\n   */\n  public Map<String, String> getLibs() {\n    return libs;\n  }\n\n  /**\n   * Sets path-env pairs for libraries\n   */\n  public void setLibs( Map<String, String> docs ) {\n    this.libs = docs;\n  }\n\n  /**\n   * Returns the path for the jar containing the Spark code to execute\n   *\n   * @return The path for the jar\n   */\n  public String getJar() {\n    return jar;\n  }\n\n  /**\n   * Sets the path for the jar containing the Spark code to execute\n   *\n   * 
@param jar path for the jar\n   */\n  public void setJar( String jar ) {\n    this.jar = jar;\n  }\n\n  /**\n   * Returns the name of the class containing the Spark code to execute\n   *\n   * @return The name of the class\n   */\n  public String getClassName() {\n    return className;\n  }\n\n  /**\n   * Sets the name of the class containing the Spark code to execute\n   *\n   * @param className name of the class\n   */\n  public void setClassName( String className ) {\n    this.className = className;\n  }\n\n  /**\n   * Returns the arguments for the Spark class. This is a space-separated list of strings, e.g. \"http.log 1000\"\n   *\n   * @return The arguments\n   */\n  public String getArgs() {\n    return args;\n  }\n\n  /**\n   * Sets the arguments for the Spark class. This is a space-separated list of strings, e.g. \"http.log 1000\"\n   *\n   * @param args arguments\n   */\n  public void setArgs( String args ) {\n    this.args = args;\n  }\n\n  /**\n   * Returns executor memory config param's value\n   *\n   * @return executor memory config param\n   */\n  public String getExecutorMemory() {\n    return executorMemory;\n  }\n\n  /**\n   * Sets executor memory config param's value\n   *\n   * @param executorMemory amount of memory executor process is allowed to consume\n   */\n  public void setExecutorMemory( String executorMemory ) {\n    this.executorMemory = executorMemory;\n  }\n\n  /**\n   * Returns driver memory config param's value\n   *\n   * @return driver memory config param\n   */\n  public String getDriverMemory() {\n    return driverMemory;\n  }\n\n  /**\n   * Sets driver memory config param's value\n   *\n   * @param driverMemory amount of memory driver process is allowed to consume\n   */\n  public void setDriverMemory( String driverMemory ) {\n    this.driverMemory = driverMemory;\n  }\n\n  /**\n   * Returns if the job entry will wait till job execution completes\n   *\n   * @return blocking mode\n   */\n  public boolean isBlockExecution() {\n  
  return blockExecution;\n  }\n\n  /**\n   * Sets if the job entry will wait for job execution to complete\n   *\n   * @param blockExecution blocking mode\n   */\n  public void setBlockExecution( boolean blockExecution ) {\n    this.blockExecution = blockExecution;\n  }\n\n  /**\n   * Returns type of job, valid types are {@link #JOB_TYPE_JAVA_SCALA} and {@link #JOB_TYPE_PYTHON}.\n   *\n   * @return spark job's type\n   */\n  public String getJobType() {\n    return jobType;\n  }\n\n  /**\n   * Sets spark job type to be executed, valid types are {@link #JOB_TYPE_JAVA_SCALA} and {@link #JOB_TYPE_PYTHON}..\n   *\n   * @param jobType to be set\n   */\n  public void setJobType( String jobType ) {\n    this.jobType = jobType;\n  }\n\n  /**\n   * Returns path to job's python file. Valid for jobs of {@link #JOB_TYPE_PYTHON} type.\n   *\n   * @return path to python script\n   */\n  public String getPyFile() {\n    return pyFile;\n  }\n\n  /**\n   * Sets path to python script to be executed. Valid for jobs of {@link #JOB_TYPE_PYTHON} type.\n   *\n   * @param pyFile path to set\n   */\n  public void setPyFile( String pyFile ) {\n    this.pyFile = pyFile;\n  }\n\n  /**\n   * Returns the spark-submit command as a list of strings. e.g. 
<path to spark-submit> --class <main-class> --master\n   * <master-url> --deploy-mode <deploy-mode> --conf <key>=<value> <application-jar> \\ [application-arguments]\n   *\n   * @return The spark-submit command\n   */\n  public List<String> getCmds() throws IOException {\n    List<String> cmds = new ArrayList<String>();\n\n    cmds.add( environmentSubstitute( scriptPath ) );\n    cmds.add( \"--master\" );\n    cmds.add( environmentSubstitute( master ) );\n\n    for ( String confParam : configParams ) {\n      cmds.add( \"--conf\" );\n      cmds.add( environmentSubstitute( confParam ) );\n    }\n\n    if ( !Const.isEmpty( driverMemory ) ) {\n      cmds.add( \"--driver-memory\" );\n      cmds.add( environmentSubstitute( driverMemory ) );\n    }\n\n    if ( !Const.isEmpty( executorMemory ) ) {\n      cmds.add( \"--executor-memory\" );\n      cmds.add( environmentSubstitute( executorMemory ) );\n    }\n\n    switch ( jobType ) {\n      case JOB_TYPE_JAVA_SCALA: {\n        if ( !Const.isEmpty( className ) ) {\n          cmds.add( \"--class\" );\n          cmds.add( environmentSubstitute( className ) );\n        }\n\n        if ( !libs.isEmpty() ) {\n          cmds.add( \"--jars\" );\n          cmds.add( environmentSubstitute( Joiner.on( ',' ).join( libs.keySet() ) ) );\n        }\n\n        cmds.add( resolvePath( environmentSubstitute( jar ) ) );\n\n        break;\n      }\n      case JOB_TYPE_PYTHON: {\n        if ( !libs.isEmpty() ) {\n          cmds.add( \"--py-files\" );\n          cmds.add( environmentSubstitute( Joiner.on( ',' ).join( libs.keySet() ) ) );\n        }\n\n        cmds.add( environmentSubstitute( pyFile ) );\n\n        break;\n      }\n    }\n\n    if ( !Const.isEmpty( args ) ) {\n      List<String> argArray = parseCommandLine( args );\n      for ( String anArg : argArray ) {\n        if ( !Const.isEmpty( anArg ) ) {\n          if ( anArg.startsWith( HADOOP_CLUSTER_PREFIX ) ) {\n            anArg = resolvePath( environmentSubstitute( anArg ) );\n      
    }\n          cmds.add( anArg );\n        }\n      }\n    }\n\n    return cmds;\n  }\n\n  @VisibleForTesting\n  protected boolean validate() {\n    boolean valid = true;\n    if ( Const.isEmpty( scriptPath ) || !new File( environmentSubstitute( scriptPath ) ).exists() ) {\n      logError( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Error.SparkSubmitPathInvalid\" ) );\n      valid = false;\n    }\n\n    if ( Const.isEmpty( master ) ) {\n      logError( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Error.MasterURLEmpty\" ) );\n      valid = false;\n    }\n\n    if ( JOB_TYPE_JAVA_SCALA.equals( getJobType() ) ) {\n      if ( Const.isEmpty( jar ) ) {\n        logError( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Error.JarPathEmpty\" ) );\n        valid = false;\n      }\n    } else {\n      if ( Const.isEmpty( pyFile ) ) {\n        logError( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Error.PyFilePathEmpty\" ) );\n        valid = false;\n      }\n    }\n\n    return valid;\n  }\n\n\n  /**\n   * Executes the spark-submit command and returns a Result\n   *\n   * @return The Result of the operation\n   */\n  public Result execute( Result result, int nr ) {\n    if ( !validate() ) {\n      result.setResult( false );\n      return result;\n    }\n\n    try {\n      List<String> cmds = getCmds();\n\n      logBasic( \"Submitting Spark Script\" );\n\n      if ( log.isDetailed() ) {\n        logDetailed( cmds.toString() );\n      }\n\n      // Build the environment variable list...\n      ProcessBuilder procBuilder = new ProcessBuilder( cmds );\n      Map<String, String> env = procBuilder.environment();\n      String[] variables = listVariables();\n      for ( String variable : variables ) {\n        env.put( variable, getVariable( variable ) );\n      }\n      proc = procBuilder.start();\n\n      String[] jobSubmittedPatterns = new String[] { \"tracking URL:\" };\n\n      final AtomicBoolean jobSubmitted = new AtomicBoolean( false );\n\n      // 
any error message?\n      PatternMatchingStreamLogger errorLogger =\n        new PatternMatchingStreamLogger( log, proc.getErrorStream(), jobSubmittedPatterns, jobSubmitted );\n\n      // any output?\n      PatternMatchingStreamLogger outputLogger =\n        new PatternMatchingStreamLogger( log, proc.getInputStream(), jobSubmittedPatterns, jobSubmitted );\n\n      if ( !blockExecution ) {\n        PatternMatchingStreamLogger.PatternMatchedListener cb =\n          new PatternMatchingStreamLogger.PatternMatchedListener() {\n            @Override\n            public void onPatternFound( String pattern ) {\n              log.logDebug( \"Found match in output, considering job submitted, stopping spark-submit\" );\n              jobSubmitted.set( true );\n              proc.destroy();\n            }\n          };\n        errorLogger.addPatternMatchedListener( cb );\n        outputLogger.addPatternMatchedListener( cb );\n      }\n\n      // kick them off\n      Thread errorLoggerThread = new Thread( errorLogger );\n      errorLoggerThread.start();\n      Thread outputLoggerThread = new Thread( outputLogger );\n      outputLoggerThread.start();\n\n      // Stop on job stop\n      final AtomicBoolean processFinished = new AtomicBoolean( false );\n      new Thread( new Runnable() {\n        @Override\n        public void run() {\n          while ( !getParentJob().isStopped() && !processFinished.get() ) {\n            try {\n              Thread.sleep( 5000 );\n            } catch ( InterruptedException e ) {\n              e.printStackTrace();\n            }\n          }\n          proc.destroy();\n        }\n      } ).start();\n\n      proc.waitFor();\n\n      processFinished.set( true );\n\n      prepareProcessThreadsToStop( proc, errorLoggerThread, outputLoggerThread );\n\n      if ( log.isDetailed() ) {\n        logDetailed( \"Spark submit finished\" );\n      }\n\n      // What's the exit status?\n      int exitCode;\n      if ( blockExecution ) {\n        exitCode = 
proc.exitValue();\n      } else {\n        exitCode = jobSubmitted.get() ? 0 : proc.exitValue();\n      }\n\n      result.setExitStatus( exitCode );\n      if ( exitCode != 0 ) {\n        if ( log.isDetailed() ) {\n          logDetailed( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.ExitStatus\", result.getExitStatus() ) );\n        }\n\n        result.setNrErrors( 1 );\n      }\n\n      result.setResult( exitCode == 0 );\n    } catch ( Exception e ) {\n      result.setNrErrors( 1 );\n      logError( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Error.SubmittingScript\", e.getMessage() ) );\n      logError( Const.getStackTracker( e ) );\n      result.setResult( false );\n    }\n\n    return result;\n  }\n\n  private void waitForThreadsFinishToRead( Thread errorLoggerThread, Thread outputLoggerThread )\n    throws InterruptedException {\n    // wait until loggers read all data from stdout and stderr\n    errorLoggerThread.join();\n    outputLoggerThread.join();\n  }\n\n  private void prepareProcessThreadsToStop( Process proc, Thread errorLoggerThread, Thread outputLoggerThread )\n    throws Exception {\n    if ( blockExecution ) {\n      waitForThreadsFinishToRead( errorLoggerThread, outputLoggerThread );\n    } else {\n      killChildProcesses();\n    }\n    // close the streams\n    // otherwise you get \"Too many open files, java.io.IOException\" after a lot of iterations\n    proc.getErrorStream().close();\n    proc.getOutputStream().close();\n  }\n\n  @VisibleForTesting\n  void killChildProcesses() {\n    if ( Platform.isWindows() ) {\n      try {\n        WinProcess process = new WinProcess( WinProcess.getPID( proc ) );\n        process.killChildProcesses();\n      } catch ( IOException e ) {\n        if ( log.isDetailed() ) {\n          logDetailed(\n            ( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Error.KillWindowsChildProcess\", e.getMessage() ) ) );\n        }\n      }\n    }\n  }\n\n  public boolean evaluates() {\n    return 
true;\n  }\n\n  /**\n   * Checks that the minimum options have been provided.\n   */\n  @Override\n  public void check( List<CheckResultInterface> remarks, JobMeta jobMeta, VariableSpace space, Repository repository,\n                     IMetaStore metaStore ) {\n    andValidator().validate( jobMeta.getBowl(), this, \"scriptPath\", remarks, putValidators( notBlankValidator() ) );\n    andValidator().validate( jobMeta.getBowl(), this, \"scriptPath\", remarks, putValidators( fileExistsValidator() ) );\n    andValidator().validate( jobMeta.getBowl(), this, \"master\", remarks, putValidators( notBlankValidator() ) );\n    if ( JOB_TYPE_JAVA_SCALA.equals( getJobType() ) ) {\n      andValidator().validate( jobMeta.getBowl(), this, \"jar\", remarks, putValidators( notBlankValidator() ) );\n    } else {\n      andValidator().validate( jobMeta.getBowl(), this, \"pyFile\", remarks, putValidators( notBlankValidator() ) );\n    }\n  }\n\n  /**\n   * Parse a string into arguments as if it were provided on the command line.\n   *\n   * @param commandLineString A command line string.\n   * @return List of parsed arguments\n   * @throws IOException when the command line could not be parsed\n   */\n  public List<String> parseCommandLine( String commandLineString ) throws IOException {\n    List<String> args = new ArrayList<String>();\n    StringReader reader = new StringReader( commandLineString );\n    try {\n      StreamTokenizer tokenizer = new StreamTokenizer( reader );\n      // Treat a dash as an ordinary character so it gets included in the token\n      tokenizer.ordinaryChar( '-' );\n      tokenizer.ordinaryChar( '.' 
);\n      tokenizer.ordinaryChars( '0', '9' );\n      // Treat all characters as word characters so nothing is parsed out\n      tokenizer.wordChars( '\\u0000', '\\uFFFF' );\n\n      // Re-add whitespace characters\n      tokenizer.whitespaceChars( 0, ' ' );\n\n      // Use \" and ' as quote characters\n      tokenizer.quoteChar( '\"' );\n      tokenizer.quoteChar( '\\'' );\n\n      // Add all non-null string values tokenized from the string to the argument list\n      while ( tokenizer.nextToken() != StreamTokenizer.TT_EOF ) {\n        if ( tokenizer.sval != null ) {\n          String s = tokenizer.sval;\n          s = environmentSubstitute( s );\n          args.add( s );\n        }\n      }\n    } finally {\n      reader.close();\n    }\n\n    return args;\n  }\n\n  @Override\n  public void afterExecution( Job arg0, JobEntryCopy arg1, JobEntryInterface arg2, Result arg3 ) {\n    proc.destroy();\n  }\n\n  @Override\n  public void beforeExecution( Job arg0, JobEntryCopy arg1, JobEntryInterface arg2 ) {\n  }\n\n  private String resolvePath( String path ) {\n    if ( path != null && !path.isEmpty() ) {\n      try {\n        FileObject fileObject = KettleVFS.getInstance( parentJobMeta.getBowl() ).getFileObject( path );\n        if ( AliasedFileObject.isAliasedFile( fileObject ) ) {\n          return  ( (AliasedFileObject) fileObject ).getAELSafeURIString();\n        }\n      } catch ( KettleFileException e ) {\n        throw new RuntimeException( e );\n      }\n    }\n    return path;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/spark/core/src/main/java/org/pentaho/di/job/entries/spark/JobEntrySparkSubmitAnalyzer.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.job.entries.spark;\n\nimport org.apache.commons.lang.StringEscapeUtils;\nimport org.apache.commons.lang.StringUtils;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.dictionary.MetaverseAnalyzers;\nimport org.pentaho.metaverse.api.IMetaverseNode;\nimport org.pentaho.metaverse.api.MetaverseAnalyzerException;\nimport org.pentaho.metaverse.api.analyzer.kettle.jobentry.JobEntryAnalyzer;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport java.util.HashSet;\nimport java.util.Set;\n\n/**\n * A  data lineage analyzer for the \"Spark Submit\" job entry.\n */\npublic class JobEntrySparkSubmitAnalyzer extends JobEntryAnalyzer<JobEntrySparkSubmit> {\n\n  private static final String CLASS_NAME = \"className\";\n  private static final String PY_FILE = \"pyFile\";\n  private static final String ARGUMENTS = \"arguments\";\n  private static final String EXEC_MEMORY = \"executorMemory\";\n  private static final String DRIVER_MEMORY = \"driverMemory\";\n  private static final String MASTER_URL = \"masterUrl\";\n\n  private Logger log = LogManager.getLogger( JobEntrySparkSubmitAnalyzer.class );\n\n  @Override\n  public Set<Class<? extends JobEntryInterface>> getSupportedEntries() {\n    Set<Class<? extends JobEntryInterface>> supportedEntries = new HashSet<Class<? 
extends JobEntryInterface>>();\n    supportedEntries.add( JobEntrySparkSubmit.class );\n    return supportedEntries;\n  }\n\n  @Override\n  protected void customAnalyze( JobEntrySparkSubmit entry, IMetaverseNode rootNode ) throws MetaverseAnalyzerException {\n    // -- Common properties\n    rootNode.setProperty( ARGUMENTS, entry.environmentSubstitute( entry.getArgs() ) );\n    rootNode.setProperty( EXEC_MEMORY, entry.environmentSubstitute( entry.getExecutorMemory() ) );\n    rootNode.setProperty( DRIVER_MEMORY, entry.environmentSubstitute( entry.getDriverMemory() ) );\n    rootNode.setProperty( MASTER_URL, entry.environmentSubstitute( entry.getMaster() ) );\n    if ( JobEntrySparkSubmit.JOB_TYPE_JAVA_SCALA.equals( entry.getJobType() ) ) {\n      // --- Java / Scala properties\n      rootNode.setProperty( CLASS_NAME, entry.environmentSubstitute( entry.getClassName() ) );\n      if ( StringUtils.isNotBlank( entry.getJar() ) ) {\n        rootNode.setProperty( MetaverseAnalyzers.JobEntrySparkSubmitAnalyzer.APPLICATION_JAR,\n          normalizePath( entry.environmentSubstitute( entry.getJar() ) ) );\n      }\n    } else if ( JobEntrySparkSubmit.JOB_TYPE_PYTHON.equals( entry.getJobType() ) ) {\n      // Python properties\n      if ( StringUtils.isNotBlank( entry.getPyFile() ) ) {\n        rootNode.setProperty( MetaverseAnalyzers.JobEntrySparkSubmitAnalyzer.APPLICATION_JAR,\n          normalizePath( entry.environmentSubstitute( entry.getPyFile() ) ) );\n      }\n    }\n  }\n\n  private String normalizePath( final String path ) {\n    if ( StringUtils.isNotBlank( path ) ) {\n      return path.replaceAll( \"/\", StringEscapeUtils.escapeJava( \"/\" ) )\n        .replaceAll(  \"\\\\\\\\\", StringEscapeUtils.escapeJava( \"/\" ) );\n    }\n    return path;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/spark/core/src/main/java/org/pentaho/di/job/entries/spark/PatternMatchingStreamLogger.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.job.entries.spark;\n\nimport java.io.BufferedReader;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.InputStreamReader;\nimport java.util.concurrent.atomic.AtomicBoolean;\n\nimport org.pentaho.di.core.logging.LogChannelInterface;\n\n/**\n * Reads an input stream line by line, logging each line while searching its content for the given patterns and\n * notifying the registered listener whenever a pattern is found.\n *\n * @author Pavel Sakun\n */\npublic class PatternMatchingStreamLogger implements Runnable {\n  private LogChannelInterface log;\n  private InputStream is;\n  private String[] patterns;\n  private PatternMatchedListener listener;\n  private AtomicBoolean stop;\n\n  public PatternMatchingStreamLogger( LogChannelInterface log, InputStream is, String[] patterns, AtomicBoolean stop ) {\n    this.log = log;\n    this.is = is;\n    this.patterns = patterns;\n    this.stop = stop;\n  }\n\n  @Override\n  public void run() {\n    BufferedReader br = new BufferedReader( new InputStreamReader( is ) );\n    String line;\n\n    try {\n      while ( !stop.get() && ( line = br.readLine() ) != null ) {\n        log.logBasic( line );\n        for ( String pattern : patterns ) {\n          if ( line.contains( pattern ) && listener != null ) {\n            listener.onPatternFound( pattern );\n          }\n        }\n      }\n    } catch ( IOException e ) {\n      log.logError( \"\", e );\n    }\n  }\n\n  public void addPatternMatchedListener( PatternMatchedListener pml ) {\n    listener = pml;\n  }\n\n  public interface PatternMatchedListener {\n    void onPatternFound( String pattern );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/spark/core/src/main/java/org/pentaho/di/job/entries/spark/WinProcess.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.job.entries.spark;\n\nimport com.sun.jna.Native;\nimport com.sun.jna.Platform;\nimport com.sun.jna.Pointer;\nimport com.sun.jna.platform.win32.Kernel32;\nimport com.sun.jna.platform.win32.Kernel32Util;\nimport com.sun.jna.platform.win32.Tlhelp32;\nimport com.sun.jna.platform.win32.WinDef;\nimport com.sun.jna.platform.win32.WinNT;\nimport com.sun.jna.win32.W32APIOptions;\n\nimport java.io.IOException;\nimport java.lang.reflect.Field;\nimport java.util.ArrayList;\nimport java.util.List;\n\npublic class WinProcess {\n\n  private int pid;\n  private WinNT.HANDLE handle;\n\n  private static final int PROCESS_QUERY_INFORMATION = 0x0400;\n  private static final int PROCESS_SUSPEND_RESUME = 0x0800;\n  private static final int PROCESS_TERMINATE = 0x0001;\n  private static final int PROCESS_SYNCHRONIZE = 0x00100000;\n\n  WinProcess( int pid ) throws IOException {\n    handle = Kernel32.INSTANCE\n      .OpenProcess( PROCESS_QUERY_INFORMATION | PROCESS_SUSPEND_RESUME | PROCESS_TERMINATE | PROCESS_SYNCHRONIZE, false,\n        pid );\n    if ( handle == null ) {\n      throw new IOException(\n        \"OpenProcess failed: \" + Kernel32Util.formatMessageFromLastErrorCode( Kernel32.INSTANCE.GetLastError() ) );\n    }\n    this.pid = pid;\n  }\n\n  public void terminate() {\n    Kernel32.INSTANCE.TerminateProcess( handle, 0 );\n  }\n\n  private List<WinProcess> getChildProcesses() throws IOException {\n\n    int childPID;\n    List<WinProcess> processList = new ArrayList<>();\n    List<Integer> pidList = new ArrayList<>();\n    pidList.add( pid 
);\n    int parentPID;\n\n    Kernel32 kernel32 = Native.loadLibrary( Kernel32.class, W32APIOptions.UNICODE_OPTIONS );\n    Tlhelp32.PROCESSENTRY32.ByReference processEntry = new Tlhelp32.PROCESSENTRY32.ByReference();\n    WinNT.HANDLE snapshot = kernel32.CreateToolhelp32Snapshot( Tlhelp32.TH32CS_SNAPPROCESS, new WinDef.DWORD( 0 ) );\n    try {\n      while ( kernel32.Process32Next( snapshot, processEntry ) ) {\n        parentPID = processEntry.th32ParentProcessID.intValue();\n        if ( pidList.contains( parentPID ) ) {\n          childPID = processEntry.th32ProcessID.intValue();\n          pidList.add( childPID );\n          processList.add( new WinProcess( childPID ) );\n        }\n      }\n    } finally {\n      kernel32.CloseHandle( snapshot );\n    }\n    return processList;\n  }\n\n  public String killChildProcesses() throws IOException {\n    StringBuilder builder = new StringBuilder();\n    if ( Platform.isWindows() ) {\n      List<WinProcess> children = getChildProcesses();\n      if ( !children.isEmpty() ) {\n        for ( WinProcess child : children ) {\n          builder.append( child.getWinProcessPID() + \" \" );\n          child.terminate();\n        }\n      }\n    }\n    return builder.toString().trim();\n  }\n\n  public static int getPID( Process proc ) {\n    int pid = -1;\n    try {\n      if ( proc.getClass().getName().equals( \"java.lang.Win32Process\" ) || proc.getClass().getName()\n        .equals( \"java.lang.ProcessImpl\" ) ) {\n        Field f = proc.getClass().getDeclaredField( \"handle\" );\n        f.setAccessible( true );\n        long handl = f.getLong( proc );\n\n        Kernel32 kernel = Kernel32.INSTANCE;\n        WinNT.HANDLE handle = new WinNT.HANDLE();\n        handle.setPointer( Pointer.createConstant( handl ) );\n        pid = kernel.GetProcessId( handle );\n      }\n    } catch ( Exception e ) {\n      e.printStackTrace();\n    }\n    return pid;\n  }\n\n  public int getWinProcessPID() {\n    return pid;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/spark/core/src/main/java/org/pentaho/di/ui/job/entries/spark/JobEntrySparkSubmitDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.job.entries.spark;\n\nimport java.util.ArrayList;\nimport java.util.LinkedHashMap;\nimport java.util.List;\nimport java.util.Map;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.CTabFolder;\nimport org.eclipse.swt.custom.CTabItem;\nimport org.eclipse.swt.events.ModifyEvent;\nimport org.eclipse.swt.events.ModifyListener;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.events.ShellAdapter;\nimport org.eclipse.swt.events.ShellEvent;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Combo;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Control;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Event;\nimport org.eclipse.swt.widgets.FileDialog;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.Table;\nimport org.eclipse.swt.widgets.TableColumn;\nimport org.eclipse.swt.widgets.TableItem;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.Props;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.di.core.exception.KettleException;\nimport 
org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.job.entries.spark.JobEntrySparkSubmit;\nimport org.pentaho.di.job.entry.JobEntryDialogInterface;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.ui.core.ConstUI;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.gui.GUIResource;\nimport org.pentaho.di.ui.core.gui.WindowProperty;\nimport org.pentaho.di.ui.core.widget.ColumnInfo;\nimport org.pentaho.di.ui.core.widget.ComboVar;\nimport org.pentaho.di.ui.core.widget.ControlSpaceKeyAdapter;\nimport org.pentaho.di.ui.core.widget.TableView;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.core.widget.TextVarButtonRenderCallback;\nimport org.pentaho.di.ui.job.dialog.JobDialog;\nimport org.pentaho.di.ui.job.entry.JobEntryDialog;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.trans.step.BaseStepDialog;\nimport org.pentaho.dictionary.MetaverseAnalyzers;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\n/**\n * Dialog that allows you to enter the settings for a Spark submit job entry.\n *\n * @author Alexander Buloichik\n * @author jdixon\n * @since Dec-4-2014\n */\n@PluginDialog( id = MetaverseAnalyzers.JobEntrySparkSubmitAnalyzer.ID,\n  image = \"org/pentaho/di/ui/job/entries/spark/img/spark.svg\", pluginType = PluginDialog.PluginType.JOBENTRY,\n        documentationUrl = \"pdi-job-entries-reference-overview/spark-submit\" )\npublic class JobEntrySparkSubmitDialog extends JobEntryDialog implements JobEntryDialogInterface {\n  private static Class<?> PKG = JobEntrySparkSubmit.class; // for i18n purposes, needed by Translator2!!\n  private static final int SHELL_MINIMUM_WIDTH = 400;\n\n  private static final String[] FILEFORMATS =\n      new String[] { BaseMessages.getString( PKG, 
\"JobEntrySparkSubmit.Fileformat.All\" ) };\n\n  private static final String[] MASTER_URLS = new String[] { \"yarn-cluster\", \"yarn-client\" };\n\n  public static final String LOCAL_ENVIRONMENT = \"Local\";\n  public static final String STATIC_ENVIRONMENT = \"<Static>\";\n\n  protected static final String[] FILETYPES =\n      new String[] { BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Fileformat.All\" ) };\n\n  protected Shell shell;\n  private Text txtEntryName;\n  private TextVar txtSparkSubmitUtility;\n  private Button btnSparkSubmitUtility;\n  private Button btnOK;\n  private Button btnCancel;\n  private TextVar txtClass;\n  private TextVar txtFilesApplicationJar;\n  private Text txtArguments;\n  private Button btnFilesApplicationJar;\n  private TextVar txtFilesPyFile;\n  private Button btnFilesPyFile;\n  private TableView tblFilesSupportingDocs;\n  private TableView tblUtilityParameters;\n  private Combo cmbType;\n  private ComboVar cmbMasterURL;\n  private Composite filesHeader;\n  private Composite tabFilesComposite;\n  private TextVar txtExecutorMemory;\n  private TextVar txtDriverMemory;\n  private Button chkEnableBlocking;\n\n  private JobEntrySparkSubmit jobEntry;\n  private boolean backupChanged;\n\n  public static void main( String[] a ) {\n    Display display = new Display();\n    PropsUI.init( display, Props.TYPE_PROPERTIES_SPOON );\n    Shell shell = new Shell( display );\n\n    JobEntrySparkSubmitDialog sh =\n        new JobEntrySparkSubmitDialog( shell, new JobEntrySparkSubmit( \"Spark submit job entry\" ), null,\n            new JobMeta() );\n\n    sh.open();\n  }\n\n  public JobEntrySparkSubmitDialog( Shell parent, JobEntryInterface jobEntryInt, Repository rep, JobMeta jobMeta ) {\n    super( parent, jobEntryInt, rep, jobMeta );\n    jobEntry = (JobEntrySparkSubmit) jobEntryInt;\n  }\n\n  @Override\n  public JobEntryInterface open() {\n    Shell parent = getParent();\n    Display display = parent.getDisplay();\n\n    shell = new Shell( 
parent, props.getJobsDialogStyle() );\n\n    props.setLook( shell );\n    JobDialog.setShellImage( shell, jobEntry );\n\n    backupChanged = jobEntry.hasChanged();\n\n    createContents();\n\n    getData();\n    BaseStepDialog.setSize( shell );\n\n    shell.pack();\n    shell.setMinimumSize( SHELL_MINIMUM_WIDTH, shell.getSize().y );\n    shell.setSize( 530, 652 );\n    shell.open();\n    props.setDialogSize( shell, \"JobEntrySparkSubmitDialogSize\" );\n    while ( !shell.isDisposed() ) {\n      if ( !display.readAndDispatch() ) {\n        display.sleep();\n      }\n    }\n    return jobEntry;\n  }\n\n  /**\n   * Create contents of the shell.\n   */\n  private void createContents() {\n    shell.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Title\" ) );\n\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = 15;\n    formLayout.marginHeight = 15;\n    shell.setLayout( formLayout );\n\n    // shell = new Shell( getParent(), SWT.DIALOG_TRIM | SWT.RESIZE | SWT.TITLE | SWT.APPLICATION_MODAL );\n    // shell.setSize( 621, 557 );\n\n    Label lblIcon = new Label( shell, SWT.NONE );\n    props.setLook( lblIcon );\n    lblIcon.setImage( GUIResource.getInstance().getImage( \"org/pentaho/di/ui/job/entries/spark/img/spark.svg\",\n        getClass().getClassLoader(), ConstUI.LARGE_ICON_SIZE, ConstUI.LARGE_ICON_SIZE ) );\n    lblIcon.setLayoutData( fd( null, fa( 0, 0 ), fa( 100, 0 ) ) );\n\n    Label lblEntryName = new Label( shell, SWT.NONE );\n    props.setLook( lblEntryName );\n    lblEntryName.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Name.Label\" ) );\n    lblEntryName.setLayoutData( fd() );\n\n    txtEntryName = new Text( shell, SWT.BORDER );\n    props.setLook( txtEntryName );\n    txtEntryName.setLayoutData( fdwidth( 300, null, fa( lblEntryName, 5 ) ) );\n    txtEntryName.addModifyListener( lsMod );\n\n    Label sep = new Label( shell, SWT.SEPARATOR | SWT.HORIZONTAL );\n    props.setLook( sep );\n    
sep.setLayoutData( fd( fa( 0, 0 ), fa( txtEntryName, 15 ), fa( 100, 0 ) ) );\n\n    Label lblSparkSubmitUtility = new Label( shell, SWT.NONE );\n    props.setLook( lblSparkSubmitUtility );\n    lblSparkSubmitUtility.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.ScriptPath.Label\" ) );\n    lblSparkSubmitUtility.setLayoutData( fd( null, fa( sep, 15 ) ) );\n\n    txtSparkSubmitUtility = new TextVar( jobMeta, shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( txtSparkSubmitUtility );\n    txtSparkSubmitUtility.setLayoutData( fdwidth( 300, null, fa( lblSparkSubmitUtility, 5 ) ) );\n\n    btnSparkSubmitUtility = new Button( shell, SWT.PUSH );\n    props.setLook( btnSparkSubmitUtility );\n    btnSparkSubmitUtility.setText( BaseMessages.getString( PKG, \"System.Button.Browse\" ) );\n    btnSparkSubmitUtility.setLayoutData( fd( fa( txtSparkSubmitUtility, 10 ), fa( txtSparkSubmitUtility, 0, SWT.TOP ),\n        null ) );\n    btnSparkSubmitUtility.addSelectionListener( btnSparkSubmitUtilityListener );\n\n    Label lblMasterURL = new Label( shell, SWT.NONE );\n    props.setLook( lblMasterURL );\n    lblMasterURL.setLayoutData( fd( null, fa( txtSparkSubmitUtility, 10 ) ) );\n    lblMasterURL.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.SparkMaster.Label\" ) );\n\n    cmbMasterURL = new ComboVar( jobMeta, shell, SWT.BORDER );\n    props.setLook( cmbMasterURL );\n    cmbMasterURL.setLayoutData( fd( fa( 0, 0 ), fa( lblMasterURL, 5 ), fa( txtSparkSubmitUtility, 0, SWT.RIGHT ) ) );\n\n    Label lblType = new Label( shell, SWT.NONE );\n    props.setLook( lblType );\n    lblType.setLayoutData( fd( fa( cmbMasterURL, 10 ), fa( txtSparkSubmitUtility, 10 ) ) );\n    lblType.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Type.Label\" ) );\n\n    cmbType = new Combo( shell, SWT.NONE | SWT.READ_ONLY );\n    props.setLook( cmbType );\n    cmbType.setLayoutData( fdwidth( 300, fa( cmbMasterURL, 10 ), fa( lblType, 5 ), fa( 100, 0 ) ) );\n  
  cmbType.addSelectionListener( typeSelectionListener );\n\n    btnCancel = new Button( shell, SWT.PUSH );\n    props.setLook( btnCancel );\n    btnCancel.setText( BaseMessages.getString( PKG, \"System.Button.Cancel\" ) );\n    btnCancel.setLayoutData( fd( null, null, fa( 100, 0 ), fa( 100, 0 ) ) );\n\n    btnOK = new Button( shell, SWT.PUSH );\n    props.setLook( btnOK );\n    btnOK.setText( BaseMessages.getString( PKG, \"System.Button.OK\" ) );\n    btnOK.setLayoutData( fd( null, null, fa( btnCancel, -5 ), fa( 100, 0 ) ) );\n\n    Label sep2 = new Label( shell, SWT.SEPARATOR | SWT.HORIZONTAL );\n    props.setLook( sep2 );\n    sep2.setLayoutData( fd( fa( 0, 0 ), null, fa( 100, 0 ), fa( btnOK, -15 ) ) );\n\n    chkEnableBlocking = new Button( shell, SWT.CHECK );\n    props.setLook( chkEnableBlocking );\n    chkEnableBlocking.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.BlockExecution.Label\" ) );\n    chkEnableBlocking.setLayoutData( fd( fa( 0, 0 ), null, null, fa( sep2, -15 ) ) );\n\n    CTabFolder tabs = new CTabFolder( shell, SWT.BORDER );\n    props.setLook( tabs, Props.WIDGET_STYLE_TAB );\n    props.setLook( tabs );\n    tabs.setLayoutData( fd( fa( 0, 0 ), fa( cmbMasterURL, 15 ), fa( 100, 0 ), fa( chkEnableBlocking, -15 ) ) );\n\n    CTabItem tabFiles = new CTabItem( tabs, SWT.NONE );\n    tabFiles.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Files.TabLabel\" ) );\n    tabFilesComposite = new Composite( tabs, SWT.NONE );\n    props.setLook( tabFilesComposite );\n    tabFiles.setControl( tabFilesComposite );\n    addOnFilesTab( tabFilesComposite );\n\n    CTabItem tabArguments = new CTabItem( tabs, SWT.NONE );\n    tabArguments.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Arguments.TabLabel\" ) );\n    Composite tabArgumentsComposite = new Composite( tabs, SWT.NONE );\n    props.setLook( tabArgumentsComposite );\n    tabArguments.setControl( tabArgumentsComposite );\n    addOnArgumentsTab( tabArgumentsComposite 
);\n\n    CTabItem tabOptions = new CTabItem( tabs, SWT.NONE );\n    tabOptions.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Options.TabLabel\" ) );\n    Composite tabOptionsComposite = new Composite( tabs, SWT.NONE );\n    props.setLook( tabOptionsComposite );\n    tabOptions.setControl( tabOptionsComposite );\n    addOnOptionsTab( tabOptionsComposite );\n\n    tabs.setSelection( tabFiles );\n\n    typeSelectionListener.widgetSelected( null );\n\n    btnOK.addListener( SWT.Selection, new Listener() {\n      public void handleEvent( Event e ) {\n        ok();\n      }\n    } );\n    btnCancel.addListener( SWT.Selection, new Listener() {\n      public void handleEvent( Event e ) {\n        cancel();\n      }\n    } );\n\n    // Detect [X] or ALT-F4 or something that kills this window...\n    shell.addShellListener( new ShellAdapter() {\n      public void shellClosed( ShellEvent e ) {\n        cancel();\n      }\n    } );\n  }\n\n  private void addOnArgumentsTab( Composite tab ) {\n    tab.setLayout( new FormLayout() );\n\n    Label lblArguments = new Label( tab, SWT.NONE );\n    props.setLook( lblArguments );\n    lblArguments.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Args.Label\" ) );\n    lblArguments.setLayoutData( fd( fa( 0, 15 ), fa( 0, 15 ) ) );\n\n    txtArguments = new Text( tab, SWT.BORDER | SWT.MULTI | SWT.WRAP );\n    props.setLook( txtArguments );\n    txtArguments.setLayoutData( fd( fa( 0, 15 ), fa( lblArguments, 5 ), fa( 100, -15 ), fa( 100, -15 ) ) );\n    ControlSpaceKeyAdapter controlSpaceKeyAdapter = new ControlSpaceKeyAdapter( jobMeta, txtArguments, null, null );\n    txtArguments.addKeyListener( controlSpaceKeyAdapter );\n\n    Label txtArgumentsVar = new Label( tab, SWT.NONE );\n    txtArgumentsVar.setImage( GUIResource.getInstance().getImageVariable() );\n    txtArgumentsVar.setToolTipText( BaseMessages.getString( TextVar.class, \"TextVar.tooltip.InsertVariable\" ) );\n    props.setLook( txtArgumentsVar );\n    
FormData fdArgumentsVar = new FormData();\n    fdArgumentsVar.right = new FormAttachment( 100, -15 );\n    fdArgumentsVar.bottom = new FormAttachment( txtArguments, 0, SWT.TOP );\n    txtArgumentsVar.setLayoutData( fdArgumentsVar );\n  }\n\n  private void addOnOptionsTab( Composite tab ) {\n    tab.setLayout( new FormLayout() );\n\n    Label lblExecutorMemory = new Label( tab, SWT.NONE );\n    props.setLook( lblExecutorMemory );\n    lblExecutorMemory.setLayoutData( fd( fa( 0, 15 ), fa( 0, 15 ) ) );\n    lblExecutorMemory.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.MemoryAllocation.Executor.Label\" ) );\n\n    txtExecutorMemory = new TextVar( jobMeta, tab, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( txtExecutorMemory );\n    txtExecutorMemory.setLayoutData( fdwidth( 200, fa( 0, 15 ), fa( lblExecutorMemory, 5 ) ) );\n\n    Label lblDriverMemory = new Label( tab, SWT.NONE );\n    props.setLook( lblDriverMemory );\n    lblDriverMemory.setLayoutData( fd( fa( txtExecutorMemory, 50 ), fa( 0, 15 ) ) );\n    lblDriverMemory.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.MemoryAllocation.Driver.Label\" ) );\n\n    txtDriverMemory = new TextVar( jobMeta, tab, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( txtDriverMemory );\n    txtDriverMemory.setLayoutData( fdwidth( 200, fa( txtExecutorMemory, 50 ), fa( lblDriverMemory, 5 ) ) );\n\n    Label lblUtilityParameters = new Label( tab, SWT.NONE );\n    props.setLook( lblUtilityParameters );\n    lblUtilityParameters.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.UtilityParameters.Label\" ) );\n    lblUtilityParameters.setLayoutData( fd( fa( 0, 15 ), fa( txtExecutorMemory, 10 ) ) );\n\n    ColumnInfo[] columns =\n        new ColumnInfo[] { new ColumnInfo( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.NameColumn.Label\" ),\n            ColumnInfo.COLUMN_TYPE_TEXT ), new ColumnInfo( BaseMessages.getString( PKG,\n                
\"JobEntrySparkSubmit.ValueColumn.Label\" ), ColumnInfo.COLUMN_TYPE_TEXT ) };\n\n    tblUtilityParameters =\n        new TableView( jobEntry, tab, SWT.BORDER | SWT.FULL_SELECTION | SWT.SINGLE, columns, jobEntry.getConfigParams()\n            .size(), false, null, props, false );\n    props.setLook( tblUtilityParameters );\n    tblUtilityParameters.setLayoutData( fd( fa( 0, 15 ), fa( lblUtilityParameters, 5 ), fa( 100, -15 ), fa( 100,\n        -15 ) ) );\n    tblUtilityParameters.getTable().addListener( SWT.Resize, new ColumnsResizer( 0, 50, 50 ) );\n  }\n\n  private void addOnFilesTab( Composite tab ) {\n    tab.setLayout( new FormLayout() );\n\n    filesHeader = new Composite( tab, SWT.NONE );\n    props.setLook( filesHeader );\n    filesHeader.setLayoutData( fd( fa( 0, 15 ), fa( 0, 15 ), fa( 100, -15 ) ) );\n\n    if ( JobEntrySparkSubmit.JOB_TYPE_PYTHON.equals( jobEntry.getJobType() ) ) {\n      addOnFilesTabPython( filesHeader );\n    } else {\n      addOnFilesTabJavaScala( filesHeader );\n    }\n\n    Label lblSupportingDocs = new Label( tab, SWT.NONE );\n    props.setLook( lblSupportingDocs );\n    lblSupportingDocs.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.SupportingDocuments.Label\" ) );\n    lblSupportingDocs.setLayoutData( fd( fa( 0, 15 ), fa( filesHeader, 10 ) ) );\n\n    ColumnInfo[] columns =\n        new ColumnInfo[] { new ColumnInfo( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.EnvironmentColumn.Label\" ),\n            ColumnInfo.COLUMN_TYPE_CCOMBO ), new ColumnInfo( BaseMessages.getString( PKG,\n                \"JobEntrySparkSubmit.PathColumn.Label\" ), ColumnInfo.COLUMN_TYPE_TEXT_BUTTON ) };\n    columns[0].setComboValues( new String[] { LOCAL_ENVIRONMENT, STATIC_ENVIRONMENT } );\n    columns[0].setReadOnly( true );\n    columns[1].setUsingVariables( true );\n    columns[1].setTextVarButtonSelectionListener( pathSelection );\n\n    TextVarButtonRenderCallback callback = new TextVarButtonRenderCallback() {\n      public 
boolean shouldRenderButton() {\n        String\n            envType =\n            tblFilesSupportingDocs.getActiveTableItem().getText( tblFilesSupportingDocs.getActiveTableColumn() - 1 );\n        return !STATIC_ENVIRONMENT.equalsIgnoreCase( envType );\n      }\n    };\n\n    columns[1].setRenderTextVarButtonCallback( callback );\n\n    tblFilesSupportingDocs =\n        new TableView( jobEntry, tab, SWT.BORDER | SWT.FULL_SELECTION | SWT.SINGLE, columns, jobEntry\n            .getLibs().size(), false, null, props, false );\n    props.setLook( tblFilesSupportingDocs );\n    tblFilesSupportingDocs.setLayoutData( fd( fa( 0, 15 ), fa( lblSupportingDocs, 5 ), fa( 100, -15 ), fa( 100,\n        -15 ) ) );\n    tblFilesSupportingDocs.getTable().addListener( SWT.Resize, new ColumnsResizer( 0, 25, 75 ) );\n  }\n\n  private void addOnFilesTabJavaScala( Composite panel ) {\n    panel.setLayout( new FormLayout() );\n\n    Label lblClass = new Label( panel, SWT.NONE );\n    props.setLook( lblClass );\n    lblClass.setLayoutData( fd( fa( 0, 0 ), fa( 0, 0 ) ) );\n    lblClass.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Class.Label\" ) );\n\n    txtClass = new TextVar( jobMeta, panel, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( txtClass );\n    txtClass.setLayoutData( fdwidth( 300, fa( 0, 0 ), fa( lblClass, 5 ) ) );\n    txtClass.setText( Const.nullToEmpty( jobEntry.getClassName() ) );\n\n    Label lblApplicationJar = new Label( panel, SWT.NONE );\n    props.setLook( lblApplicationJar );\n    lblApplicationJar.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.Jar.Label\" ) );\n    lblApplicationJar.setLayoutData( fd( fa( 0, 0 ), fa( txtClass, 10 ) ) );\n\n    txtFilesApplicationJar = new TextVar( jobMeta, panel, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( txtFilesApplicationJar );\n    txtFilesApplicationJar.setLayoutData( fdwidth( 300, fa( 0, 0 ), fa( lblApplicationJar, 5 ) ) );\n    txtFilesApplicationJar.setText( 
Const.nullToEmpty( jobEntry.getJar() ) );\n\n    btnFilesApplicationJar = new Button( panel, SWT.PUSH );\n    props.setLook( btnFilesApplicationJar );\n    btnFilesApplicationJar.setText( BaseMessages.getString( PKG, \"System.Button.Browse\" ) );\n    btnFilesApplicationJar.setLayoutData( fd( fa( txtFilesApplicationJar, 10 ), fa( txtFilesApplicationJar, 0,\n        SWT.TOP ), null ) );\n    btnFilesApplicationJar.addSelectionListener( btnFilesApplicationJarListener );\n  }\n\n  private void addOnFilesTabPython( Composite panel ) {\n    panel.setLayout( new FormLayout() );\n\n    Label lblPyFile = new Label( panel, SWT.NONE );\n    props.setLook( lblPyFile );\n    lblPyFile.setText( BaseMessages.getString( PKG, \"JobEntrySparkSubmit.PyFile.Label\" ) );\n    lblPyFile.setLayoutData( fd( fa( 0, 0 ), fa( 0, 0 ) ) );\n\n    txtFilesPyFile = new TextVar( jobMeta, panel, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( txtFilesPyFile );\n    txtFilesPyFile.setLayoutData( fdwidth( 300, fa( 0, 0 ), fa( lblPyFile, 5 ) ) );\n    txtFilesPyFile.setText( Const.nullToEmpty( jobEntry.getPyFile() ) );\n\n    btnFilesPyFile = new Button( panel, SWT.PUSH );\n    props.setLook( btnFilesPyFile );\n    btnFilesPyFile.setText( BaseMessages.getString( PKG, \"System.Button.Browse\" ) );\n    btnFilesPyFile.setLayoutData( fd( fa( txtFilesPyFile, 10 ), fa( txtFilesPyFile, 0, SWT.TOP ), null ) );\n    btnFilesPyFile.addSelectionListener( btnFilesPyFileListener );\n  }\n\n  private SelectionAdapter btnSparkSubmitUtilityListener = new SelectionAdapter() {\n    public void widgetSelected( SelectionEvent e ) {\n      FileDialog dialog = new FileDialog( shell, SWT.OPEN );\n      dialog.setFilterExtensions( new String[] { \"*;*.*\" } );\n      dialog.setFilterNames( FILEFORMATS );\n\n      if ( txtSparkSubmitUtility.getText() != null ) {\n        dialog.setFileName( txtSparkSubmitUtility.getText() );\n      }\n\n      if ( dialog.open() != null ) {\n        txtSparkSubmitUtility.setText( 
dialog.getFilterPath() + Const.FILE_SEPARATOR + dialog.getFileName() );\n      }\n    }\n  };\n\n  private ModifyListener lsMod = new ModifyListener() {\n    public void modifyText( ModifyEvent e ) {\n      jobEntry.setChanged();\n    }\n  };\n\n  SelectionAdapter pathSelection = new SelectionAdapter() {\n    public void widgetSelected( SelectionEvent e ) {\n\n      FileObject selectedFile = null;\n\n      try {\n        // Get current file\n        FileObject rootFile = null;\n        FileObject initialFile = null;\n        FileObject defaultInitialFile = null;\n\n        String original =\n            tblFilesSupportingDocs.getActiveTableItem().getText( tblFilesSupportingDocs.getActiveTableColumn() );\n\n        if ( original != null ) {\n\n          String fileName = jobMeta.environmentSubstitute( original );\n\n          if ( fileName != null && !fileName.equals( \"\" ) ) {\n            try {\n              initialFile = KettleVFS.getInstance( jobMeta.getBowl() ).getFileObject( fileName );\n            } catch ( KettleException ex ) {\n              initialFile = KettleVFS.getInstance( jobMeta.getBowl() ).getFileObject( \"\" );\n            }\n            defaultInitialFile = KettleVFS.getInstance( jobMeta.getBowl() ).getFileObject( \"file:///c:/\" );\n            rootFile = initialFile.getFileSystem().getRoot();\n          } else {\n            defaultInitialFile = KettleVFS.getInstance( jobMeta.getBowl() )\n              .getFileObject( Spoon.getInstance().getLastFileOpened() );\n          }\n        }\n\n        if ( rootFile == null ) {\n          rootFile = defaultInitialFile.getFileSystem().getRoot();\n          initialFile = defaultInitialFile;\n        }\n        VfsFileChooserDialog fileChooserDialog = Spoon.getInstance().getVfsFileChooserDialog( rootFile, initialFile );\n        fileChooserDialog.defaultInitialFile = defaultInitialFile;\n\n        selectedFile =\n            fileChooserDialog.open( shell, new String[] { \"file\" }, \"file\", true, 
null, new String[] { \"*.*\" },\n                FILETYPES, true, VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE_OR_DIRECTORY, false, false );\n\n        if ( selectedFile != null ) {\n          String url = selectedFile.getURL().toString();\n          tblFilesSupportingDocs.getActiveTableItem().setText( tblFilesSupportingDocs.getActiveTableColumn(), url );\n        }\n\n      } catch ( KettleFileException ex ) {\n        // Ignore\n      } catch ( FileSystemException ex ) {\n        // Ignore\n      }\n    }\n  };\n\n  private SelectionAdapter btnFilesApplicationJarListener = new SelectionAdapter() {\n    public void widgetSelected( SelectionEvent e ) {\n      FileDialog dialog = new FileDialog( shell, SWT.OPEN );\n      dialog.setFilterExtensions( new String[] { \"*;*.*\" } );\n      dialog.setFilterNames( FILEFORMATS );\n\n      if ( txtFilesApplicationJar.getText() != null ) {\n        dialog.setFileName( txtFilesApplicationJar.getText() );\n      }\n\n      if ( dialog.open() != null ) {\n        txtFilesApplicationJar.setText( dialog.getFilterPath() + Const.FILE_SEPARATOR + dialog.getFileName() );\n      }\n    }\n  };\n\n  private SelectionAdapter btnFilesPyFileListener = new SelectionAdapter() {\n    public void widgetSelected( SelectionEvent e ) {\n      FileDialog dialog = new FileDialog( shell, SWT.OPEN );\n      dialog.setFilterExtensions( new String[] { \"*;*.*\" } );\n      dialog.setFilterNames( FILEFORMATS );\n\n      if ( txtFilesPyFile.getText() != null ) {\n        dialog.setFileName( txtFilesPyFile.getText() );\n      }\n\n      if ( dialog.open() != null ) {\n        txtFilesPyFile.setText( dialog.getFilterPath() + Const.FILE_SEPARATOR + dialog.getFileName() );\n      }\n    }\n  };\n\n  private SelectionAdapter typeSelectionListener = new SelectionAdapter() {\n    @Override\n    public void widgetSelected( SelectionEvent event ) {\n      String text = cmbType.getText();\n\n      if ( Const.isEmpty( text ) ) {\n        return;\n      }\n\n      for 
( Control c : filesHeader.getChildren() ) {\n        c.dispose();\n      }\n      if ( JobEntrySparkSubmit.JOB_TYPE_PYTHON.equals( text ) ) {\n        addOnFilesTabPython( filesHeader );\n        jobEntry.setPyFile( txtFilesPyFile.getText() );\n        jobEntry.setJobType( JobEntrySparkSubmit.JOB_TYPE_PYTHON );\n      } else {\n        addOnFilesTabJavaScala( filesHeader );\n        jobEntry.setJar( txtFilesApplicationJar.getText() );\n        jobEntry.setClassName( txtClass.getText() );\n        jobEntry.setJobType( JobEntrySparkSubmit.JOB_TYPE_JAVA_SCALA );\n      }\n      tabFilesComposite.layout( true );\n      filesHeader.layout( true );\n    }\n  };\n\n  public void dispose() {\n    WindowProperty winprop = new WindowProperty( shell );\n    props.setScreen( winprop );\n    shell.dispose();\n  }\n\n  private void cancel() {\n    jobEntry.setChanged( backupChanged );\n    jobEntry = null;\n    dispose();\n  }\n\n  public void getData() {\n    txtEntryName.setText( Const.nullToEmpty( jobEntry.getName() ) );\n    txtSparkSubmitUtility.setText( Const.nullToEmpty( jobEntry.getScriptPath() ) );\n\n    for ( String url : MASTER_URLS ) {\n      cmbMasterURL.add( url );\n    }\n    cmbMasterURL.setText( Const.nullToEmpty( jobEntry.getMaster() ) );\n\n    cmbType.add( JobEntrySparkSubmit.JOB_TYPE_JAVA_SCALA );\n    cmbType.add( JobEntrySparkSubmit.JOB_TYPE_PYTHON );\n\n    if ( JobEntrySparkSubmit.JOB_TYPE_PYTHON.equals( jobEntry.getJobType() ) ) {\n      cmbType.select( 1 );\n    } else {\n      cmbType.select( 0 );\n    }\n\n    txtArguments.setText( Const.nullToEmpty( jobEntry.getArgs() ) );\n    chkEnableBlocking.setSelection( jobEntry.isBlockExecution() );\n\n    List<String> params = jobEntry.getConfigParams();\n    for ( int i = 0; i < params.size(); i++ ) {\n      TableItem ti = tblUtilityParameters.table.getItem( i );\n      String[] nameValue = params.get( i ).split( \"=\", 2 );\n      ti.setText( 1, nameValue[0] );\n      ti.setText( 2, nameValue[1] );\n    
}\n    tblUtilityParameters.setRowNums();\n    tblUtilityParameters.optWidth( true );\n\n    Map<String, String> docs = jobEntry.getLibs();\n\n    int i = 0;\n    for ( String path : docs.keySet() ) {\n      TableItem ti = tblFilesSupportingDocs.table.getItem( i++ );\n      ti.setText( 1, docs.get( path ) );\n      ti.setText( 2, path );\n    }\n\n    tblFilesSupportingDocs.setRowNums();\n    tblFilesSupportingDocs.optWidth( true );\n\n    txtExecutorMemory.setText( Const.nullToEmpty( jobEntry.getExecutorMemory() ) );\n    txtDriverMemory.setText( Const.nullToEmpty( jobEntry.getDriverMemory() ) );\n\n    txtEntryName.selectAll();\n    txtEntryName.setFocus();\n  }\n\n  protected void ok() {\n    if ( Const.isEmpty( txtEntryName.getText() ) ) {\n      MessageBox mb = new MessageBox( shell, SWT.OK | SWT.ICON_ERROR );\n      mb.setText( BaseMessages.getString( PKG, \"System.StepJobEntryNameMissing.Title\" ) );\n      mb.setMessage( BaseMessages.getString( PKG, \"System.JobEntryNameMissing.Msg\" ) );\n      mb.open();\n      return;\n    }\n    jobEntry.setName( txtEntryName.getText() );\n    jobEntry.setScriptPath( txtSparkSubmitUtility.getText() );\n    jobEntry.setMaster( cmbMasterURL.getText() );\n    switch ( jobEntry.getJobType() ) {\n      case JobEntrySparkSubmit.JOB_TYPE_JAVA_SCALA: {\n        jobEntry.setJar( txtFilesApplicationJar.getText() );\n        jobEntry.setClassName( txtClass.getText() );\n        jobEntry.setPyFile( null );\n\n        break;\n      }\n      case JobEntrySparkSubmit.JOB_TYPE_PYTHON: {\n        jobEntry.setPyFile( txtFilesPyFile.getText() );\n        jobEntry.setJar( null );\n        jobEntry.setClassName( null );\n\n        break;\n      }\n    }\n    jobEntry.setArgs( txtArguments.getText() );\n    jobEntry.setBlockExecution( chkEnableBlocking.getSelection() );\n\n    List<String> configParams = new ArrayList<String>( this.tblUtilityParameters.getItemCount() );\n    for ( int i = 0; i < this.tblUtilityParameters.getItemCount(); i++ 
) {\n      String[] item = this.tblUtilityParameters.getItem( i );\n      if ( !Const.isEmpty( item[0] ) && !Const.isEmpty( item[1] ) ) {\n        configParams.add( item[0].trim() + \"=\" + item[1].trim() );\n      }\n    }\n    jobEntry.setConfigParams( configParams );\n\n    Map<String, String> supportingDocuments = new LinkedHashMap<>();\n    for ( int i = 0; i < this.tblFilesSupportingDocs.getItemCount(); i++ ) {\n      String[] item = this.tblFilesSupportingDocs.getItem( i );\n      if ( !Const.isEmpty( item[0] ) && !Const.isEmpty( item[1] ) ) {\n        supportingDocuments.put( item[1].trim(), item[0].trim() );\n      }\n    }\n    jobEntry.setLibs( supportingDocuments );\n\n    jobEntry.setDriverMemory( txtDriverMemory.getText() );\n    jobEntry.setExecutorMemory( txtExecutorMemory.getText() );\n\n    dispose();\n  }\n\n  private FormData fd( FormAttachment... att ) {\n    FormData fd = new FormData();\n    if ( att.length >= 1 ) {\n      fd.left = att[0];\n    }\n    if ( att.length >= 2 ) {\n      fd.top = att[1];\n    }\n    if ( att.length >= 3 ) {\n      fd.right = att[2];\n    }\n    if ( att.length >= 4 ) {\n      fd.bottom = att[3];\n    }\n    return fd;\n  }\n\n  private FormData fdwidth( int width, FormAttachment... att ) {\n    FormData fd = fd( att );\n    fd.width = width;\n    return fd;\n  }\n\n  private FormAttachment fa( int numerator, int offset ) {\n    return new FormAttachment( numerator, offset );\n  }\n\n  private FormAttachment fa( Control control, int offset ) {\n    return new FormAttachment( control, offset );\n  }\n\n  private FormAttachment fa( Control control, int offset, int alignment ) {\n    return new FormAttachment( control, offset, alignment );\n  }\n\n  public class ColumnsResizer implements Listener {\n    private int[] weights;\n\n    public ColumnsResizer( int... 
weights ) {\n      this.weights = weights;\n    }\n\n    @Override\n    public void handleEvent( Event event ) {\n      Table table = (Table) event.widget;\n      float width = table.getSize().x - 2;\n      TableColumn[] columns = table.getColumns();\n\n      int f = 0;\n      for ( int w : weights ) {\n        f += w;\n      }\n      for ( int i = 0; i < weights.length; i++ ) {\n        int cw = weights[i] == 0 ? 0 : Math.round( width / f * weights[i] );\n        width -= cw + 1;\n        columns[i].setWidth( cw );\n        f -= weights[i];\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/spark/core/src/main/resources/org/pentaho/di/job/entries/spark/messages/messages_en_US.properties",
    "content": "# Dialog\nJobEntrySparkSubmit.Title=Spark submit\nJobEntrySparkSubmit.Name.Label=Entry Name:\nJobEntrySparkSubmit.ScriptPath.Label=Spark Submit Utility:\nJobEntrySparkSubmit.SparkMaster.Label=Master URL:\nJobEntrySparkSubmit.Class.Label=Class:\nJobEntrySparkSubmit.Jar.Label=Application Jar:\nJobEntrySparkSubmit.Args.Label=Arguments:\nJobEntrySparkSubmit.UtilityParameters.Label=Utility Parameters:\nJobEntrySparkSubmit.BlockExecution.Label=Enable Blocking\nJobEntrySparkSubmit.Type.Label=Type:\nJobEntrySparkSubmit.SupportingDocuments.Label=Dependencies:\nJobEntrySparkSubmit.PyFile.Label=Py File:\nJobEntrySparkSubmit.Files.TabLabel=Files\nJobEntrySparkSubmit.Arguments.TabLabel=Arguments\nJobEntrySparkSubmit.Options.TabLabel=Options\n\nJobEntrySparkSubmit.NameColumn.Label=Name\nJobEntrySparkSubmit.ValueColumn.Label=Value\nJobEntrySparkSubmit.EnvironmentColumn.Label=Environment\nJobEntrySparkSubmit.PathColumn.Label=Path\n\nJobEntrySparkSubmit.Fileformat.All=All\n\nJobEntrySparkSubmit.MemoryAllocation.Executor.Label=Executor Memory:\nJobEntrySparkSubmit.MemoryAllocation.Driver.Label=Driver Memory:\n\n\nJobEntrySparkSubmit.Description=Submits a JAR and class to Spark to be executed\nJobEntrySparkSubmit.ExitStatus=Exit status: {0}\nJobEntrySparkSubmit.JobSetupTab.Label=Job Setup\nJobEntrySparkSubmit.ParametersTab.Label=Parameters\nJobEntrySparkSubmit.MemoryAllocation.Label=Memory Allocation\n\n# Error messages\nJobEntrySparkSubmit.Error.SubmittingScript=Could not submit Spark task: {0}\nJobEntrySparkSubmit.Error.SparkSubmitPathInvalid=Path to spark-submit is invalid.\nJobEntrySparkSubmit.Error.MasterURLEmpty=Master URL is empty.\nJobEntrySparkSubmit.Error.JarPathEmpty=Path to application jar is empty.\nJobEntrySparkSubmit.Error.PyFilePathEmpty=Path to python file is empty.\nJobEntrySparkSubmit.Error.KillWindowsChildProcess=Could not kill child process"
  },
  {
    "path": "kettle-plugins/spark/core/src/test/java/org/pentaho/di/job/entries/spark/JobEntrySparkSubmitLoadSaveTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.job.entries.spark;\n\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport org.pentaho.di.job.entry.loadSave.JobEntryLoadSaveTestSupport;\nimport org.pentaho.di.trans.steps.loadsave.validator.FieldLoadSaveValidator;\nimport org.pentaho.di.trans.steps.loadsave.validator.MapLoadSaveValidator;\nimport org.pentaho.di.trans.steps.loadsave.validator.StringLoadSaveValidator;\n\nimport static java.util.Arrays.asList;\n\npublic class JobEntrySparkSubmitLoadSaveTest extends JobEntryLoadSaveTestSupport<JobEntrySparkSubmit> {\n  @Override\n  protected Class<JobEntrySparkSubmit> getJobEntryClass() {\n    return JobEntrySparkSubmit.class;\n  }\n\n  @Override\n  protected Map<String, FieldLoadSaveValidator<?>> createTypeValidatorsMap() {\n    Map<String, FieldLoadSaveValidator<?>> validators = new HashMap<>();\n\n    validators.put( \"java.util.Map<java.lang.String,java.lang.String>\", new MapLoadSaveValidator<>(\n        new StringLoadSaveValidator(), new StringLoadSaveValidator() ) );\n    return validators;\n  }\n  @Override\n  protected List<String> listCommonAttributes() {\n    return asList( \"scriptPath\", \"master\", \"jar\", \"className\", \"args\", \"configParams\", \"driverMemory\",\n        \"executorMemory\", \"blockExecution\", \"jobType\", \"pyFile\", \"libs\" );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/spark/core/src/test/java/org/pentaho/di/job/entries/spark/JobEntrySparkSubmitTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.job.entries.spark;\n\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.LinkedHashMap;\nimport java.util.List;\nimport java.util.Map;\nimport org.junit.Assert;\nimport org.junit.Test;\nimport org.pentaho.di.core.CheckResultInterface;\n\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.Mockito.doNothing;\nimport static org.mockito.Mockito.spy;\nimport static org.pentaho.di.job.entries.spark.JobEntrySparkSubmit.JOB_TYPE_JAVA_SCALA;\nimport static org.pentaho.di.job.entries.spark.JobEntrySparkSubmit.JOB_TYPE_PYTHON;\nimport org.pentaho.di.job.JobMeta;\n\npublic class JobEntrySparkSubmitTest {\n  @Test\n  public void testGetCmds() throws IOException {\n    JobEntrySparkSubmit ss = new JobEntrySparkSubmit();\n    ss.setScriptPath( \"scriptPath\" );\n    ss.setMaster( \"master_url\" );\n    ss.setJobType( JOB_TYPE_JAVA_SCALA );\n    ss.setJar( \"jar_path\" );\n    ss.setArgs( \"arg1 arg2\" );\n    ss.setClassName( \"class_name\" );\n    ss.setDriverMemory( \"driverMemory\" );\n    ss.setExecutorMemory( \"executorMemory\" );\n    ss.setParentJobMeta( new JobMeta() );\n\n    List<String> configParams = new ArrayList<String>();\n    configParams.add( \"name1=value1\" );\n    configParams.add( \"name2=value 2\" );\n    ss.setConfigParams( configParams );\n\n    Map<String, String> libs = new LinkedHashMap<>();\n    libs.put(\"file:///path/to/lib1\", \"Local\");\n    libs.put(\"/path/to/lib2\", \"<Static>\");\n    ss.setLibs( libs );\n\n    String[] expected = 
new String[] { \"scriptPath\", \"--master\", \"master_url\", \"--conf\", \"name1=value1\", \"--conf\",\n        \"name2=value 2\", \"--driver-memory\", \"driverMemory\", \"--executor-memory\",\n        \"executorMemory\", \"--class\", \"class_name\", \"--jars\", \"file:///path/to/lib1,/path/to/lib2\",  \"jar_path\", \"arg1\", \"arg2\" };\n    Assert.assertArrayEquals( expected, ss.getCmds().toArray() );\n\n    ss.setJobType( JOB_TYPE_PYTHON );\n    ss.setPyFile( \"pyFile-path\" );\n    expected = new String[] { \"scriptPath\", \"--master\", \"master_url\", \"--conf\", \"name1=value1\", \"--conf\",\n        \"name2=value 2\", \"--driver-memory\", \"driverMemory\", \"--executor-memory\",\n        \"executorMemory\", \"--py-files\", \"file:///path/to/lib1,/path/to/lib2\",  \"pyFile-path\", \"arg1\", \"arg2\" };\n    Assert.assertArrayEquals( expected, ss.getCmds().toArray() );\n  }\n\n  @Test\n  public void testValidate () {\n    JobEntrySparkSubmit ss = spy( new JobEntrySparkSubmit() );\n    doNothing().when( ss ).logError( anyString() );\n    Assert.assertFalse( ss.validate() );\n    // Use working dir which exists\n    ss.setScriptPath( \".\" );\n    ss.setMaster( \"\" );\n    Assert.assertFalse( ss.validate() );\n    ss.setMaster( \"master-url\" );\n    Assert.assertFalse( \"Jar path\", ss.validate() );\n    ss.setJobType( JOB_TYPE_JAVA_SCALA );\n    Assert.assertFalse( \"Jar path should not be empty\", ss.validate() );\n    ss.setJar( \"jar-path\" );\n    Assert.assertTrue( \"Validation should pass\", ss.validate() );\n    ss.setJobType( JobEntrySparkSubmit.JOB_TYPE_PYTHON );\n    Assert.assertFalse( \"Pyfile path should not be empty\", ss.validate() );\n    ss.setPyFile( \"pyfile-path\" );\n    Assert.assertTrue( \"Validation should pass\", ss.validate() );\n  }\n\n  @Test\n  public void testArgsParsing() throws IOException {\n    JobEntrySparkSubmit ss = new JobEntrySparkSubmit();\n    ss.setArgs( \"${VAR1} \\\"double quoted string\\\" 'single quoted string'\" 
);\n    ss.setVariable( \"VAR1\", \"VAR_VALUE\" );\n    List<String> cmds = ss.getCmds();\n    Assert.assertTrue( cmds.containsAll( Arrays.asList( \"VAR_VALUE\", \"double quoted string\", \"single quoted string\" ) ) );\n    Assert.assertFalse( cmds.contains( \"${VAR1}\" ) );\n  }\n\n  @Test\n  public void testCheck() {\n    JobEntrySparkSubmit je = new JobEntrySparkSubmit( \"SparkSubmit\" );\n    je.setJobType( JOB_TYPE_JAVA_SCALA );\n    List<CheckResultInterface> remarks = new ArrayList<>();\n    je.setMaster( \"\" );\n    je.check( remarks, new JobMeta(), null, null, null );\n    Assert.assertEquals( \"Number of remarks should be 4\", 4, remarks.size() );\n\n    int errors = 0;\n    for ( CheckResultInterface remark : remarks ) {\n      if ( remark.getType() == CheckResultInterface.TYPE_RESULT_ERROR ) {\n        errors++;\n      }\n    }\n    Assert.assertEquals( \"Number of errors should be 4\", 4, errors );\n\n    remarks.clear();\n    je.setJobType( JobEntrySparkSubmit.JOB_TYPE_PYTHON );\n    je.check( remarks, new JobMeta(), null, null, null );\n    Assert.assertEquals( \"Number of remarks should be 4\", 4, remarks.size() );\n\n    errors = 0;\n    for ( CheckResultInterface remark : remarks ) {\n      if ( remark.getType() == CheckResultInterface.TYPE_RESULT_ERROR ) {\n        errors++;\n      }\n    }\n    Assert.assertEquals( \"Number of errors should be 4\", 4, errors );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/spark/core/src/test/java/org/pentaho/di/job/entries/spark/PatternMatchingStreamLoggerTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.job.entries.spark;\n\nimport static org.mockito.Mockito.mock;\n\nimport java.io.ByteArrayInputStream;\nimport java.io.InputStream;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.Future;\nimport java.util.concurrent.TimeUnit;\nimport java.util.concurrent.TimeoutException;\nimport java.util.concurrent.atomic.AtomicBoolean;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.util.Assert;\n\npublic class PatternMatchingStreamLoggerTest {\n  private static final String log = \"Line1\\nSome other line\\nOne more line\\n\";\n  private static final String[] matchingPatterns = new String[] { \"other\" };\n  private static final String[] nonMatchingPatterns = new String[] { \"non matching pattern\" };\n  private InputStream input;\n  private AtomicBoolean stop;\n\n  @Before\n  public void setUp() {\n    input = new ByteArrayInputStream( log.getBytes() );\n    stop = new AtomicBoolean( false );\n  }\n\n  private PatternMatchingStreamLogger createTestee( String[] patterns, final AtomicBoolean listenerNotified ) {\n    PatternMatchingStreamLogger testee = new PatternMatchingStreamLogger( mock( LogChannelInterface.class ), input, patterns, stop );\n    testee.addPatternMatchedListener( new PatternMatchingStreamLogger.PatternMatchedListener() {\n      @Override public void onPatternFound( String pattern ) {\n        listenerNotified.set( true );\n 
     }\n    } );\n\n    return testee;\n  }\n\n  private void doTest( String[] patterns, boolean listenerShouldBeNotified )\n      throws InterruptedException, TimeoutException, ExecutionException {\n    AtomicBoolean listenerNotified = new AtomicBoolean( false );\n\n    ExecutorService executor = Executors.newSingleThreadExecutor();\n    Future f = executor.submit( createTestee( patterns, listenerNotified ), true );\n\n    Assert.assertTrue( (Boolean) f.get( 1, TimeUnit.SECONDS ) );\n    Assert.assertTrue( listenerNotified.get() == listenerShouldBeNotified );\n\n    executor.shutdown();\n    executor.awaitTermination( 1, TimeUnit.SECONDS );\n    Assert.assertTrue( executor.isTerminated() );\n  }\n\n  @Test\n  public void positiveTest() throws InterruptedException, TimeoutException, ExecutionException {\n    doTest( matchingPatterns, true );\n  }\n\n  @Test\n  public void negativeTest() throws InterruptedException, TimeoutException, ExecutionException {\n    doTest( nonMatchingPatterns, false );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/spark/core/src/test/java/org/pentaho/di/job/entries/spark/WinProcessTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.job.entries.spark;\n\nimport com.sun.jna.Platform;\nimport org.junit.Assert;\nimport org.junit.Assume;\nimport org.junit.BeforeClass;\nimport org.junit.Test;\n\nimport java.io.BufferedReader;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.InputStreamReader;\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.concurrent.atomic.AtomicBoolean;\n\n\npublic class WinProcessTest {\n\n  private static String processCmdResource;\n  private static String javaFileResource;\n  private static String classPath;\n  private static String childProcessClassName = \"ChildProcessTester\";\n\n  @BeforeClass\n  public static void setUp() {\n    Assume.assumeTrue( Platform.isWindows() );\n\n    processCmdResource =\n      Thread.currentThread().getContextClassLoader().getResource( \"process.cmd\" ).getPath().toString().substring( 1 );\n    javaFileResource =\n      Thread.currentThread().getContextClassLoader().getResource( childProcessClassName + \".java\" ).getPath().toString()\n        .substring( 1 );\n    int index = javaFileResource.lastIndexOf( '/' );\n    classPath = javaFileResource.substring( 0, index );\n  }\n\n  @Test\n  public void getPIDWhenProcessIsNull() {\n    int pid = WinProcess.getPID( null );\n    Assert.assertEquals( -1, pid );\n  }\n\n  @Test\n  public void getPIDWhenProcessExists() {\n    int pid = -1;\n    Process process = null;\n    List<String> cmds = new ArrayList<>();\n    cmds.add( \"cmd.exe\" );\n    ProcessBuilder processBuilder = new ProcessBuilder( cmds );\n    try {\n     
 process = processBuilder.start();\n      pid = WinProcess.getPID( process );\n    } catch ( IOException e ) {\n      e.printStackTrace();\n    } finally {\n      process.destroy();\n    }\n\n    Assert.assertTrue( pid > 0 );\n    Assert.assertFalse( process.isAlive() );\n  }\n\n  @Test\n  public void createWinProcessWrapperAndKillProcess() {\n    int pid;\n    Process process = null;\n    WinProcess winProcess = null;\n    List<String> cmds = new ArrayList<>();\n    cmds.add( \"cmd.exe\" );\n    ProcessBuilder processBuilder = new ProcessBuilder( cmds );\n    try {\n      process = processBuilder.start();\n      winProcess = new WinProcess( WinProcess.getPID( process ) );\n      pid = winProcess.getWinProcessPID();\n\n      Assert.assertNotNull( winProcess );\n      Assert.assertTrue( pid > 0 );\n\n    } catch ( IOException e ) {\n      e.printStackTrace();\n    } finally {\n      winProcess.terminate();\n    }\n\n    Assert.assertFalse( process.isAlive() );\n  }\n\n  @Test\n  public void tryToKillChildProcessIfItNotExists() {\n    String childProcessPIDs = \" \";\n    Process process = null;\n    WinProcess winProcess = null;\n    List<String> cmds = new ArrayList<>();\n    cmds.add( \"cmd.exe\" );\n    ProcessBuilder processBuilder = new ProcessBuilder( cmds );\n    try {\n      process = processBuilder.start();\n      winProcess = new WinProcess( WinProcess.getPID( process ) );\n      childProcessPIDs = winProcess.killChildProcesses();\n    } catch ( IOException e ) {\n      e.printStackTrace();\n    } finally {\n      process.destroy();\n    }\n    Assert.assertEquals( childProcessPIDs, \"\" );\n    Assert.assertTrue( childProcessPIDs.isEmpty() );\n    Assert.assertFalse( process.isAlive() );\n  }\n\n  @Test\n  public void killExistingChildProcess() {\n\n    int parentPID;\n    boolean isChildProcessExists;\n    WinProcess winProcess;\n    String childProcessPIDs = \" \";\n    String pattern = \"child process started\";\n\n    List<String> cmds = new 
ArrayList<>();\n\n    cmds.add( \"cmd.exe\" );\n    cmds.add( \"/c\" );\n    cmds.add( processCmdResource );\n    cmds.add( javaFileResource );\n    cmds.add( classPath );\n    cmds.add( childProcessClassName );\n\n    ProcessBuilder builder = new ProcessBuilder( cmds );\n    try {\n      Process proc = builder.start();\n\n      InputStream inStream = proc.getInputStream();\n      InputStream errStream = proc.getErrorStream();\n\n      Thread inStreamReader = new Thread(\n        () -> {\n          try {\n            String line = null;\n            BufferedReader in = new BufferedReader( new InputStreamReader( inStream ) );\n            while ( ( line = in.readLine() ) != null ) {\n              System.out.println( line );\n              if ( line.contains( pattern ) ) {\n                proc.destroy();\n              }\n            }\n          } catch ( IOException e ) {\n            e.printStackTrace();\n          }\n        } );\n\n      inStreamReader.start();\n\n      Thread errStreamReader = new Thread(\n        () -> {\n          try {\n            String line = null;\n            BufferedReader in = new BufferedReader( new InputStreamReader( errStream ) );\n            while ( ( line = in.readLine() ) != null ) {\n              System.out.println( line );\n              if ( line.contains( pattern ) ) {\n                proc.destroy();\n              }\n            }\n          } catch ( IOException e ) {\n            e.printStackTrace();\n          }\n        } );\n\n      errStreamReader.start();\n\n      proc.waitFor();\n\n      parentPID = WinProcess.getPID( proc );\n      winProcess = new WinProcess( parentPID );\n      childProcessPIDs = winProcess.killChildProcesses();\n      isChildProcessExists = isChildProcessAlive( parentPID, childProcessPIDs );\n\n      if ( isChildProcessExists ) {\n        winProcess = new WinProcess( new Integer( childProcessPIDs ) );\n        winProcess.terminate();\n      }\n\n      Assert.assertNotEquals( 
childProcessPIDs, \"\" );\n      Assert.assertTrue( new Integer( childProcessPIDs ) > 0 );\n      Assert.assertFalse( childProcessPIDs.isEmpty() );\n      Assert.assertFalse( isChildProcessExists );\n\n      proc.getErrorStream().close();\n      proc.getInputStream().close();\n\n    } catch ( IOException e ) {\n      e.printStackTrace();\n    } catch ( InterruptedException e ) {\n      e.printStackTrace();\n    }\n  }\n\n  private static boolean isChildProcessAlive( Integer parentPID, String childPID ) {\n\n    List<String> cmds = new ArrayList<>();\n    final AtomicBoolean childProcessExists = new AtomicBoolean( false );\n\n    cmds.add( \"wmic.exe\" );\n    cmds.add( \"process\" );\n    cmds.add( \"where\" );\n    cmds.add( \"parentProcessId=\" + parentPID );\n    cmds.add( \"get\" );\n    cmds.add( \"processId\" );\n\n    ProcessBuilder builder = new ProcessBuilder( cmds );\n\n    try {\n      Process proc = builder.start();\n\n      InputStream inStream = proc.getInputStream();\n      InputStream errStream = proc.getErrorStream();\n\n      Thread inStreamReader = new Thread(\n        () -> {\n          try {\n            String line = null;\n            BufferedReader in = new BufferedReader( new InputStreamReader( inStream ) );\n            while ( ( line = in.readLine() ) != null ) {\n              System.out.println( line );\n              if ( line.contains( childPID ) ) {\n                childProcessExists.set( true );\n                proc.destroy();\n              }\n            }\n          } catch ( IOException e ) {\n            e.printStackTrace();\n          }\n        } );\n\n      inStreamReader.start();\n\n      Thread errStreamReader = new Thread(\n        () -> {\n          try {\n            String line = null;\n            BufferedReader in = new BufferedReader( new InputStreamReader( errStream ) );\n            while ( ( line = in.readLine() ) != null ) {\n              System.out.println( line );\n              if ( line.contains( childPID 
) ) {\n                childProcessExists.set( true );\n                proc.destroy();\n              }\n            }\n          } catch ( IOException e ) {\n            e.printStackTrace();\n          }\n        } );\n\n      errStreamReader.start();\n\n      proc.waitFor();\n\n      proc.getErrorStream().close();\n      proc.getInputStream().close();\n\n    } catch ( IOException e ) {\n      e.printStackTrace();\n    } catch ( InterruptedException e ) {\n      e.printStackTrace();\n    }\n    return childProcessExists.get();\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/spark/core/src/test/resources/ChildProcessTester.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npublic class ChildProcessTester implements Runnable {\n\n    private String pattern = \"child process started\";\n\n    @Override public void run() {\n        for ( int i = 0; i < 1000; i++ ) {\n            System.out.println( Thread.currentThread().getName() + \" \" + i );\n            System.out.println( pattern );\n            try {\n                Thread.sleep( 10000 );\n            } catch ( Exception e ) {\n                e.printStackTrace();\n            }\n        }\n    }\n\n    public static void main( String[] args ) {\n        Thread thread = new Thread( new ChildProcessTester() );\n        thread.start();\n    }\n}\n"
  },
  {
    "path": "kettle-plugins/spark/core/src/test/resources/process.cmd",
    "content": "javac %1\njava -cp %2 %3"
  },
  {
    "path": "kettle-plugins/spark/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pentaho-big-data-kettle-plugins-spark</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n\n  <modules>\n    <module>assemblies</module>\n    <module>core</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/sqoop/assemblies/plugin/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>sqoop-assemblies</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pdi-sqoop-plugin</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Sqoop Plugin Distribution</name>\n\n  <properties>\n    <resources.directory>${project.basedir}/src/main/resources</resources.directory>\n    <assembly.dir>${project.build.directory}/assembly</assembly.dir>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-sqoop-core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/sqoop/assemblies/plugin/src/assembly/assembly.xml",
    "content": "<assembly xmlns=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3\"\n          xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n          xsi:schemaLocation=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd\">\n  <id>zip</id>\n  <formats>\n    <format>zip</format>\n  </formats>\n\n  <baseDirectory></baseDirectory>\n\n  <fileSets>\n    <fileSet>\n      <directory>${resources.directory}</directory>\n      <outputDirectory>.</outputDirectory>\n      <filtered>true</filtered>\n    </fileSet>\n\n    <!-- the staging dir -->\n    <fileSet>\n      <directory>${assembly.dir}</directory>\n      <outputDirectory>.</outputDirectory>\n    </fileSet>\n  </fileSets>\n\n  <dependencySets>\n    <dependencySet>\n      <outputDirectory>.</outputDirectory>\n      <includes>\n        <include>pentaho:pdi-sqoop-core:jar</include>\n      </includes>\n      <useProjectArtifact>false</useProjectArtifact>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <outputDirectory>.</outputDirectory>\n      <useTransitiveDependencies>false</useTransitiveDependencies>\n      <useProjectArtifact>false</useProjectArtifact>\n      <includes>\n        <include>pentaho:pdi-sqoop-core:jar</include>\n      </includes>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <useProjectArtifact>false</useProjectArtifact>\n      <useTransitiveDependencies>false</useTransitiveDependencies>\n      <outputDirectory>lib</outputDirectory>\n      <excludes>\n        <exclude>pentaho:pdi-sqoop-core:*</exclude>\n      </excludes>\n    </dependencySet>\n  </dependencySets>\n</assembly>"
  },
  {
    "path": "kettle-plugins/sqoop/assemblies/plugin/src/main/resources/version.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<version branch='TRUNK'>${project.version}</version>"
  },
  {
    "path": "kettle-plugins/sqoop/assemblies/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-sqoop</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>sqoop-assemblies</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Sqoop Plugin Assemblies</name>\n\n  <modules>\n    <module>plugin</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-kettle-plugins-sqoop</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pdi-sqoop-core</artifactId>\n  <name>PDI Sqoop Core</name>\n  <properties>\n    <publish-sonar-phase>site</publish-sonar-phase>\n    <platform.version>11.1.0.0-SNAPSHOT</platform.version>\n  </properties>\n\n  <!-- VERIFY THESE IMPORTS THAT WERE IN THE BUILD SECTION WHEN THE PLUGIN WAS OSGI. ARE THEY NEEDED?\n   <Import-Package>org.eclipse.swt*;resolution:=optional,org.pentaho.di.ui.xul*;resolution:=optional,org.pentaho.ui.xul*;resolution:=optional,org.pentaho.di.osgi,org.pentaho.di.core.plugins,org.pentaho.hadoop.shim.api.cluster,*</Import-Package>\n   -->\n  <build>\n    <resources>\n      <resource>\n        <directory>src/main/resources</directory>\n        <filtering>false</filtering>\n      </resource>\n      <resource>\n        <directory>src/main/resources-filtered</directory>\n        <filtering>true</filtering>\n      </resource>\n    </resources>\n  </build>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api-core</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-common-ui</artifactId>\n      <version>${project.version}</version>\n      
<scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>pentaho-hadoop-shims-common-services-api</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-kettle-plugins-common-job</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-ui-swt</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.logging.log4j</groupId>\n      <artifactId>log4j-core</artifactId>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-hdfs-core</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      
<groupId>pentaho</groupId>\n      <artifactId>pentaho-platform-core</artifactId>\n      <version>${platform.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-impl-cluster</artifactId>\n      <version>${project.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy-core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/AbstractSqoopJobEntry.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport com.google.common.base.Strings;\nimport com.google.common.base.Throwables;\nimport com.google.common.collect.Maps;\nimport org.apache.logging.log4j.Level;\nimport org.apache.logging.log4j.Logger;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.ThreadContext;\nimport org.apache.logging.log4j.core.Appender;\nimport org.apache.logging.log4j.core.Filter;\nimport org.pentaho.big.data.kettle.plugins.job.AbstractJobEntry;\nimport org.pentaho.big.data.kettle.plugins.job.JobEntryMode;\nimport org.pentaho.big.data.kettle.plugins.job.JobEntryUtils;\nimport org.pentaho.big.data.kettle.plugins.job.PropertyEntry;\nimport org.pentaho.di.cluster.SlaveServer;\nimport org.pentaho.di.core.Result;\nimport org.pentaho.di.core.database.DatabaseInterface;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.logging.log4j.KettleLogChannelAppender;\nimport org.pentaho.di.core.logging.log4j.Log4jKettleLayout;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport 
org.pentaho.hadoop.shim.api.HadoopClientServices;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.core.ShimIdentifierInterface;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.platform.api.util.LogUtil;\nimport org.pentaho.platform.engine.core.system.PentahoSystem;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.w3c.dom.Node;\n\nimport java.nio.charset.StandardCharsets;\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.stream.Collectors;\n\n/**\n * Base class for all Sqoop job entries.\n */\npublic abstract class AbstractSqoopJobEntry<S extends SqoopConfig> extends AbstractJobEntry<S> implements Cloneable,\n    JobEntryInterface {\n\n  private final String NamedClusterNameProperty = \"pentahoNamedCluster\";\n  private final NamedClusterService namedClusterService;\n  private final NamedClusterServiceLocator namedClusterServiceLocator;\n  private final RuntimeTestActionService runtimeTestActionService;\n  private final RuntimeTester runtimeTester;\n\n  private DatabaseMeta usedDbConnection;\n\n  /**\n   * Log4j appender that redirects all Log4j logging to a Kettle {@link org.pentaho.di.core.logging.LogChannel}\n   */\n  private Appender sqoopToKettleAppender;\n\n  /**\n   * Logging proxy that redirects all {@link java.io.PrintStream} output to a Log4j logger.\n   */\n  private LoggingProxy stdErrProxy;\n\n  /**\n   * Logging categories to monitor and log within Kettle\n   */\n  private String[] LOGS_TO_MONITOR = new String[] { \"org.apache.sqoop\", \"org.apache.hadoop\", \"com.pentaho.big.data.bundles.impl.shim.sqoop.knox\" };\n\n  /**\n   * Cache for the levels of loggers we changed so we can revert them when we remove 
our appender\n   */\n  private final Map<String, Level> logLevelCache;\n\n  /**\n   * Declare the Sqoop tool used in this job entry.\n   * \n   * @return the name of the sqoop tool to use, e.g. \"import\"\n   */\n  protected abstract String getToolName();\n\n  protected AbstractSqoopJobEntry( NamedClusterService namedClusterService,\n                                   NamedClusterServiceLocator namedClusterServiceLocator,\n                                   RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester ) {\n    this.namedClusterService = namedClusterService;\n    this.namedClusterServiceLocator = namedClusterServiceLocator;\n    this.runtimeTestActionService = runtimeTestActionService;\n    this.runtimeTester = runtimeTester;\n    logLevelCache = Maps.newHashMap();\n  }\n\n  @Override\n  public void loadXML( Node node, List<DatabaseMeta> databaseMetas, List<SlaveServer> slaveServers,\n      Repository repository ) throws KettleXMLException {\n    super.loadXML( node, databaseMetas, slaveServers, repository );\n    loadUsedDataBaseConnection( databaseMetas, getJobConfig() );\n\n    if ( !loadNamedCluster( metaStore ) ) {\n      // load default values for cluster & legacy fallback\n      getJobConfig().loadClusterConfig( node );\n    }\n  }\n\n  @Override\n  public void loadRep( Repository rep, ObjectId id_jobentry, List<DatabaseMeta> databases,\n      List<SlaveServer> slaveServers ) throws KettleException {\n    super.loadRep( rep, id_jobentry, databases, slaveServers );\n    loadUsedDataBaseConnection( databases, getJobConfig() );\n    if ( !loadNamedCluster( metaStore ) ) {\n      // load default values for cluster & legacy fallback\n      try {\n        getJobConfig().loadClusterConfig( rep, id_jobentry );\n      } catch ( KettleException ke ) {\n        logError( ke.getMessage(), ke );\n      }\n    }\n  }\n\n  @Override public String getXML() {\n    return super.getXML() + getJobConfig().getClusterXML();\n  }\n\n  @Override 
public void saveRep( Repository rep, ObjectId id_job ) throws KettleException {\n    super.saveRep( rep, id_job );\n    getJobConfig().saveClusterConfig( rep, id_job, this );\n  }\n\n  private boolean loadNamedCluster( IMetaStore metaStore ) {\n    try {\n      // attempt to load from named cluster\n      String clusterName = getParentJobMeta() == null ? getJobConfig().getClusterName()\n        : getParentJob().getJobMeta().environmentSubstitute( getJobConfig().getClusterName() );\n      return loadNamedCluster( clusterName );\n    } catch ( Throwable t ) {\n      logDebug( t.getMessage(), t );\n    }\n    return false;\n  }\n\n  private boolean loadNamedCluster( String clusterName ) {\n    try {\n      // load from system first, then fall back to copy stored with job (AbstractMeta)\n      NamedCluster namedCluster = null;\n      if ( !Strings.isNullOrEmpty( clusterName ) && namedClusterService.contains( clusterName, metaStore ) ) {\n        // pull config from NamedCluster\n        namedCluster = namedClusterService.read( clusterName, metaStore );\n      }\n      if ( namedCluster != null ) {\n        getJobConfig().setNamedCluster( namedCluster );\n        return true;\n      }\n    } catch ( Throwable t ) {\n      logDebug( t.getMessage(), t );\n    }\n    return false;\n  }\n\n  @VisibleForTesting\n  void loadUsedDataBaseConnection( List<DatabaseMeta> databases, S config ) {\n    String database = config.getDatabase();\n    DatabaseMeta databaseMeta = DatabaseMeta.findDatabase( databases, database );\n    setUsedDbConnection( databaseMeta );\n    if ( database == null ) {\n      // sync up the advanced configuration if no database type is set\n      config.copyConnectionInfoToAdvanced();\n    }\n  }\n\n  /**\n   * Attach a log appender to all Loggers used by Sqoop so we can redirect the output to Kettle's logging facilities.\n   */\n  public void attachLoggingAppenders() {\n    sqoopToKettleAppender = new KettleLogChannelAppender( log, new Log4jKettleLayout( 
StandardCharsets.UTF_8, true ) );\n    Filter filter = new SqoopLog4jFilter( log.getLogChannelId() );\n    ThreadContext.put( \"logChannelId\", log.getLogChannelId() );\n    // Redirect all stderr logging to the first log to monitor so it shows up in the Kettle LogChannel\n    Logger sqoopLogger = LogManager.getLogger( LOGS_TO_MONITOR[ 0 ] );\n    if ( sqoopLogger != null ) {\n      stdErrProxy = new LoggingProxy( System.err, sqoopLogger, Level.INFO );\n      System.setErr( stdErrProxy );\n    }\n    LogUtil.addAppender( sqoopToKettleAppender, sqoopLogger, Level.INFO, filter );\n  }\n\n  /**\n   * Remove our log appender from all loggers used by Sqoop.\n   */\n  public void removeLoggingAppenders() {\n    try {\n      if ( sqoopToKettleAppender != null ) {\n        Logger sqoopLogger = LogManager.getLogger( LOGS_TO_MONITOR[0] );\n        LogUtil.removeAppender( sqoopToKettleAppender, sqoopLogger );\n        sqoopToKettleAppender = null;\n      }\n      if ( stdErrProxy != null ) {\n        System.setErr( stdErrProxy.getWrappedStream() );\n        stdErrProxy = null;\n      }\n    } catch ( Exception ex ) {\n      logError( getString( \"ErrorDetachingLogging\" ) );\n      logDebug( Throwables.getStackTraceAsString( ex ) );\n    }\n  }\n\n  /**\n   * Validate any configuration option we use directly that could be invalid at runtime.\n   *\n   * @param config\n   *          Configuration to validate\n   * @return List of warning messages for any invalid configuration options we use directly in this job entry.\n   */\n  @Override\n  public List<String> getValidationWarnings( SqoopConfig config ) {\n    List<String> warnings = new ArrayList<String>();\n\n    if ( StringUtil.isEmpty( config.getConnect() ) ) {\n      warnings.add( getString( \"ValidationError.Connect.Message\", config.getConnect() ) );\n    }\n\n    try {\n      JobEntryUtils.asLong( config.getBlockingPollingInterval(), variables );\n    } catch ( NumberFormatException ex ) {\n      warnings.add( 
getString(\n          \"ValidationError.BlockingPollingInterval.Message\", config.getBlockingPollingInterval() ) );\n    }\n\n    return warnings;\n  }\n\n  /**\n   * Handle any clean up required when our execution thread encounters an unexpected {@link Exception}.\n   *\n   * @param t\n   *          Thread that encountered the uncaught exception\n   * @param e\n   *          Exception that was encountered\n   * @param jobResult\n   *          Job result for the execution that spawned the thread\n   */\n  @Override\n  protected void handleUncaughtThreadException( Thread t, Throwable e, Result jobResult ) {\n    logError( getString( \"ErrorRunningSqoopTool\" ), e );\n    removeLoggingAppenders();\n    setJobResultFailed( jobResult );\n  }\n\n  @Override\n  protected Runnable getExecutionRunnable( final Result jobResult ) throws KettleException {\n    return new Runnable() {\n\n      @Override public void run() {\n        executeSqoop( jobResult );\n      }\n    };\n  }\n\n  /**\n   * Executes Sqoop using the provided configuration objects. 
The {@code jobResult} will accurately reflect the\n   * completed execution state when finished.\n   * @param jobResult\n   *          Result to update based on feedback from the Sqoop tool\n   */\n  protected void executeSqoop( Result jobResult ) {\n    S config = getJobConfig();\n    Properties properties = new Properties();\n\n    attachLoggingAppenders();\n    try {\n      configure( config, properties );\n      List<String> args = SqoopUtils.getCommandLineArgs( config, getVariables() );\n      args.add( 0, getToolName() ); // push the tool command-line argument on the top of the args list\n\n      String configuredShimIdentifier = config.getNamedCluster().getShimIdentifier();\n      if ( !StringUtil.isEmpty( configuredShimIdentifier ) ) {\n        List<ShimIdentifierInterface> shimIdentifers = PentahoSystem.getAll( ShimIdentifierInterface.class );\n        if ( shimIdentifers.stream().noneMatch( identifier -> identifier.getId().equals( configuredShimIdentifier ) ) ) {\n          String installedShimIdentifiers = shimIdentifers.stream().map( ShimIdentifierInterface::<String>getId ).collect( Collectors.joining( \",\", \"{\", \"}\" ) );\n          throw new KettleException( \"Invalid driver version value: \" +  config.getNamedCluster().getShimIdentifier() + \" Available valid values: \" + installedShimIdentifiers );\n        }\n      }\n\n      if ( !loadNamedCluster( getMetaStore() ) ) {\n        PropertyEntry entry = config.getCustomArguments().stream()\n                .filter( p -> p.getKey() != null && p.getKey().equals( NamedClusterNameProperty ) )\n                .findAny()\n                .orElse( null );\n        if ( entry != null ) {\n          loadNamedCluster( entry.getValue() );\n        }\n      }\n\n      NamedCluster tempCluster = null;\n      if ( StringUtil.isEmpty( config.getNamedCluster().getName() ) ) {\n        tempCluster = namedClusterService.getNamedClusterByHost( config.getNamedCluster().getHdfsHost(), getMetaStore() );\n        if ( 
tempCluster != null ) {\n          config.setNamedCluster( tempCluster );\n        } else {\n          throw new KettleException( \"A Hadoop Cluster matching Namenode Host could not be found\" );\n        }\n      }\n\n      if ( !StringUtil.isEmpty( configuredShimIdentifier ) ) {\n        config.getNamedCluster().setShimIdentifier( configuredShimIdentifier );\n      }\n\n      // Clone named cluster and copy in variable space\n      NamedCluster namedCluster = config.getNamedCluster().clone();\n      namedCluster.copyVariablesFrom( this );\n\n      HadoopClientServices hadoopClientServices = namedClusterServiceLocator.getService( namedCluster, HadoopClientServices.class );\n\n      int result = hadoopClientServices.runSqoop( args, properties );\n      if ( result != 0 ) {\n        setJobResultFailed( jobResult );\n      }\n    } catch ( Exception ex ) {\n      logError( getString( \"ErrorRunningSqoopTool\" ), ex );\n      setJobResultFailed( jobResult );\n    } finally {\n      removeLoggingAppenders();\n    }\n  }\n\n  /**\n   * Configure the Hadoop environment\n   *\n   * @param sqoopConfig\n   *          Sqoop configuration settings\n   * @param properties\n   *          Execution properties\n   * @throws KettleException\n   *\n   */\n  public void configure( S sqoopConfig, Properties properties ) throws KettleException {\n    configureDatabase( sqoopConfig );\n  }\n\n  /**\n   * Configure database connection information\n   *\n   * @param sqoopConfig - Sqoop configuration\n   */\n  public void configureDatabase( S sqoopConfig ) throws KettleException {\n    DatabaseMeta databaseMeta = getParentJob().getJobMeta().findDatabase( sqoopConfig.getDatabase() );\n\n    // if databaseMeta == null we assume \"USE_ADVANCED_MODE\" is selected on QUICK_SETUP\n    if ( sqoopConfig.getModeAsEnum() == JobEntryMode.QUICK_SETUP && databaseMeta != null ) {\n      sqoopConfig.setConnectionInfo(\n          databaseMeta.environmentSubstitute( databaseMeta.getName() ),\n          databaseMeta.environmentSubstitute( databaseMeta.getURL() ),\n          databaseMeta.environmentSubstitute( databaseMeta.getUsername() ),\n          Encr.decryptPasswordOptionallyEncrypted( databaseMeta.environmentSubstitute( databaseMeta.getPassword() ) ) );\n    }\n  }\n\n  /**\n   * Determine if a database type is supported.\n   *\n   * @param databaseType\n   *          Database type to check for compatibility\n   * @return {@code true} if this database is supported for this tool\n   */\n  public boolean isDatabaseSupported( Class<? extends DatabaseInterface> databaseType ) {\n    // For now all database types are supported\n    return true;\n  }\n\n  @Override\n  public DatabaseMeta[] getUsedDatabaseConnections() {\n    return new DatabaseMeta[] { usedDbConnection, };\n  }\n\n  public DatabaseMeta getUsedDbConnection() {\n    return usedDbConnection;\n  }\n\n  public void setUsedDbConnection( DatabaseMeta usedDbConnection ) {\n    this.usedDbConnection = usedDbConnection;\n  }\n\n  @VisibleForTesting\n  protected void setLogChannel( LogChannelInterface logChannel ) {\n    this.log = logChannel;\n  }\n\n  public NamedClusterService getNamedClusterService() {\n    return namedClusterService;\n  }\n\n  public RuntimeTestActionService getRuntimeTestActionService() {\n    return runtimeTestActionService;\n  }\n\n  public RuntimeTester getRuntimeTester() {\n    return runtimeTester;\n  }\n\n  private static String getString( String key, String... parameters ) {\n    return BaseMessages.getString( AbstractSqoopJobEntry.class, key, parameters );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/ArgumentWrapper.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport org.pentaho.ui.xul.XulEventSource;\n\nimport java.beans.PropertyChangeListener;\nimport java.lang.reflect.Method;\n\n/**\n * Represents a command line argument. This is required to display an argument in a list of arguments for the UI.\n */\npublic class ArgumentWrapper implements XulEventSource {\n  private String name;\n  private String displayName;\n  private boolean flag;\n  private String prefix;\n  private int order;\n\n  private Object target;\n  private Method getter;\n  private Method setter;\n\n  public ArgumentWrapper( String name, String displayName,\n      boolean flag, String prefix, int order,\n      Object target, Method getter, Method setter ) {\n    if ( name == null || target == null || getter == null || setter == null ) {\n      throw new NullPointerException();\n    }\n    validateAccessors( getter, setter );\n\n    this.name = name;\n    this.displayName = displayName;\n    this.flag = flag;\n    this.prefix = prefix;\n    this.order = order;\n    this.target = target;\n    this.getter = getter;\n    this.setter = setter;\n  }\n\n  private void validateAccessors( Method getter, Method setter ) {\n    if ( getter.getReturnType() != String.class ) {\n      throw new IllegalArgumentException( \"Invalid getter method. Method must return a String,\" );\n    }\n    if ( setter.getParameterTypes().length < 1 || setter.getParameterTypes()[0] != String.class ) {\n      throw new IllegalArgumentException( \"Invalid setter method. 
Method must accept a single String parameter.\" );\n    }\n  }\n\n  public String getName() {\n    return name;\n  }\n\n  public void setName( String name ) {\n    this.name = name;\n  }\n\n  public String getDisplayName() {\n    return displayName;\n  }\n\n  public void setDisplayName( String displayName ) {\n    this.displayName = displayName;\n  }\n\n  public void setValue( String value ) {\n    try {\n      if ( \"\".equals( value ) ) {\n        value = null;\n      }\n\n      setter.invoke( target, value );\n    } catch ( Exception ex ) {\n      throw new RuntimeException( \"error setting value for argument \" + getName(), ex );\n    }\n  }\n\n  public String getValue() {\n    try {\n      return String.class.cast( getter.invoke( target ) );\n    } catch ( Exception ex ) {\n      throw new RuntimeException( \"error retrieving value for argument \" + getName(), ex );\n    }\n  }\n\n  public boolean isFlag() {\n    return flag;\n  }\n\n  public void setFlag( boolean flag ) {\n    this.flag = flag;\n  }\n\n  /**\n   * Uses the argument's name to determine equality.\n   * \n   * @param o\n   *          another argument\n   * @return {@code true} if {@code o} is an {@link ArgumentWrapper} and its name equals this argument's name\n   */\n  @Override\n  public boolean equals( Object o ) {\n    if ( this == o ) {\n      return true;\n    }\n    if ( o == null || getClass() != o.getClass() ) {\n      return false;\n    }\n\n    ArgumentWrapper that = (ArgumentWrapper) o;\n\n    return this.name.equals( that.name );\n  }\n\n  @Override\n  public int hashCode() {\n    return name.hashCode();\n  }\n\n  @Override\n  public void addPropertyChangeListener( PropertyChangeListener listener ) {\n    // Do nothing, this object is a wrapper and firing events here propagates to too many objects\n  }\n\n  @Override\n  public void removePropertyChangeListener( PropertyChangeListener listener ) {\n    // Do nothing, this object is a wrapper and firing events here propagates to too 
many objects\n  }\n\n  public String getPrefix() {\n    return prefix;\n  }\n\n  public void setPrefix( String prefix ) {\n    this.prefix = prefix;\n  }\n\n  public int getOrder() {\n    return order;\n  }\n\n  public void setOrder( int order ) {\n    this.order = order;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/CommandLineArgument.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport java.lang.annotation.Documented;\nimport java.lang.annotation.ElementType;\nimport java.lang.annotation.Retention;\nimport java.lang.annotation.RetentionPolicy;\nimport java.lang.annotation.Target;\n\n\n/**\n * Marks a field as a command line argument.\n */\n@Documented\n@Retention( RetentionPolicy.RUNTIME )\n@Target( ElementType.FIELD )\npublic @interface CommandLineArgument {\n  /**\n   * @return the name of the command line argument (full name), e.g. --table\n   */\n  String name();\n\n  /**\n   * Optional String to be used when displaying this field in a list.\n   * \n   * @return the friendly display name to be shown to a user instead of the {@link #name()}\n   */\n  String displayName() default \"\";\n\n  /**\n   * @return description of the command line argument\n   */\n  String description() default \"\";\n\n  /**\n   * Arguments either have values to be included or represent a boolean setting/flag. This is to denote a flag\n   * \n   * @return true if this argument represents a flag or switch.\n   */\n  boolean flag() default false;\n\n  /**\n   * Arguments could be prefixed different in a different way (double dash by default)\n   * @return prefix to be used with the argument\n   */\n  String prefix() default \"--\";\n\n  /**\n   * Some arguments have to follow a particular precedence\n   * @return sort order for the argument\n   */\n  int order() default 100;\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/DatabaseItem.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\n/**\n * A database item represents a named database. It is used to display existing databases in a menu or combo box.\n */\npublic class DatabaseItem {\n  private String name;\n  private String displayName;\n\n  public DatabaseItem( String name ) {\n    this( name, name );\n  }\n\n  public DatabaseItem( String name, String displayName ) {\n    this.name = name;\n    this.displayName = displayName;\n  }\n\n  public String getName() {\n    return name;\n  }\n\n  public String getDisplayName() {\n    return displayName;\n  }\n\n  /**\n   * This is what will be visible in the menu/combo box.\n   * \n   * @return the name of this database item\n   */\n  @Override\n  public String toString() {\n    return getDisplayName();\n  }\n\n  @Override\n  public boolean equals( Object o ) {\n    if ( this == o ) {\n      return true;\n    }\n    if ( o == null || getClass() != o.getClass() ) {\n      return false;\n    }\n    DatabaseItem that = (DatabaseItem) o;\n\n    if ( name != null ? !name.equals( that.name ) : that.name != null ) {\n      return false;\n    }\n    return true;\n  }\n\n  @Override\n  public int hashCode() {\n    return name != null ? name.hashCode() : 0;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/LoggingProxy.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport org.apache.logging.log4j.Level;\nimport org.apache.logging.log4j.Logger;\n\nimport java.io.PrintStream;\n\n/**\n * Redirect all String-based logging for a {@link PrintStream} to a Log4j logger at a specified logging level.\n */\npublic class LoggingProxy extends PrintStream {\n  private PrintStream wrappedStream;\n  private Logger logger;\n  private Level level;\n\n  /**\n   * Create a new Logging proxy that will log all {@link String}s printed with {@link #print(String)} to the logger\n   * using the level provided.\n   * \n   * @param stream\n   *          Stream to redirect output for\n   * @param logger\n   *          Logger to log to\n   * @param level\n   *          Level to log messages at\n   */\n  public LoggingProxy( PrintStream stream, Logger logger, Level level ) {\n    super( stream );\n    wrappedStream = stream;\n    this.logger = logger;\n    this.level = level;\n  }\n\n  @Override\n  public void print( String s ) {\n    logger.log( level, s );\n  }\n\n  /**\n   * @return the steam this proxy wraps\n   */\n  public PrintStream getWrappedStream() {\n    return wrappedStream;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/SqoopConfig.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport com.google.common.base.Strings;\nimport com.google.common.collect.ImmutableMap;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.kettle.plugins.job.BlockableJobConfig;\nimport org.pentaho.big.data.kettle.plugins.job.JobEntryMode;\nimport org.pentaho.big.data.kettle.plugins.job.Password;\nimport org.pentaho.big.data.kettle.plugins.job.PropertyEntry;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.ui.xul.XulEventSource;\nimport org.pentaho.ui.xul.util.AbstractModelList;\nimport org.w3c.dom.Node;\n\nimport java.util.Map;\n\n/**\n * A collection of configuration objects for a Sqoop job entry.\n */\npublic abstract class SqoopConfig extends BlockableJobConfig implements XulEventSource, Cloneable {\n  public static final String NAMENODE_HOST = \"namenodeHost\";\n  public static final String NAMENODE_PORT = \"namenodePort\";\n  public static final String SHIM_IDENTIFIER = \"shimIdentifier\";\n  public static final String CLUSTER_NAME = \"clusterName\";\n  public static final String JOBTRACKER_HOST = \"jobtrackerHost\";\n  public static final String JOBTRACKER_PORT = \"jobtrackerPort\";\n\n  public static final String DATABASE = \"database\";\n  public 
static final String SCHEMA = \"schema\";\n\n  // Common arguments\n  public static final String CONNECT = \"connect\";\n  public static final String USERNAME = \"username\";\n  public static final String PASSWORD = \"password\";\n  public static final String VERBOSE = \"verbose\";\n  public static final String CONNECTION_MANAGER = \"connectionManager\";\n  public static final String DRIVER = \"driver\";\n  public static final String CONNECTION_PARAM_FILE = \"connectionParamFile\";\n  public static final String HADOOP_HOME = \"hadoopHome\";\n\n  // Output line formatting arguments\n  public static final String ENCLOSED_BY = \"enclosedBy\";\n  public static final String ESCAPED_BY = \"escapedBy\";\n  public static final String FIELDS_TERMINATED_BY = \"fieldsTerminatedBy\";\n  public static final String LINES_TERMINATED_BY = \"linesTerminatedBy\";\n  public static final String OPTIONALLY_ENCLOSED_BY = \"optionallyEnclosedBy\";\n  public static final String MYSQL_DELIMITERS = \"mysqlDelimiters\";\n\n  // Input parsing arguments\n  public static final String INPUT_ENCLOSED_BY = \"inputEnclosedBy\";\n  public static final String INPUT_ESCAPED_BY = \"inputEscapedBy\";\n  public static final String INPUT_FIELDS_TERMINATED_BY = \"inputFieldsTerminatedBy\";\n  public static final String INPUT_LINES_TERMINATED_BY = \"inputLinesTerminatedBy\";\n  public static final String INPUT_OPTIONALLY_ENCLOSED_BY = \"inputOptionallyEnclosedBy\";\n\n  // Code generation arguments\n  public static final String BIN_DIR = \"binDir\";\n  public static final String CLASS_NAME = \"className\";\n  public static final String JAR_FILE = \"jarFile\";\n  public static final String OUTDIR = \"outdir\";\n  public static final String PACKAGE_NAME = \"packageName\";\n  public static final String MAP_COLUMN_JAVA = \"mapColumnJava\";\n\n  // Shared Input/Export options\n  public static final String TABLE = \"table\";\n  public static final String NUM_MAPPERS = \"numMappers\";\n  public static final String 
COMMAND_LINE = \"commandLine\";\n  public static final String MODE = \"mode\";\n\n  // Sqoop 1.4.7 common arguments\n  public static final String DELETE_COMPILE_DIR = \"deleteCompileDir\";\n\n  public static final String HADOOP_MAPRED_HOME = \"hadoopMapredHome\";\n  public static final String PASSWORD_ALIAS = \"passwordAlias\";\n  public static final String PASSWORD_FILE = \"passwordFile\";\n\n  public static final String RELAXED_ISOLATION = \"relaxedIsolation\";\n  public static final String SKIP_DIST_CACHE = \"skipDistCache\";\n  public static final String MAPREDUCE_JOB_NAME = \"mapreduceJobName\";\n  public static final String VALIDATE = \"validate\";\n  public static final String VALIDATION_FAILURE_HANDLER = \"validationFailureHandler\";\n  public static final String VALIDATION_THRESHOLD = \"validationThreshold\";\n  public static final String VALIDATOR = \"validator\";\n\n  public static final String HCATALOG_DATABASE = \"hcatalogDatabase\";\n  public static final String HCATALOG_HOME = \"hcatalogHome\";\n  public static final String HCATALOG_PARTITION_KEYS = \"hcatalogPartitionKeys\";\n  public static final String HCATALOG_PARTITION_VALUES = \"hcatalogPartitionValues\";\n  public static final String HCATALOG_TABLE = \"hcatalogTable\";\n\n  public static final String HIVE_HOME = \"hiveHome\";\n  public static final String HIVE_PARTITION_KEY = \"hivePartitionKey\";\n  public static final String HIVE_PARTITION_VALUE = \"hivePartitionValue\";\n  public static final String MAP_COLUMN_HIVE = \"mapColumnHive\";\n\n  public static final String INPUT_NULL_STRING = \"inputNullString\";\n  public static final String INPUT_NULL_NON_STRING = \"inputNullNonString\";\n\n  public static final String NULL_STRING = \"nullString\";\n  public static final String NULL_NON_STRING = \"nullNonString\";\n\n  public static final String FILES = \"files\";\n  public static final String LIBJARS = \"libjars\";\n  public static final String ARCHIVES = \"archives\";\n\n  private String 
database;\n  private String schema;\n\n  // Properties to support toggling between quick setup and advanced mode in the UI. These should never be saved.\n  private transient String connectFromAdvanced;\n  private transient String usernameFromAdvanced;\n  private transient String passwordFromAdvanced;\n\n  @CommandLineArgument( name = \"hadoop-mapred-home\" )\n  private String hadoopMapredHome;\n  @CommandLineArgument( name = \"password-alias\" )\n  private String passwordAlias;\n  @CommandLineArgument( name = \"password-file\" )\n  private String passwordFile;\n\n  @CommandLineArgument( name = \"relaxed-isolation\", flag = true )\n  private String relaxedIsolation;\n  @CommandLineArgument( name = \"skip-dist-cache\", flag = true )\n  private String skipDistCache;\n\n  @CommandLineArgument( name = \"mapreduce-job-name\" )\n  private String mapreduceJobName;\n\n  @CommandLineArgument( name = \"validate\", flag = true )\n  private String validate;\n  @CommandLineArgument( name = \"validation-failurehandler\" )\n  private String validationFailureHandler;\n  @CommandLineArgument( name = \"validation-threshold\" )\n  private String validationThreshold;\n  @CommandLineArgument( name = \"validator\" )\n  private String validator;\n\n  @CommandLineArgument( name = \"hcatalog-database\" )\n  private String hcatalogDatabase;\n  @CommandLineArgument( name = \"hcatalog-home\" )\n  private String hcatalogHome;\n  @CommandLineArgument( name = \"hcatalog-partition-keys\" )\n  private String hcatalogPartitionKeys;\n  @CommandLineArgument( name = \"hcatalog-partition-values\" )\n  private String hcatalogPartitionValues;\n  @CommandLineArgument( name = \"hcatalog-table\" )\n  private String hcatalogTable;\n\n  @CommandLineArgument( name = \"hive-home\" )\n  private String hiveHome;\n  @CommandLineArgument( name = \"hive-partition-key\" )\n  private String hivePartitionKey;\n  @CommandLineArgument( name = \"hive-partition-value\" )\n  private String hivePartitionValue;\n  
@CommandLineArgument( name = \"map-column-hive\" )\n  private String mapColumnHive;\n\n  @CommandLineArgument( name = \"input-null-string\" )\n  private String inputNullString;\n  @CommandLineArgument( name = \"input-null-non-string\" )\n  private String inputNullNonString;\n  @CommandLineArgument( name = \"null-string\" )\n  private String nullString;\n  @CommandLineArgument( name = \"null-non-string\" )\n  private String nullNonString;\n\n  @CommandLineArgument( name = \"files\", order = 50, prefix = \"-\" )\n  private String files;\n  @CommandLineArgument( name = \"libjars\", order = 50, prefix = \"-\" )\n  private String libjars;\n  @CommandLineArgument( name = \"archives\", order = 50, prefix = \"-\" )\n  private String archives;\n\n// Represents the last visible state of the UI and the execution mode.\n  private String mode;\n\n  // Common arguments\n  @CommandLineArgument( name = CONNECT )\n  private String connect;\n\n  @CommandLineArgument( name = \"connection-manager\" )\n  private String connectionManager;\n  @CommandLineArgument( name = DRIVER )\n  private String driver;\n  @CommandLineArgument( name = USERNAME )\n  private String username;\n  @CommandLineArgument( name = PASSWORD )\n  @Password\n  private String password;\n  @CommandLineArgument( name = VERBOSE, flag = true )\n  private String verbose;\n  @CommandLineArgument( name = \"connection-param-file\" )\n  private String connectionParamFile;\n  @CommandLineArgument( name = \"hadoop-home\" )\n  private String hadoopHome;\n  // Output line formatting arguments\n  @CommandLineArgument( name = \"enclosed-by\" )\n  private String enclosedBy;\n\n  @CommandLineArgument( name = \"escaped-by\" )\n  private String escapedBy;\n  @CommandLineArgument( name = \"fields-terminated-by\" )\n  private String fieldsTerminatedBy;\n  @CommandLineArgument( name = \"lines-terminated-by\" )\n  private String linesTerminatedBy;\n  @CommandLineArgument( name = \"optionally-enclosed-by\" )\n  private String 
optionallyEnclosedBy;\n  @CommandLineArgument( name = \"mysql-delimiters\", flag = true )\n  private String mysqlDelimiters;\n  // Input parsing arguments\n  @CommandLineArgument( name = \"input-enclosed-by\" )\n  private String inputEnclosedBy;\n\n  @CommandLineArgument( name = \"input-escaped-by\" )\n  private String inputEscapedBy;\n  @CommandLineArgument( name = \"input-fields-terminated-by\" )\n  private String inputFieldsTerminatedBy;\n  @CommandLineArgument( name = \"input-lines-terminated-by\" )\n  private String inputLinesTerminatedBy;\n  @CommandLineArgument( name = \"input-optionally-enclosed-by\" )\n  private String inputOptionallyEnclosedBy;\n  // Code generation arguments\n  @CommandLineArgument( name = \"bindir\" )\n  private String binDir;\n\n  @CommandLineArgument( name = \"class-name\" )\n  private String className;\n  @CommandLineArgument( name = \"jar-file\" )\n  private String jarFile;\n  @CommandLineArgument( name = OUTDIR )\n  private String outdir;\n  @CommandLineArgument( name = \"package-name\" )\n  private String packageName;\n  @CommandLineArgument( name = \"map-column-java\" )\n  private String mapColumnJava;\n\n  @CommandLineArgument( name = \"delete-compile-dir\", flag = true )\n  private String deleteCompileDir;\n\n  // Shared Input/Export options\n  @CommandLineArgument( name = TABLE )\n  private String table;\n  @CommandLineArgument( name = \"num-mappers\" )\n  private String numMappers;\n  private String commandLine;\n\n  private String clusterName;\n\n  private transient NamedCluster namedCluster;\n\n  private AbstractModelList<PropertyEntry> customArguments;\n\n  /**\n   * @return all known arguments for this config object. 
Some arguments may be synthetic and represent properties\n   *         directly set on this config object for the purpose of showing them in the list view of the UI.\n   */\n  public AbstractModelList<ArgumentWrapper> getAdvancedArgumentsList() {\n    final AbstractModelList<ArgumentWrapper> items = new AbstractModelList<ArgumentWrapper>();\n\n    items.addAll( SqoopUtils.findAllArguments( this ) );\n\n    try {\n      items.add( new ArgumentWrapper( NAMENODE_HOST, BaseMessages.getString( getClass(), \"NamenodeHost.Label\" ), false,\n          \"\", 0,\n          this, getClass().getMethod( \"getNamenodeHost\" ), getClass().getMethod( \"setNamenodeHost\", String.class ) ) );\n      items.add( new ArgumentWrapper( NAMENODE_PORT, BaseMessages.getString( getClass(), \"NamenodePort.Label\" ), false,\n          \"\", 0,\n          this, getClass().getMethod( \"getNamenodePort\" ), getClass().getMethod( \"setNamenodePort\", String.class ) ) );\n\n      items.add( new ArgumentWrapper( CLUSTER_NAME, BaseMessages.getString( getClass(), \"ClusterName.Label\" ), false,\n        \"\", 0,\n        this, getClass().getMethod( \"getClusterName\" ), getClass().getMethod( \"setClusterName\", String.class ) ) );\n\n      items.add( new ArgumentWrapper( SHIM_IDENTIFIER, BaseMessages.getString( getClass(), \"ShimIdentifier.Label\" ), false,\n              \"\", 0,\n              this, getClass().getMethod( \"getShimIdentifier\" ), getClass().getMethod( \"setShimIdentifier\", String.class ) ) );\n\n      items.add( new ArgumentWrapper( JOBTRACKER_HOST, BaseMessages.getString( getClass(), \"JobtrackerHost.Label\" ),\n          false, \"\", 0, this,\n          getClass().getMethod( \"getJobtrackerHost\" ), getClass().getMethod( \"setJobtrackerHost\", String.class ) ) );\n      items.add( new ArgumentWrapper( JOBTRACKER_PORT, BaseMessages.getString( getClass(), \"JobtrackerPort.Label\" ),\n          false, \"\", 0, this,\n          getClass().getMethod( \"getJobtrackerPort\" ), 
getClass().getMethod( \"setJobtrackerPort\", String.class ) ) );\n      items.add( new ArgumentWrapper( BLOCKING_EXECUTION, BaseMessages\n          .getString( getClass(), \"BlockingExecution.Label\" ),\n          false, \"\", 0, this,\n          getClass().getMethod( \"getBlockingExecution\" ),\n          getClass().getMethod( \"setBlockingExecution\", String.class ) ) );\n      items.add( new ArgumentWrapper( BLOCKING_POLLING_INTERVAL, BaseMessages.getString( getClass(),\n          \"BlockingPollingInterval.Label\" ), false, \"\", 0, this, getClass().getMethod( \"getBlockingPollingInterval\" ),\n          getClass().getMethod( \"setBlockingPollingInterval\", String.class ) ) );\n    } catch ( NoSuchMethodException ex ) {\n      throw new RuntimeException( ex );\n    }\n    return items;\n  }\n\n  public NamedCluster getNamedCluster() {\n    if ( namedCluster == null ) {\n      namedCluster = createClusterTemplate();\n    }\n    return namedCluster;\n  }\n\n  public void setNamedCluster( NamedCluster namedCluster ) {\n    this.namedCluster = createClusterTemplate();\n    if ( namedCluster != null ) {\n      setClusterName( namedCluster.getName() );\n      this.namedCluster.replaceMeta( namedCluster );\n    }\n  }\n\n  protected abstract NamedCluster createClusterTemplate();\n\n  @Override\n  public SqoopConfig clone() {\n    return (SqoopConfig) super.clone();\n  }\n\n  /**\n   * Silently set the following properties: {@code database, connect, username, password}.\n   *\n   * @param database\n   *          Database name\n   * @param connect\n   *          Connection string (JDBC connection URL)\n   * @param username\n   *          Username\n   * @param password\n   *          Password\n   */\n  public void setConnectionInfo( String database, String connect, String username, String password ) {\n    this.database = database;\n    this.connect = connect;\n    this.username = username;\n    this.password = password;\n  }\n\n  /**\n   * Copy connection information 
from temporary \"advanced\" fields into annotated argument fields.\n   */\n  public void copyConnectionInfoFromAdvanced() {\n    database = null;\n    connect = getConnectFromAdvanced();\n    username = getUsernameFromAdvanced();\n    password = getPasswordFromAdvanced();\n  }\n\n  /**\n   * Copy the current connection information into the \"advanced\" fields. These are temporary session properties used to\n   * aid the user during configuration via UI.\n   */\n  public void copyConnectionInfoToAdvanced() {\n    setConnectFromAdvanced( getConnect() );\n    setUsernameFromAdvanced( getUsername() );\n    setPasswordFromAdvanced( getPassword() );\n  }\n\n  // All getters/setters below this line\n\n  public String getDatabase() {\n    return database;\n  }\n\n  public void setDatabase( String database ) {\n    this.database = propertyChange( DATABASE, this.database, database );\n  }\n\n  protected String propertyChange( String propertyName, String oldValue, String newValue ) {\n    pcs.firePropertyChange( propertyName, oldValue, newValue );\n    return newValue;\n  }\n\n  public String getSchema() {\n    return schema;\n  }\n\n  public void setSchema( String schema ) {\n    this.schema = propertyChange( SCHEMA, this.schema, schema );\n  }\n\n  public String getConnect() {\n    return connect;\n  }\n\n  public void setConnect( String connect ) {\n    this.connect = propertyChange( CONNECT, this.connect, connect );\n  }\n\n  public String getUsername() {\n    return username;\n  }\n\n  public void setUsername( String username ) {\n    this.username = propertyChange( USERNAME, this.username, username );\n  }\n\n  public String getPassword() {\n    return password;\n  }\n\n  public void setPassword( String password ) {\n    this.password = propertyChange( PASSWORD, this.password, password );\n  }\n\n  public String getConnectFromAdvanced() {\n    return connectFromAdvanced;\n  }\n\n  public void setConnectFromAdvanced( String connectFromAdvanced ) {\n    
this.connectFromAdvanced = connectFromAdvanced;\n  }\n\n  public String getUsernameFromAdvanced() {\n    return usernameFromAdvanced;\n  }\n\n  public void setUsernameFromAdvanced( String usernameFromAdvanced ) {\n    this.usernameFromAdvanced = usernameFromAdvanced;\n  }\n\n  public String getPasswordFromAdvanced() {\n    return passwordFromAdvanced;\n  }\n\n  public void setPasswordFromAdvanced( String passwordFromAdvanced ) {\n    this.passwordFromAdvanced = passwordFromAdvanced;\n  }\n\n  public String getConnectionManager() {\n    return connectionManager;\n  }\n\n  public void setConnectionManager( String connectionManager ) {\n    this.connectionManager = propertyChange( CONNECTION_MANAGER, this.connectionManager, connectionManager );\n  }\n\n  public String getDriver() {\n    return driver;\n  }\n\n  public void setDriver( String driver ) {\n    this.driver = propertyChange( DRIVER, this.driver, driver );\n  }\n\n  public String getVerbose() {\n    return verbose;\n  }\n\n  public void setVerbose( String verbose ) {\n    this.verbose = propertyChange( VERBOSE, this.verbose, verbose );\n  }\n\n  public String getConnectionParamFile() {\n    return connectionParamFile;\n  }\n\n  public void setConnectionParamFile( String connectionParamFile ) {\n    this.connectionParamFile = propertyChange( CONNECTION_PARAM_FILE, this.connectionParamFile, connectionParamFile );\n  }\n\n  public String getHadoopHome() {\n    return hadoopHome;\n  }\n\n  public void setHadoopHome( String hadoopHome ) {\n    this.hadoopHome = propertyChange( HADOOP_HOME, this.hadoopHome, hadoopHome );\n  }\n\n  public String getEnclosedBy() {\n    return enclosedBy;\n  }\n\n  public void setEnclosedBy( String enclosedBy ) {\n    this.enclosedBy = propertyChange( ENCLOSED_BY, this.enclosedBy, enclosedBy );\n  }\n\n  public String getEscapedBy() {\n    return escapedBy;\n  }\n\n  public void setEscapedBy( String escapedBy ) {\n    this.escapedBy = propertyChange( ESCAPED_BY, this.escapedBy, 
escapedBy );\n  }\n\n  public String getFieldsTerminatedBy() {\n    return fieldsTerminatedBy;\n  }\n\n  public void setFieldsTerminatedBy( String fieldsTerminatedBy ) {\n    this.fieldsTerminatedBy = propertyChange( FIELDS_TERMINATED_BY, this.fieldsTerminatedBy, fieldsTerminatedBy );\n  }\n\n  public String getLinesTerminatedBy() {\n    return linesTerminatedBy;\n  }\n\n  public void setLinesTerminatedBy( String linesTerminatedBy ) {\n    this.linesTerminatedBy = propertyChange( LINES_TERMINATED_BY, this.linesTerminatedBy, linesTerminatedBy );\n  }\n\n  public String getOptionallyEnclosedBy() {\n    return optionallyEnclosedBy;\n  }\n\n  public void setOptionallyEnclosedBy( String optionallyEnclosedBy ) {\n    this.optionallyEnclosedBy = propertyChange( OPTIONALLY_ENCLOSED_BY, this.optionallyEnclosedBy, optionallyEnclosedBy );\n  }\n\n  public String getMysqlDelimiters() {\n    return mysqlDelimiters;\n  }\n\n  public void setMysqlDelimiters( String mysqlDelimiters ) {\n    this.mysqlDelimiters = propertyChange( MYSQL_DELIMITERS, this.mysqlDelimiters, mysqlDelimiters );\n  }\n\n  public String getInputEnclosedBy() {\n    return inputEnclosedBy;\n  }\n\n  public void setInputEnclosedBy( String inputEnclosedBy ) {\n    this.inputEnclosedBy = propertyChange( INPUT_ENCLOSED_BY, this.inputEnclosedBy, inputEnclosedBy );\n  }\n\n  public String getInputEscapedBy() {\n    return inputEscapedBy;\n  }\n\n  public void setInputEscapedBy( String inputEscapedBy ) {\n    this.inputEscapedBy = propertyChange( INPUT_ESCAPED_BY, this.inputEscapedBy, inputEscapedBy );\n  }\n\n  public String getInputFieldsTerminatedBy() {\n    return inputFieldsTerminatedBy;\n  }\n\n  public void setInputFieldsTerminatedBy( String inputFieldsTerminatedBy ) {\n    this.inputFieldsTerminatedBy = propertyChange( INPUT_FIELDS_TERMINATED_BY, this.inputFieldsTerminatedBy, inputFieldsTerminatedBy );\n  }\n\n  public String getInputLinesTerminatedBy() {\n    return inputLinesTerminatedBy;\n  }\n\n  public 
void setInputLinesTerminatedBy( String inputLinesTerminatedBy ) {\n    this.inputLinesTerminatedBy = propertyChange( INPUT_LINES_TERMINATED_BY, this.inputLinesTerminatedBy, inputLinesTerminatedBy );\n  }\n\n  public String getInputOptionallyEnclosedBy() {\n    return inputOptionallyEnclosedBy;\n  }\n\n  public void setInputOptionallyEnclosedBy( String inputOptionallyEnclosedBy ) {\n    this.inputOptionallyEnclosedBy = propertyChange( INPUT_OPTIONALLY_ENCLOSED_BY, this.inputOptionallyEnclosedBy, inputOptionallyEnclosedBy );\n  }\n\n  public String getBinDir() {\n    return binDir;\n  }\n\n  public void setBinDir( String binDir ) {\n    this.binDir = propertyChange( BIN_DIR, this.binDir, binDir );\n  }\n\n  public String getClassName() {\n    return className;\n  }\n\n  public void setClassName( String className ) {\n    this.className = propertyChange( CLASS_NAME, this.className, className );\n  }\n\n  public String getJarFile() {\n    return jarFile;\n  }\n\n  public void setJarFile( String jarFile ) {\n    this.jarFile = propertyChange( JAR_FILE, this.jarFile, jarFile );\n  }\n\n  public String getOutdir() {\n    return outdir;\n  }\n\n  public void setOutdir( String outdir ) {\n    this.outdir = propertyChange( OUTDIR, this.outdir, outdir );\n  }\n\n  public String getPackageName() {\n    return packageName;\n  }\n\n  public void setPackageName( String packageName ) {\n    this.packageName = propertyChange( PACKAGE_NAME, this.packageName, packageName );\n  }\n\n  public String getMapColumnJava() {\n    return mapColumnJava;\n  }\n\n  public void setMapColumnJava( String mapColumnJava ) {\n    this.mapColumnJava = propertyChange( MAP_COLUMN_JAVA, this.mapColumnJava, mapColumnJava );\n  }\n\n  public String getDeleteCompileDir() {\n    return deleteCompileDir;\n  }\n\n  public void setDeleteCompileDir( String deleteCompileDir ) {\n    this.deleteCompileDir = propertyChange( DELETE_COMPILE_DIR, this.deleteCompileDir, deleteCompileDir );\n  }\n\n  public String 
getTable() {\n    return table;\n  }\n\n  public void setTable( String table ) {\n    this.table = propertyChange( TABLE, this.table, table );\n  }\n\n  public String getNumMappers() {\n    return numMappers;\n  }\n\n  public void setNumMappers( String numMappers ) {\n    this.numMappers = propertyChange( NUM_MAPPERS, this.numMappers, numMappers );\n  }\n\n  public String getCommandLine() {\n    return commandLine;\n  }\n\n  public void setCommandLine( String commandLine ) {\n    this.commandLine = propertyChange( COMMAND_LINE, this.commandLine, commandLine );\n  }\n\n  public String getMode() {\n    return mode;\n  }\n\n  public JobEntryMode getModeAsEnum() {\n    try {\n      return JobEntryMode.valueOf( getMode() );\n    } catch ( Exception ex ) {\n      // Not a valid ui mode, return the default\n      return JobEntryMode.QUICK_SETUP;\n    }\n  }\n\n  /**\n   * Sets the mode based on the enum value\n   *\n   * @param mode\n   */\n  public void setMode( JobEntryMode mode ) {\n    setMode( mode.name() );\n  }\n\n  public void setMode( String mode ) {\n    this.mode = propertyChange( MODE, this.mode, mode );\n  }\n\n  public String getClusterName() {\n    return clusterName;\n  }\n\n  public void setClusterName( String clusterName ) {\n    this.clusterName = propertyChange( \"clusterName\", this.clusterName, clusterName );\n  }\n\n  public String getNamenodeHost() {\n    return getNamedCluster().getHdfsHost();\n  }\n\n  public void setNamenodeHost( String namenodeHost ) {\n    getNamedCluster().setHdfsHost( propertyChange( NAMENODE_HOST, getNamenodeHost(), namenodeHost ) );\n  }\n\n  public String getShimIdentifier() {\n    return getNamedCluster().getShimIdentifier();\n  }\n\n  public void setShimIdentifier( String shimIdentifier ) {\n    getNamedCluster().setShimIdentifier( propertyChange( SHIM_IDENTIFIER, getShimIdentifier(), shimIdentifier ) );\n  }\n\n  public String getNamenodePort() {\n    return getNamedCluster().getHdfsPort();\n  }\n\n  public void 
setNamenodePort( String namenodePort ) {\n    getNamedCluster().setHdfsPort( propertyChange( NAMENODE_PORT, getNamenodePort(), namenodePort ) );\n  }\n\n  public String getJobtrackerHost() {\n    return getNamedCluster().getJobTrackerHost();\n  }\n\n  public void setJobtrackerHost( String jobtrackerHost ) {\n    getNamedCluster().setJobTrackerHost( propertyChange( JOBTRACKER_HOST, getJobtrackerHost(), jobtrackerHost ) );\n  }\n\n  public String getJobtrackerPort() {\n    return getNamedCluster().getJobTrackerPort();\n  }\n\n  public void setJobtrackerPort( String jobtrackerPort ) {\n    getNamedCluster().setJobTrackerPort( propertyChange( JOBTRACKER_PORT, getJobtrackerPort(), jobtrackerPort ) );\n  }\n\n  public String getHadoopMapredHome() {\n    return hadoopMapredHome;\n  }\n\n  public void setHadoopMapredHome( String hadoopMapredHome ) {\n    this.hadoopMapredHome = propertyChange( HADOOP_MAPRED_HOME, this.hadoopMapredHome, hadoopMapredHome );\n  }\n\n  public String getPasswordAlias() {\n    return passwordAlias;\n  }\n\n  public void setPasswordAlias( String passwordAlias ) {\n    this.passwordAlias = propertyChange( PASSWORD_ALIAS, this.passwordAlias, passwordAlias );\n  }\n\n  public String getPasswordFile() {\n    return passwordFile;\n  }\n\n  public void setPasswordFile( String passwordFile ) {\n    this.passwordFile = propertyChange( PASSWORD_FILE, this.passwordFile, passwordFile );\n  }\n\n  public String getRelaxedIsolation() {\n    return relaxedIsolation;\n  }\n\n  public void setRelaxedIsolation( String relaxedIsolation ) {\n    this.relaxedIsolation = propertyChange( RELAXED_ISOLATION, this.relaxedIsolation, relaxedIsolation );\n  }\n\n  public String getSkipDistCache() {\n    return skipDistCache;\n  }\n\n  public void setSkipDistCache( String skipDistCache ) {\n    this.skipDistCache = propertyChange( SKIP_DIST_CACHE, this.skipDistCache, skipDistCache );\n  }\n\n  public String getMapreduceJobName() {\n    return mapreduceJobName;\n  }\n\n  
public void setMapreduceJobName( String mapreduceJobName ) {\n    this.mapreduceJobName = propertyChange( MAPREDUCE_JOB_NAME, this.mapreduceJobName, mapreduceJobName );\n  }\n\n  public String getValidate() {\n    return validate;\n  }\n\n  public void setValidate( String validate ) {\n    this.validate = propertyChange( VALIDATE, this.validate, validate );\n  }\n\n  public String getValidationFailureHandler() {\n    return validationFailureHandler;\n  }\n\n  public void setValidationFailureHandler( String validationFailureHandler ) {\n    this.validationFailureHandler = propertyChange( VALIDATION_FAILURE_HANDLER, this.validationFailureHandler, validationFailureHandler );\n  }\n\n  public String getValidationThreshold() {\n    return validationThreshold;\n  }\n\n  public void setValidationThreshold( String validationThreshold ) {\n    this.validationThreshold = propertyChange( VALIDATION_THRESHOLD, this.validationThreshold, validationThreshold );\n  }\n\n  public String getValidator() {\n    return validator;\n  }\n\n  public void setValidator( String validator ) {\n    this.validator = propertyChange( VALIDATOR, this.validator, validator );\n  }\n\n  public String getHcatalogDatabase() {\n    return hcatalogDatabase;\n  }\n\n  public void setHcatalogDatabase( String hcatalogDatabase ) {\n    this.hcatalogDatabase = propertyChange( HCATALOG_DATABASE, this.hcatalogDatabase, hcatalogDatabase );\n  }\n\n  public String getHcatalogHome() {\n    return hcatalogHome;\n  }\n\n  public void setHcatalogHome( String hcatalogHome ) {\n    this.hcatalogHome = propertyChange( HCATALOG_HOME, this.hcatalogHome, hcatalogHome );\n  }\n\n  public String getHcatalogPartitionKeys() {\n    return hcatalogPartitionKeys;\n  }\n\n  public void setHcatalogPartitionKeys( String hcatalogPartitionKeys ) {\n    this.hcatalogPartitionKeys = propertyChange( HCATALOG_PARTITION_KEYS, this.hcatalogPartitionKeys, hcatalogPartitionKeys );\n  }\n\n  public String getHcatalogPartitionValues() {\n    
return hcatalogPartitionValues;\n  }\n\n  public void setHcatalogPartitionValues( String hcatalogPartitionValues ) {\n    this.hcatalogPartitionValues = propertyChange( HCATALOG_PARTITION_VALUES, this.hcatalogPartitionValues, hcatalogPartitionValues );\n  }\n\n  public String getHcatalogTable() {\n    return hcatalogTable;\n  }\n\n  public void setHcatalogTable( String hcatalogTable ) {\n    this.hcatalogTable = propertyChange( HCATALOG_TABLE, this.hcatalogTable, hcatalogTable );\n  }\n\n  public String getHiveHome() {\n    return hiveHome;\n  }\n\n  public void setHiveHome( String hiveHome ) {\n    this.hiveHome = propertyChange( HIVE_HOME, this.hiveHome, hiveHome );\n  }\n\n  public String getHivePartitionKey() {\n    return hivePartitionKey;\n  }\n\n  public void setHivePartitionKey( String hivePartitionKey ) {\n    this.hivePartitionKey = propertyChange( HIVE_PARTITION_KEY, this.hivePartitionKey, hivePartitionKey );\n  }\n\n  public String getHivePartitionValue() {\n    return hivePartitionValue;\n  }\n\n  public void setHivePartitionValue( String hivePartitionValue ) {\n    this.hivePartitionValue = propertyChange( HIVE_PARTITION_VALUE, this.hivePartitionValue, hivePartitionValue );\n  }\n\n  public String getMapColumnHive() {\n    return mapColumnHive;\n  }\n\n  public void setMapColumnHive( String mapColumnHive ) {\n    this.mapColumnHive = propertyChange( MAP_COLUMN_HIVE, this.mapColumnHive, mapColumnHive );\n  }\n\n  public String getInputNullString() {\n    return inputNullString;\n  }\n\n  public void setInputNullString( String inputNullString ) {\n    this.inputNullString = propertyChange( INPUT_NULL_STRING, this.inputNullString, inputNullString );\n  }\n\n  public String getInputNullNonString() {\n    return inputNullNonString;\n  }\n\n  public void setInputNullNonString( String inputNullNonString ) {\n    this.inputNullNonString = propertyChange( INPUT_NULL_NON_STRING, this.inputNullNonString, inputNullNonString );\n  }\n\n  public String getNullString() 
{\n    return nullString;\n  }\n\n  public void setNullString( String nullString ) {\n    this.nullString = propertyChange( NULL_STRING, this.nullString, nullString );\n  }\n\n  public String getNullNonString() {\n    return nullNonString;\n  }\n\n  public void setNullNonString( String nullNonString ) {\n    this.nullNonString = propertyChange( NULL_NON_STRING, this.nullNonString, nullNonString );\n  }\n\n  public String getFiles() {\n    return files;\n  }\n\n  public void setFiles( String files ) {\n    this.files = propertyChange( FILES, this.files, files );\n  }\n\n  public String getLibjars() {\n    return libjars;\n  }\n\n  public void setLibjars( String libjars ) {\n    this.libjars = propertyChange( LIBJARS, this.libjars, libjars );\n  }\n\n  public String getArchives() {\n    return archives;\n  }\n\n  public void setArchives( String archives ) {\n    this.archives = propertyChange( ARCHIVES, this.archives, archives );\n  }\n\n  public AbstractModelList<PropertyEntry> getCustomArguments() {\n    if ( customArguments == null ) {\n      customArguments = new AbstractModelList<>();\n    }\n    return customArguments;\n  }\n\n  public void setCustomArguments( AbstractModelList<PropertyEntry> customArguments ) {\n    this.customArguments = customArguments;\n  }\n\n  public void loadClusterConfig( Repository rep, ObjectId id ) throws KettleException {\n    setNamedCluster( null );\n    setNamenodeHost( rep.getJobEntryAttributeString( id, NAMENODE_HOST ) );\n    setNamenodePort( rep.getJobEntryAttributeString( id, NAMENODE_PORT ) );\n    setShimIdentifier( rep.getJobEntryAttributeString( id, SHIM_IDENTIFIER ) );\n    setJobtrackerHost( rep.getJobEntryAttributeString( id, JOBTRACKER_HOST ) );\n    setJobtrackerPort( rep.getJobEntryAttributeString( id, JOBTRACKER_PORT ) );\n  }\n\n  public void loadClusterConfig( Node entrynode ) {\n    setNamedCluster( null );\n    setNamenodeHost( XMLHandler.getTagValue( entrynode, NAMENODE_HOST ) );\n    setNamenodePort( 
XMLHandler.getTagValue( entrynode, NAMENODE_PORT ) );\n    setShimIdentifier( XMLHandler.getTagValue( entrynode, SHIM_IDENTIFIER ) );\n    setJobtrackerHost( XMLHandler.getTagValue( entrynode, JOBTRACKER_HOST ) );\n    setJobtrackerPort( XMLHandler.getTagValue( entrynode, JOBTRACKER_PORT ) );\n  }\n\n  public String getClusterXML() {\n    StringBuilder builder = new StringBuilder();\n    for ( Map.Entry<String, String> entry : namedClusterProperties( getNamedCluster() ).entrySet() ) {\n      builder.append( XMLHandler.addTagValue( entry.getKey(), entry.getValue() ) );\n    }\n    return builder.toString();\n  }\n\n  public void saveClusterConfig( Repository rep, ObjectId id_job, JobEntryInterface jobEntry ) throws KettleException {\n    ObjectId objectId = jobEntry.getObjectId();\n    for ( Map.Entry<String, String> entry : namedClusterProperties( getNamedCluster() ).entrySet() ) {\n      rep.saveJobEntryAttribute( id_job, objectId, entry.getKey(), entry.getValue() );\n    }\n  }\n\n  public boolean isAdvancedClusterConfigSet() {\n    return Strings.isNullOrEmpty( getClusterName() ) && ncPropertiesNotNullOrEmpty( getNamedCluster() );\n  }\n\n  private static Map<String, String> namedClusterProperties( NamedCluster namedCluster ) {\n    return ImmutableMap.of(\n      NAMENODE_HOST, Strings.nullToEmpty( namedCluster.getHdfsHost() ),\n      NAMENODE_PORT, Strings.nullToEmpty( namedCluster.getHdfsPort() ),\n      SHIM_IDENTIFIER, Strings.nullToEmpty( namedCluster.getShimIdentifier() ),\n      JOBTRACKER_HOST, Strings.nullToEmpty( namedCluster.getJobTrackerHost() ),\n      JOBTRACKER_PORT, Strings.nullToEmpty( namedCluster.getJobTrackerPort() )\n    );\n  }\n\n  @VisibleForTesting\n  boolean ncPropertiesNotNullOrEmpty( NamedCluster nc ) {\n    return !Strings.isNullOrEmpty( nc.getHdfsHost() ) || !Strings.isNullOrEmpty( nc.getHdfsPort() )\n      || !Strings.isNullOrEmpty( nc.getJobTrackerHost() ) || !Strings.isNullOrEmpty( nc.getJobTrackerPort() );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/SqoopExportConfig.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\n\n/**\n * Configuration for a Sqoop Export\n */\npublic class SqoopExportConfig extends SqoopConfig {\n  public static final String EXPORT_DIR = \"exportDir\";\n  public static final String UPDATE_KEY = \"updateKey\";\n  public static final String UPDATE_MODE = \"updateMode\";\n  public static final String DIRECT = \"direct\";\n  public static final String STAGING_TABLE = \"stagingTable\";\n  public static final String CLEAR_STAGING_TABLE = \"clearStagingTable\";\n  public static final String BATCH = \"batch\";\n\n  public static final String CALL = \"call\";\n  public static final String COLUMNS = \"columns\";\n  private final SqoopExportJobEntry jobEntry;\n\n  @CommandLineArgument( name = \"export-dir\" )\n  private String exportDir;\n  @CommandLineArgument( name = \"update-key\" )\n  private String updateKey;\n  @CommandLineArgument( name = \"update-mode\" )\n  private String updateMode;\n  @CommandLineArgument( name = DIRECT, flag = true )\n  private String direct;\n  @CommandLineArgument( name = \"staging-table\" )\n  private String stagingTable;\n  @CommandLineArgument( name = \"clear-staging-table\", flag = true )\n  private String clearStagingTable;\n  @CommandLineArgument( name = BATCH, flag = true )\n  private String batch;\n\n  @CommandLineArgument( name = \"call\" )\n  private String call;\n  @CommandLineArgument( name = \"columns\" )\n  private String columns;\n\n  public SqoopExportConfig( SqoopExportJobEntry jobEntry ) {\n    
this.jobEntry = jobEntry;\n  }\n\n  @Override protected NamedCluster createClusterTemplate() {\n    return jobEntry.getNamedClusterService().getClusterTemplate();\n  }\n\n  public String getExportDir() {\n    return exportDir;\n  }\n\n  public void setExportDir( String exportDir ) {\n    String old = this.exportDir;\n    this.exportDir = exportDir;\n    pcs.firePropertyChange( EXPORT_DIR, old, this.exportDir );\n  }\n\n  public String getUpdateKey() {\n    return updateKey;\n  }\n\n  public void setUpdateKey( String updateKey ) {\n    String old = this.updateKey;\n    this.updateKey = updateKey;\n    pcs.firePropertyChange( UPDATE_KEY, old, this.updateKey );\n  }\n\n  public String getUpdateMode() {\n    return updateMode;\n  }\n\n  public void setUpdateMode( String updateMode ) {\n    String old = this.updateMode;\n    this.updateMode = updateMode;\n    pcs.firePropertyChange( UPDATE_MODE, old, this.updateMode );\n  }\n\n  public String getDirect() {\n    return direct;\n  }\n\n  public void setDirect( String direct ) {\n    String old = this.direct;\n    this.direct = direct;\n    pcs.firePropertyChange( DIRECT, old, this.direct );\n  }\n\n  public String getStagingTable() {\n    return stagingTable;\n  }\n\n  public void setStagingTable( String stagingTable ) {\n    String old = this.stagingTable;\n    this.stagingTable = stagingTable;\n    pcs.firePropertyChange( STAGING_TABLE, old, this.stagingTable );\n  }\n\n  public String getClearStagingTable() {\n    return clearStagingTable;\n  }\n\n  public void setClearStagingTable( String clearStagingTable ) {\n    String old = this.clearStagingTable;\n    this.clearStagingTable = clearStagingTable;\n    pcs.firePropertyChange( CLEAR_STAGING_TABLE, old, this.clearStagingTable );\n  }\n\n  public String getBatch() {\n    return batch;\n  }\n\n  public void setBatch( String batch ) {\n    String old = this.batch;\n    this.batch = batch;\n    pcs.firePropertyChange( BATCH, old, this.batch );\n  }\n\n  public String 
getCall() {\n    return call;\n  }\n\n  public void setCall( String call ) {\n    String old = this.call;\n    this.call = call;\n    pcs.firePropertyChange( CALL, old, this.call );\n  }\n\n  public String getColumns() {\n    return columns;\n  }\n\n  public void setColumns( String columns ) {\n    String old = this.columns;\n    this.columns = columns;\n    pcs.firePropertyChange( COLUMNS, old, this.columns );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/SqoopExportJobEntry.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.di.cluster.SlaveServer;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.ProvidesDatabaseConnectionInformation;\nimport org.pentaho.di.core.annotations.JobEntry;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.runtime.test.action.impl.RuntimeTestActionServiceImpl;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.w3c.dom.Node;\n\nimport java.util.List;\n\n/**\n * Provides a way to orchestrate <a href=\"http://sqoop.apache.org/\">Sqoop</a> exports.\n */\n@JobEntry( id = \"SqoopExport\", name = \"Sqoop.Export.PluginName\", description = \"Sqoop.Export.PluginDescription\",\n    categoryDescription = \"i18n:org.pentaho.di.job:JobCategory.Category.BigData\", image = \"sqoop-export.svg\",\n    i18nPackageName = \"org.pentaho.di.job.entries.sqoop\", version = \"1\" )\npublic class SqoopExportJobEntry extends 
AbstractSqoopJobEntry<SqoopExportConfig> implements\n    ProvidesDatabaseConnectionInformation {\n\n  // Database meta object for UI interactions. Populated during job load or configuration changes via the UI.\n  private transient DatabaseMeta databaseMeta;\n\n  public SqoopExportJobEntry() {\n    super( NamedClusterManager.getInstance(), BigDataServicesHelper.getNamedClusterServiceLocator(),\n      RuntimeTestActionServiceImpl.getInstance(), RuntimeTesterImpl.getInstance() );\n  }\n\n  public SqoopExportJobEntry( NamedClusterService namedClusterService,\n                              NamedClusterServiceLocator namedClusterServiceLocator,\n                              RuntimeTestActionService runtimeTestActionService, RuntimeTester runtimeTester ) {\n    super( namedClusterService, namedClusterServiceLocator, runtimeTestActionService, runtimeTester );\n  }\n\n  @Override protected SqoopExportConfig createJobConfig() {\n    return new SqoopExportConfig( this );\n  }\n\n  /**\n   * @return the name of the Sqoop export tool: \"export\"\n   */\n  @Override\n  protected String getToolName() {\n    return \"export\";\n  }\n\n  /**\n   * @return the current database meta. Agile BI uses this to generate a model.\n   */\n  @Override\n  public DatabaseMeta getDatabaseMeta() {\n    return databaseMeta;\n  }\n\n  /**\n   * @return the current table name from the configuration. Agile BI uses this to generate a model.\n   */\n  @Override\n  public String getTableName() {\n    return environmentSubstitute( getJobConfig().getTable() );\n  }\n\n  /**\n   * @return the current schema name from the configuration. 
Agile BI uses this to generate a model.\n   */\n  @Override\n  public String getSchemaName() {\n    return environmentSubstitute( getJobConfig().getSchema() );\n  }\n\n  @Override\n  public String getMissingDatabaseConnectionInformationMessage() {\n    if ( Const.isEmpty( getJobConfig().getDatabase() ) && !Const.isEmpty( getJobConfig().getConnect() ) ) {\n      // We're using advanced configuration; alert the user that we cannot visualize unless the database\n      // connection is one managed by Kettle\n      return BaseMessages.getString( AbstractSqoopJobEntry.class, \"ErrorMustConfigureDatabaseConnectionFromList\" );\n    }\n    // Use the default error message\n    return null;\n  }\n\n  /**\n   * Additionally sets the database meta if a database is set.\n   */\n  @Override\n  public void loadXML( Node node, List<DatabaseMeta> databaseMetas, List<SlaveServer> slaveServers,\n      Repository repository ) throws KettleXMLException {\n    super.loadXML( node, databaseMetas, slaveServers, repository );\n    setDatabaseMeta( DatabaseMeta.findDatabase( databaseMetas, getJobConfig().getDatabase() ) );\n  }\n\n  /**\n   * Additionally sets the database meta if a database is set.\n   */\n  @Override\n  public void loadRep( Repository rep, ObjectId id_jobentry, List<DatabaseMeta> databases,\n      List<SlaveServer> slaveServers ) throws KettleException {\n    super.loadRep( rep, id_jobentry, databases, slaveServers );\n    setDatabaseMeta( DatabaseMeta.findDatabase( databases, getJobConfig().getDatabase() ) );\n  }\n\n  /**\n   * Set the current database meta.\n   *\n   * @param databaseMeta\n   *          Database meta representing the database this job is currently configured to export to\n   */\n  public void setDatabaseMeta( DatabaseMeta databaseMeta ) {\n    this.databaseMeta = databaseMeta;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/SqoopExportJobEntryDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.big.data.kettle.plugins.sqoop.ui.AbstractSqoopJobEntryDialog;\nimport org.pentaho.big.data.kettle.plugins.sqoop.ui.SqoopExportJobEntryController;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.binding.BindingFactory;\n\nimport java.lang.reflect.InvocationTargetException;\n\n/**\n * Dialog for the Sqoop Export job entry.\n * \n * @see SqoopExportJobEntry\n */\n@PluginDialog( id = \"SqoopExport\", image = \"sqoop-export.svg\", pluginType = PluginDialog.PluginType.JOBENTRY,\n        documentationUrl = \"pdi-job-entries-reference-overview/sqoop-export-job\" )\npublic class SqoopExportJobEntryDialog extends AbstractSqoopJobEntryDialog<SqoopExportConfig, SqoopExportJobEntry> {\n\n  public SqoopExportJobEntryDialog( Shell parent, JobEntryInterface jobEntry, Repository rep, JobMeta jobMeta )\n    throws XulException, InvocationTargetException {\n    super( parent, jobEntry, rep, jobMeta );\n  }\n\n  @Override\n  protected String getXulFile() {\n    return \"org/pentaho/big/data/kettle/plugins/sqoop/xul/SqoopExportJobEntry.xul\";\n  }\n\n  @Override\n  protected Class<?> getMessagesClass() {\n    return SqoopExportJobEntry.class;\n  }\n\n  @Override\n  protected SqoopExportJobEntryController 
createController( XulDomContainer container, SqoopExportJobEntry jobEntry,\n                                                            BindingFactory bindingFactory ) {\n    return new SqoopExportJobEntryController( jobMeta, container, jobEntry, bindingFactory );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/SqoopImportConfig.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.ui.xul.util.AbstractModelList;\n\n/**\n * Configuration for a Sqoop Import\n */\npublic class SqoopImportConfig extends SqoopConfig {\n  // Import control arguments\n  public static final String TARGET_DIR = \"targetDir\";\n  public static final String WAREHOUSE_DIR = \"warehouseDir\";\n  public static final String APPEND = \"append\";\n  public static final String AS_AVRODATAFILE = \"asAvrodatafile\";\n  public static final String AS_SEQUENCEFILE = \"asSequencefile\";\n  public static final String AS_TEXTFILE = \"asTextfile\";\n  public static final String BOUNDARY_QUERY = \"boundaryQuery\";\n  public static final String COLUMNS = \"columns\";\n  public static final String DIRECT = \"direct\";\n  public static final String DIRECT_SPLIT_SIZE = \"directSplitSize\";\n  public static final String INLINE_LOB_LIMIT = \"inlineLobLimit\";\n  public static final String SPLIT_BY = \"splitBy\";\n  public static final String QUERY = \"query\";\n  public static final String WHERE = \"where\";\n  public static final String COMPRESS = \"compress\";\n  public static final String COMPRESSION_CODEC = \"compressionCodec\";\n\n  // Incremental import arguments\n  public static final String CHECK_COLUMN = \"checkColumn\";\n  public static final String INCREMENTAL = \"incremental\";\n  public static final String LAST_VALUE = \"lastValue\";\n\n  // Hive arguments\n  public static final String HIVE_IMPORT 
= \"hiveImport\";\n  public static final String HIVE_OVERWRITE = \"hiveOverwrite\";\n  public static final String CREATE_HIVE_TABLE = \"createHiveTable\";\n  public static final String HIVE_TABLE = \"hiveTable\";\n  public static final String HIVE_DROP_IMPORT_DELIMS = \"hiveDropImportDelims\";\n  public static final String HIVE_DELIMS_REPLACEMENT = \"hiveDelimsReplacement\";\n\n  // HBase arguments\n  public static final String COLUMN_FAMILY = \"columnFamily\";\n  public static final String HBASE_CREATE_TABLE = \"hbaseCreateTable\";\n  public static final String HBASE_ROW_KEY = \"hbaseRowKey\";\n  public static final String HBASE_TABLE = \"hbaseTable\";\n  public static final String HBASE_ZOOKEEPER_QUORUM = \"hbaseZookeeperQuorum\";\n  public static final String HBASE_ZOOKEEPER_CLIENT_PORT = \"hbaseZookeeperClientPort\";\n\n  public static final String AS_PARQUETFILE = \"asParquetfile\";\n  public static final String DELETE_TARGET_DIR = \"deleteTargetDir\";\n\n  public static final String FETCH_SIZE = \"fetchSize\";\n  public static final String MERGE_KEY = \"mergeKey\";\n\n  public static final String HIVE_DATABASE = \"hiveDatabase\";\n  public static final String HBASE_BULKLOADER = \"hbaseBulkload\";\n\n  public static final String CREATE_HCATALOG_TABLE = \"createHcatalogTable\";\n  public static final String HCATALOG_STORAGE_STANZA = \"hcatalogStorageStanza\";\n\n  public static final String ACCUMULO_BATCH_SIZE = \"accumuloBatchSize\";\n  public static final String ACCUMULO_COLUMN_FAMILY = \"accumuloColumnFamily\";\n  public static final String ACCUMULO_CREATE_TABLE = \"accumuloCreateTable\";\n  public static final String ACCUMULO_INSTANCE = \"accumuloInstance\";\n  public static final String ACCUMULO_MAX_LATENCY = \"accumuloMaxLatency\";\n  public static final String ACCUMULO_PASSWORD = \"accumuloPassword\";\n  public static final String ACCUMULO_ROW_KEY = \"accumuloRowKey\";\n  public static final String ACCUMULO_TABLE = \"accumuloTable\";\n  public static 
final String ACCUMULO_USER = \"accumuloUser\";\n  public static final String ACCUMULO_VISIBILITY = \"accumuloVisibility\";\n  public static final String ACCUMULO_ZOOKEPERS = \"accumuloZookeepers\";\n\n  // Sqoop 1.4.7 Import / HiveServer2 and external table arguments\n  public static final String HS2_URL = \"hs2Url\";\n  public static final String HS2_USER = \"hs2User\";\n  public static final String HS2_PASSWORD = \"hs2Password\";\n  public static final String HS2_KEYTAB = \"hs2Keytab\";\n  public static final String EXTERNAL_TABLE_DIR = \"externalTableDir\";\n  public static final String HBASE_NULL_INCREMENTAL_MODE = \"hbaseNullIncrementalMode\";\n  public static final String HCATALOG_EXTERNAL_TABLE = \"hcatalogExternalTable\";\n  public static final String TEMPORARY_ROOTDIR = \"temporaryRootdir\";\n\n  private final SqoopImportJobEntry jobEntry;\n\n  // Import control arguments\n  @CommandLineArgument( name = \"target-dir\" )\n  private String targetDir;\n  @CommandLineArgument( name = \"warehouse-dir\" )\n  private String warehouseDir;\n  @CommandLineArgument( name = APPEND, flag = true )\n  private String append;\n  @CommandLineArgument( name = \"as-avrodatafile\", flag = true )\n  private String asAvrodatafile;\n  @CommandLineArgument( name = \"as-sequencefile\", flag = true )\n  private String asSequencefile;\n  @CommandLineArgument( name = \"as-textfile\", flag = true )\n  private String asTextfile;\n  @CommandLineArgument( name = \"boundary-query\" )\n  private String boundaryQuery;\n  @CommandLineArgument( name = COLUMNS )\n  private String columns;\n  @CommandLineArgument( name = DIRECT, flag = true )\n  private String direct;\n  @CommandLineArgument( name = \"direct-split-size\" )\n  private String directSplitSize;\n  @CommandLineArgument( name = \"inline-lob-limit\" )\n  private String inlineLobLimit;\n  @CommandLineArgument( name = \"split-by\" )\n  private String splitBy;\n  @CommandLineArgument( name = QUERY )\n  private String query;\n  
@CommandLineArgument( name = WHERE )\n  private String where;\n  @CommandLineArgument( name = COMPRESS, flag = true )\n  private String compress;\n  @CommandLineArgument( name = \"compression-codec\" )\n  private String compressionCodec;\n\n  // Incremental import arguments\n  @CommandLineArgument( name = \"check-column\" )\n  private String checkColumn;\n  @CommandLineArgument( name = INCREMENTAL )\n  private String incremental;\n  @CommandLineArgument( name = \"last-value\" )\n  private String lastValue;\n\n  // Hive arguments\n  @CommandLineArgument( name = \"hive-import\", flag = true )\n  private String hiveImport;\n  @CommandLineArgument( name = \"hive-overwrite\", flag = true )\n  private String hiveOverwrite;\n  @CommandLineArgument( name = \"create-hive-table\", flag = true )\n  private String createHiveTable;\n  @CommandLineArgument( name = \"hive-table\" )\n  private String hiveTable;\n  @CommandLineArgument( name = \"hive-drop-import-delims\", flag = true )\n  private String hiveDropImportDelims;\n  @CommandLineArgument( name = \"hive-delims-replacement\" )\n  private String hiveDelimsReplacement;\n\n  // HBase arguments\n  @CommandLineArgument( name = \"column-family\" )\n  private String columnFamily;\n  @CommandLineArgument( name = \"hbase-create-table\", flag = true )\n  private String hbaseCreateTable;\n  @CommandLineArgument( name = \"hbase-row-key\" )\n  private String hbaseRowKey;\n  @CommandLineArgument( name = \"hbase-table\" )\n  private String hbaseTable;\n\n  @CommandLineArgument( name = \"as-parquetfile\", flag = true )\n  private String asParquetfile;\n  @CommandLineArgument( name = \"delete-target-dir\", flag = true )\n  private String deleteTargetDir;\n\n  @CommandLineArgument( name = \"fetch-size\" )\n  private String fetchSize;\n  @CommandLineArgument( name = \"merge-key\" )\n  private String mergeKey;\n\n  @CommandLineArgument( name = \"hive-database\" )\n  private String hiveDatabase;\n\n  @CommandLineArgument( name = 
\"hbase-bulkload\", flag = true )\n  private String hbaseBulkload;\n\n  @CommandLineArgument( name = \"create-hcatalog-table\", flag = true )\n  private String createHcatalogTable;\n  @CommandLineArgument( name = \"hcatalog-storage-stanza\" )\n  private String hcatalogStorageStanza;\n\n  @CommandLineArgument( name = \"accumulo-batch-size\" )\n  private String accumuloBatchSize;\n  @CommandLineArgument( name = \"accumulo-column-family\" )\n  private String accumuloColumnFamily;\n  @CommandLineArgument( name = \"accumulo-create-table\", flag = true )\n  private String accumuloCreateTable;\n  @CommandLineArgument( name = \"accumulo-instance\" )\n  private String accumuloInstance;\n  @CommandLineArgument( name = \"accumulo-max-latency\" )\n  private String accumuloMaxLatency;\n  @CommandLineArgument( name = \"accumulo-password\" )\n  private String accumuloPassword;\n  @CommandLineArgument( name = \"accumulo-row-key\" )\n  private String accumuloRowKey;\n  @CommandLineArgument( name = \"accumulo-table\" )\n  private String accumuloTable;\n  @CommandLineArgument( name = \"accumulo-user\" )\n  private String accumuloUser;\n  @CommandLineArgument( name = \"accumulo-visibility\" )\n  private String accumuloVisibility;\n  @CommandLineArgument( name = \"accumulo-zookeepers\" )\n  private String accumuloZookeepers;\n\n  // Sqoop 1.4.7 Import / HiveServer2 and external table arguments\n  @CommandLineArgument( name = \"hs2-url\" )\n  private String hs2Url;\n  @CommandLineArgument( name = \"hs2-user\" )\n  private String hs2User;\n  @CommandLineArgument( name = \"hs2-password\" )\n  private String hs2Password;\n  @CommandLineArgument( name = \"hs2-keytab\" )\n  private String hs2Keytab;\n  @CommandLineArgument( name = \"external-table-dir\" )\n  private String externalTableDir;\n  @CommandLineArgument( name = \"hbase-null-incremental-mode\" )\n  private String hbaseNullIncrementalMode;\n  @CommandLineArgument( name = \"hcatalog-external-table\", flag = true )\n  private String 
hcatalogExternalTable;\n  @CommandLineArgument( name = \"temporary-rootdir\" )\n  private String temporaryRootdir;\n\n  @CommandLineArgument( name = \"input-null-non-string\" )\n  private String inputNullNonString;\n  @CommandLineArgument( name = \"input-null-string\" )\n  private String inputNullString;\n\n  // Non command line arguments for configuring HBase connection information\n  private String hbaseZookeeperQuorum;\n  private String hbaseZookeeperClientPort;\n\n  public SqoopImportConfig( SqoopImportJobEntry jobEntry ) {\n    this.jobEntry = jobEntry;\n  }\n\n  @Override protected NamedCluster createClusterTemplate() {\n    return jobEntry.getNamedClusterService().getClusterTemplate();\n  }\n\n  public String getTargetDir() {\n    return targetDir;\n  }\n\n  public void setTargetDir( String targetDir ) {\n    this.targetDir = propertyChange( TARGET_DIR, this.targetDir, targetDir );\n  }\n\n  public String getWarehouseDir() {\n    return warehouseDir;\n  }\n\n  public void setWarehouseDir( String warehouseDir ) {\n    this.warehouseDir = propertyChange( WAREHOUSE_DIR, this.warehouseDir, warehouseDir );\n  }\n\n  public String getAppend() {\n    return append;\n  }\n\n  public void setAppend( String append ) {\n    this.append = propertyChange( APPEND, this.append, append );\n  }\n\n  public String getAsAvrodatafile() {\n    return asAvrodatafile;\n  }\n\n  public void setAsAvrodatafile( String asAvrodatafile ) {\n    this.asAvrodatafile = propertyChange( AS_AVRODATAFILE, this.asAvrodatafile, asAvrodatafile );\n  }\n\n  public String getAsSequencefile() {\n    return asSequencefile;\n  }\n\n  public void setAsSequencefile( String asSequencefile ) {\n    this.asSequencefile = propertyChange( AS_SEQUENCEFILE, this.asSequencefile, asSequencefile );\n  }\n\n  public String getAsTextfile() {\n    return asTextfile;\n  }\n\n  public void setAsTextfile( String asTextfile ) {\n    this.asTextfile = propertyChange( AS_TEXTFILE, this.asTextfile, asTextfile );\n  }\n\n  
public String getBoundaryQuery() {\n    return boundaryQuery;\n  }\n\n  public void setBoundaryQuery( String boundaryQuery ) {\n    this.boundaryQuery = propertyChange( BOUNDARY_QUERY, this.boundaryQuery, boundaryQuery );\n  }\n\n  public String getColumns() {\n    return columns;\n  }\n\n  public void setColumns( String columns ) {\n    this.columns = propertyChange( COLUMNS, this.columns, columns );\n  }\n\n  public String getDirect() {\n    return direct;\n  }\n\n  public void setDirect( String direct ) {\n    this.direct = propertyChange( DIRECT, this.direct, direct );\n  }\n\n  public String getDirectSplitSize() {\n    return directSplitSize;\n  }\n\n  public void setDirectSplitSize( String directSplitSize ) {\n    this.directSplitSize = propertyChange( DIRECT_SPLIT_SIZE, this.directSplitSize, directSplitSize );\n  }\n\n  public String getInlineLobLimit() {\n    return inlineLobLimit;\n  }\n\n  public void setInlineLobLimit( String inlineLobLimit ) {\n    this.inlineLobLimit = propertyChange( INLINE_LOB_LIMIT, this.inlineLobLimit, inlineLobLimit );\n  }\n\n  public String getSplitBy() {\n    return splitBy;\n  }\n\n  public void setSplitBy( String splitBy ) {\n    this.splitBy = propertyChange( SPLIT_BY, this.splitBy, splitBy );\n  }\n\n  public String getQuery() {\n    return query;\n  }\n\n  public void setQuery( String query ) {\n    this.query = propertyChange( QUERY, this.query, query );\n  }\n\n  public String getWhere() {\n    return where;\n  }\n\n  public void setWhere( String where ) {\n    this.where = propertyChange( WHERE, this.where, where );\n  }\n\n  public String getCompress() {\n    return compress;\n  }\n\n  public void setCompress( String compress ) {\n    this.compress = propertyChange( COMPRESS, this.compress, compress );\n  }\n\n  public String getCompressionCodec() {\n    return compressionCodec;\n  }\n\n  public void setCompressionCodec( String compressionCodec ) {\n    this.compressionCodec = propertyChange( COMPRESSION_CODEC, 
this.compressionCodec, compressionCodec );\n  }\n\n  public String getCheckColumn() {\n    return checkColumn;\n  }\n\n  public void setCheckColumn( String checkColumn ) {\n    this.checkColumn = propertyChange( CHECK_COLUMN, this.checkColumn, checkColumn );\n  }\n\n  public String getIncremental() {\n    return incremental;\n  }\n\n  public void setIncremental( String incremental ) {\n    this.incremental = propertyChange( INCREMENTAL, this.incremental, incremental );\n  }\n\n  public String getLastValue() {\n    return lastValue;\n  }\n\n  public void setLastValue( String lastValue ) {\n    this.lastValue = propertyChange( LAST_VALUE, this.lastValue, lastValue );\n  }\n\n  public String getHiveImport() {\n    return hiveImport;\n  }\n\n  public void setHiveImport( String hiveImport ) {\n    this.hiveImport = propertyChange( HIVE_IMPORT, this.hiveImport, hiveImport );\n  }\n\n  public String getHiveOverwrite() {\n    return hiveOverwrite;\n  }\n\n  public void setHiveOverwrite( String hiveOverwrite ) {\n    this.hiveOverwrite = propertyChange( HIVE_OVERWRITE, this.hiveOverwrite, hiveOverwrite );\n  }\n\n  public String getCreateHiveTable() {\n    return createHiveTable;\n  }\n\n  public void setCreateHiveTable( String createHiveTable ) {\n    this.createHiveTable = propertyChange( CREATE_HIVE_TABLE, this.createHiveTable, createHiveTable );\n  }\n\n  public String getHiveTable() {\n    return hiveTable;\n  }\n\n  public void setHiveTable( String hiveTable ) {\n    this.hiveTable = propertyChange( HIVE_TABLE, this.hiveTable, hiveTable );\n  }\n\n  public String getHiveDropImportDelims() {\n    return hiveDropImportDelims;\n  }\n\n  public void setHiveDropImportDelims( String hiveDropImportDelims ) {\n    this.hiveDropImportDelims =\n      propertyChange( HIVE_DROP_IMPORT_DELIMS, this.hiveDropImportDelims, hiveDropImportDelims );\n  }\n\n  public String getHiveDelimsReplacement() {\n    return hiveDelimsReplacement;\n  }\n\n  public void setHiveDelimsReplacement( 
String hiveDelimsReplacement ) {\n    this.hiveDelimsReplacement =\n      propertyChange( HIVE_DELIMS_REPLACEMENT, this.hiveDelimsReplacement, hiveDelimsReplacement );\n  }\n\n  public String getColumnFamily() {\n    return columnFamily;\n  }\n\n  public void setColumnFamily( String columnFamily ) {\n    this.columnFamily = propertyChange( COLUMN_FAMILY, this.columnFamily, columnFamily );\n  }\n\n  public String getHbaseCreateTable() {\n    return hbaseCreateTable;\n  }\n\n  public void setHbaseCreateTable( String hbaseCreateTable ) {\n    this.hbaseCreateTable = propertyChange( HBASE_CREATE_TABLE, this.hbaseCreateTable, hbaseCreateTable );\n  }\n\n  public String getHbaseRowKey() {\n    return hbaseRowKey;\n  }\n\n  public void setHbaseRowKey( String hbaseRowKey ) {\n    this.hbaseRowKey = propertyChange( HBASE_ROW_KEY, this.hbaseRowKey, hbaseRowKey );\n  }\n\n  public String getHbaseTable() {\n    return hbaseTable;\n  }\n\n  public void setHbaseTable( String hbaseTable ) {\n    this.hbaseTable = propertyChange( HBASE_TABLE, this.hbaseTable, hbaseTable );\n  }\n\n  public String getHbaseZookeeperQuorum() {\n    return hbaseZookeeperQuorum;\n  }\n\n  public void setHbaseZookeeperQuorum( String hbaseZookeeperQuorum ) {\n    this.hbaseZookeeperQuorum =\n      propertyChange( HBASE_ZOOKEEPER_QUORUM, this.hbaseZookeeperQuorum, hbaseZookeeperQuorum );\n  }\n\n  public String getHbaseZookeeperClientPort() {\n    return hbaseZookeeperClientPort;\n  }\n\n  public void setHbaseZookeeperClientPort( String hbaseZookeeperClientPort ) {\n    this.hbaseZookeeperClientPort =\n      propertyChange( HBASE_ZOOKEEPER_CLIENT_PORT, this.hbaseZookeeperClientPort, hbaseZookeeperClientPort );\n  }\n\n  @Override\n  public AbstractModelList<ArgumentWrapper> getAdvancedArgumentsList() {\n    AbstractModelList<ArgumentWrapper> items = super.getAdvancedArgumentsList();\n\n    // Simple O(N) list walk to find the last index of HBase properties so we can\n    // group the zookeeper properties 
with them\n    int index = items.size();\n    int i = 0;\n    for ( ; i < items.size(); i++ ) {\n      if ( items.get( i ).getName().startsWith( \"hbase\" ) ) {\n        index = i + 1; // Add after this guy\n      }\n    }\n\n    try {\n      items.add( index, new ArgumentWrapper( HBASE_ZOOKEEPER_QUORUM, BaseMessages.getString( getClass(),\n          \"HBaseZookeeperQuorum.Label\" ),\n          false, \"\", 0, this, getClass().getMethod( \"getHbaseZookeeperQuorum\" ), getClass()\n            .getMethod( \"setHbaseZookeeperQuorum\", String.class ) ) );\n      items.add( index + 1, new ArgumentWrapper( HBASE_ZOOKEEPER_CLIENT_PORT, BaseMessages.getString( getClass(),\n          \"HBaseZookeeperClientPort.Label\" ),\n          false, \"\", 0, this, getClass().getMethod( \"getHbaseZookeeperClientPort\" ),\n          getClass().getMethod( \"setHbaseZookeeperClientPort\", String.class ) ) );\n    } catch ( NoSuchMethodException ex ) {\n      throw new RuntimeException( ex );\n    }\n\n    return items;\n  }\n\n  public String getAsParquetfile() {\n    return asParquetfile;\n  }\n\n  public void setAsParquetfile( String asParquetfile ) {\n    this.asParquetfile = propertyChange( AS_PARQUETFILE, this.asParquetfile, asParquetfile );\n  }\n\n  public String getDeleteTargetDir() {\n    return deleteTargetDir;\n  }\n\n  public void setDeleteTargetDir( String deleteTargetDir ) {\n    this.deleteTargetDir = propertyChange( DELETE_TARGET_DIR, this.deleteTargetDir, deleteTargetDir );\n  }\n\n  public String getFetchSize() {\n    return fetchSize;\n  }\n\n  public void setFetchSize( String fetchSize ) {\n    this.fetchSize = propertyChange( FETCH_SIZE, this.fetchSize, fetchSize );\n  }\n\n  public String getMergeKey() {\n    return mergeKey;\n  }\n\n  public void setMergeKey( String mergeKey ) {\n    this.mergeKey = propertyChange( MERGE_KEY, this.mergeKey, mergeKey );\n  }\n\n  public String getHiveDatabase() {\n    return hiveDatabase;\n  }\n\n  public void setHiveDatabase( String 
hiveDatabase ) {\n    this.hiveDatabase = propertyChange( HIVE_DATABASE, this.hiveDatabase, hiveDatabase );\n  }\n\n  public String getHbaseBulkload() {\n    return hbaseBulkload;\n  }\n\n  public void setHbaseBulkload( String hbaseBulkload ) {\n    this.hbaseBulkload = propertyChange( HBASE_BULKLOADER, this.hbaseBulkload, hbaseBulkload );\n  }\n\n  public String getCreateHcatalogTable() {\n    return createHcatalogTable;\n  }\n\n  public void setCreateHcatalogTable( String createHcatalogTable ) {\n    this.createHcatalogTable = propertyChange( CREATE_HCATALOG_TABLE, this.createHcatalogTable, createHcatalogTable );\n  }\n\n  public String getHcatalogStorageStanza() {\n    return hcatalogStorageStanza;\n  }\n\n  public void setHcatalogStorageStanza( String hcatalogStorageStanza ) {\n    this.hcatalogStorageStanza =\n      propertyChange( HCATALOG_STORAGE_STANZA, this.hcatalogStorageStanza, hcatalogStorageStanza );\n  }\n\n  public String getAccumuloBatchSize() {\n    return accumuloBatchSize;\n  }\n\n  public void setAccumuloBatchSize( String accumuloBatchSize ) {\n    this.accumuloBatchSize = propertyChange( ACCUMULO_BATCH_SIZE, this.accumuloBatchSize, accumuloBatchSize );\n  }\n\n  public String getAccumuloColumnFamily() {\n    return accumuloColumnFamily;\n  }\n\n  public void setAccumuloColumnFamily( String accumuloColumnFamily ) {\n    this.accumuloColumnFamily =\n      propertyChange( ACCUMULO_COLUMN_FAMILY, this.accumuloColumnFamily, accumuloColumnFamily );\n  }\n\n  public String getAccumuloCreateTable() {\n    return accumuloCreateTable;\n  }\n\n  public void setAccumuloCreateTable( String accumuloCreateTable ) {\n    this.accumuloCreateTable = propertyChange( ACCUMULO_CREATE_TABLE, this.accumuloCreateTable, accumuloCreateTable );\n  }\n\n  public String getAccumuloInstance() {\n    return accumuloInstance;\n  }\n\n  public void setAccumuloInstance( String accumuloInstance ) {\n    this.accumuloInstance = propertyChange( ACCUMULO_INSTANCE, 
this.accumuloInstance, accumuloInstance );\n  }\n\n  public String getAccumuloMaxLatency() {\n    return accumuloMaxLatency;\n  }\n\n  public void setAccumuloMaxLatency( String accumuloMaxLatency ) {\n    this.accumuloMaxLatency = propertyChange( ACCUMULO_MAX_LATENCY, this.accumuloMaxLatency, accumuloMaxLatency );\n  }\n\n  public String getAccumuloPassword() {\n    return accumuloPassword;\n  }\n\n  public void setAccumuloPassword( String accumuloPassword ) {\n    this.accumuloPassword = propertyChange( ACCUMULO_PASSWORD, this.accumuloPassword, accumuloPassword );\n  }\n\n  public String getAccumuloRowKey() {\n    return accumuloRowKey;\n  }\n\n  public void setAccumuloRowKey( String accumuloRowKey ) {\n    this.accumuloRowKey = propertyChange( ACCUMULO_ROW_KEY, this.accumuloRowKey, accumuloRowKey );\n  }\n\n  public String getAccumuloTable() {\n    return accumuloTable;\n  }\n\n  public void setAccumuloTable( String accumuloTable ) {\n    this.accumuloTable = propertyChange( ACCUMULO_TABLE, this.accumuloTable, accumuloTable );\n  }\n\n  public String getAccumuloUser() {\n    return accumuloUser;\n  }\n\n  public void setAccumuloUser( String accumuloUser ) {\n    this.accumuloUser = propertyChange( ACCUMULO_USER, this.accumuloUser, accumuloUser );\n  }\n\n  public String getAccumuloVisibility() {\n    return accumuloVisibility;\n  }\n\n  public void setAccumuloVisibility( String accumuloVisibility ) {\n    this.accumuloVisibility = propertyChange( ACCUMULO_VISIBILITY, this.accumuloVisibility, accumuloVisibility );\n  }\n\n  public String getAccumuloZookeepers() {\n    return accumuloZookeepers;\n  }\n\n  public void setAccumuloZookeepers( String accumuloZookeepers ) {\n    this.accumuloZookeepers = propertyChange( ACCUMULO_ZOOKEPERS, this.accumuloZookeepers, accumuloZookeepers );\n  }\n\n  public String getHs2Url() {\n    return hs2Url;\n  }\n\n  public void setHs2Url( String hs2Url ) {\n    this.hs2Url = propertyChange( HS2_URL, this.hs2Url, hs2Url );\n  }\n\n  
public String getHs2User() {\n    return hs2User;\n  }\n\n  public void setHs2User( String hs2User ) {\n    this.hs2User = propertyChange( HS2_USER, this.hs2User, hs2User );\n  }\n\n  public String getHs2Password() {\n    return hs2Password;\n  }\n\n  public void setHs2Password( String hs2Password ) {\n    this.hs2Password = propertyChange( HS2_PASSWORD, this.hs2Password, hs2Password );\n  }\n\n  public String getHs2Keytab() {\n    return hs2Keytab;\n  }\n\n  public void setHs2Keytab( String hs2Keytab ) {\n    this.hs2Keytab = propertyChange( HS2_KEYTAB, this.hs2Keytab, hs2Keytab );\n  }\n\n  public String getExternalTableDir() {\n    return externalTableDir;\n  }\n\n  public void setExternalTableDir( String externalTableDir ) {\n    this.externalTableDir = propertyChange( EXTERNAL_TABLE_DIR, this.externalTableDir, externalTableDir );\n  }\n\n  public String getHbaseNullIncrementalMode() {\n    return hbaseNullIncrementalMode;\n  }\n\n  public void setHbaseNullIncrementalMode( String hbaseNullIncrementalMode ) {\n    this.hbaseNullIncrementalMode =\n      propertyChange( HBASE_NULL_INCREMENTAL_MODE, this.hbaseNullIncrementalMode, hbaseNullIncrementalMode );\n  }\n\n  public String getHcatalogExternalTable() {\n    return hcatalogExternalTable;\n  }\n\n  public void setHcatalogExternalTable( String hcatalogExternalTable ) {\n    this.hcatalogExternalTable =\n      propertyChange( HCATALOG_EXTERNAL_TABLE, this.hcatalogExternalTable, hcatalogExternalTable );\n  }\n\n  public String getTemporaryRootdir() {\n    return temporaryRootdir;\n  }\n\n  public void setTemporaryRootdir( String temporaryRootdir ) {\n    this.temporaryRootdir = propertyChange( TEMPORARY_ROOTDIR, this.temporaryRootdir, temporaryRootdir );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/SqoopImportJobEntry.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.di.core.annotations.JobEntry;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\nimport org.pentaho.runtime.test.action.impl.RuntimeTestActionServiceImpl;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\n\nimport java.util.Properties;\n\n/**\n * Provides a way to orchestrate <a href=\"http://sqoop.apache.org/\">Sqoop</a> imports.\n */\n@JobEntry( id = \"SqoopImport\", name = \"Sqoop.Import.PluginName\", description = \"Sqoop.Import.PluginDescription\",\n    categoryDescription = \"i18n:org.pentaho.di.job:JobCategory.Category.BigData\", image = \"sqoop-import.svg\",\n    i18nPackageName = \"org.pentaho.di.job.entries.sqoop\", version = \"1\" )\npublic class SqoopImportJobEntry extends AbstractSqoopJobEntry<SqoopImportConfig> {\n\n  public SqoopImportJobEntry() {\n    super( NamedClusterManager.getInstance(),\n            BigDataServicesHelper.getNamedClusterServiceLocator(),\n      RuntimeTestActionServiceImpl.getInstance(), RuntimeTesterImpl.getInstance() );\n  }\n\n  public SqoopImportJobEntry( NamedClusterService namedClusterService,\n                              NamedClusterServiceLocator 
serviceLocator,\n                              RuntimeTestActionService runtimeTestActionService,\n                              RuntimeTester runtimeTester ) {\n    super( namedClusterService, serviceLocator, runtimeTestActionService, runtimeTester );\n  }\n\n  @Override protected SqoopImportConfig createJobConfig() {\n    return new SqoopImportConfig( this );\n  }\n\n  /**\n   * @return the name of the Sqoop import tool: \"import\"\n   */\n  @Override\n  protected String getToolName() {\n    return \"import\";\n  }\n\n  @Override\n  public void configure( SqoopImportConfig sqoopConfig, Properties properties ) throws KettleException {\n    super.configure( sqoopConfig, properties );\n    if ( sqoopConfig.getHbaseZookeeperQuorum() != null ) {\n      properties.put( \"hbase.zookeeper.quorum\", environmentSubstitute( sqoopConfig.getHbaseZookeeperQuorum() ) );\n    }\n    if ( sqoopConfig.getHbaseZookeeperClientPort() != null ) {\n      properties.put( \"hbase.zookeeper.property.clientPort\",\n          environmentSubstitute( sqoopConfig.getHbaseZookeeperClientPort() ) );\n    }\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/SqoopImportJobEntryDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.big.data.kettle.plugins.sqoop.ui.AbstractSqoopJobEntryDialog;\nimport org.pentaho.big.data.kettle.plugins.sqoop.ui.SqoopImportJobEntryController;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.binding.BindingFactory;\n\n/**\n * Dialog for the Sqoop Import Job Entry\n * \n * @see SqoopImportJobEntry\n */\n@PluginDialog( id = \"SqoopImport\", image = \"sqoop-import.svg\", pluginType = PluginDialog.PluginType.JOBENTRY,\n        documentationUrl = \"pdi-job-entries-reference-overview/sqoop-import-job\" )\npublic class SqoopImportJobEntryDialog extends AbstractSqoopJobEntryDialog<SqoopImportConfig, SqoopImportJobEntry> {\n\n  public SqoopImportJobEntryDialog( Shell parent, JobEntryInterface jobEntry, Repository rep, JobMeta jobMeta )\n    throws XulException {\n    super( parent, jobEntry, rep, jobMeta );\n  }\n\n  @Override\n  protected Class<?> getMessagesClass() {\n    return SqoopImportJobEntry.class;\n  }\n\n  @Override\n  protected SqoopImportJobEntryController createController( XulDomContainer container, SqoopImportJobEntry jobEntry,\n                                                            BindingFactory bindingFactory ) {\n    return new SqoopImportJobEntryController( jobMeta, container, 
jobEntry, bindingFactory );\n  }\n\n  @Override\n  protected String getXulFile() {\n    return \"org/pentaho/big/data/kettle/plugins/sqoop/xul/SqoopImportJobEntry.xul\";\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/SqoopLog4jFilter.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport org.apache.logging.log4j.core.LogEvent;\nimport org.apache.logging.log4j.core.filter.AbstractFilter;\n\n\n\npublic class SqoopLog4jFilter extends AbstractFilter {\n  String logChannelId;\n\n  public SqoopLog4jFilter( String logChannelId ) {\n    this.logChannelId = logChannelId;\n  }\n\n  @Override\n  public Result filter(LogEvent event) {\n    if ( logChannelId.equals( event.getContextData().getValue( \"logChannelId\" ) ) ) {\n      return Result.NEUTRAL;\n    }\n    return Result.DENY;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/SqoopUtils.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport org.apache.commons.lang.StringUtils;\nimport org.pentaho.big.data.kettle.plugins.job.JobEntryMode;\nimport org.pentaho.big.data.kettle.plugins.job.PropertyEntry;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.i18n.BaseMessages;\n\nimport java.io.IOException;\nimport java.io.StreamTokenizer;\nimport java.io.StringReader;\nimport java.lang.reflect.Field;\nimport java.lang.reflect.Method;\nimport java.util.ArrayList;\nimport java.util.Comparator;\nimport java.util.HashMap;\nimport java.util.Iterator;\nimport java.util.LinkedHashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Set;\nimport java.util.TreeSet;\nimport java.util.regex.Pattern;\n\n\n/**\n * Collection of utility methods used to support integration with Apache Sqoop.\n */\npublic class SqoopUtils {\n  /**\n   * Prefix to append before an argument's name when building up a list of command-line arguments, e.g. 
\"--\"\n   */\n  public static final String ARG_PREFIX = \"--\";\n  public static final String ARG_PREFIX_1 = \"-\";\n  public static final String ARG_D = \"-D\";\n\n  // Properties used to escape/unescape strings for command line string (de)serialization\n  private static final String WHITESPACE = \" \";\n  private static final String EQUALS = \"=\";\n  private static final String QUOTE = \"\\\"\";\n  private static final Pattern WHITESPACE_PATTERN = Pattern.compile( \" \" );\n  private static final Pattern QUOTE_PATTERN = Pattern.compile( \"\\\"\" );\n  private static final Pattern BACKSLASH_PATTERN = Pattern.compile( \"\\\\\\\\\" );\n  private static final Pattern EQUALS_PATTERN = Pattern.compile( \"=\" );\n  // Simple map of Patterns that match an escape sequence and a replacement string to replace them with to escape them\n  private static final Object[][] ESCAPE_SEQUENCES = new Object[][] {\n    new Object[] { Pattern.compile( \"\\t\" ), \"\\\\\\\\t\" }, new Object[] { Pattern.compile( \"\\b\" ), \"\\\\\\\\b\" },\n    new Object[] { Pattern.compile( \"\\n\" ), \"\\\\\\\\n\" }, new Object[] { Pattern.compile( \"\\r\" ), \"\\\\\\\\r\" },\n    new Object[] { Pattern.compile( \"\\f\" ), \"\\\\\\\\f\" } };\n\n  /**\n   * Parse a string into arguments as if it were provided on the command line.\n   *\n   * @param commandLineString\n   *          A command line string, e.g. \"sqoop import --table test --connect jdbc:mysql://bogus/bogus\"\n   * @param variableSpace\n   *          Context for resolving variable names. If {@code null}, no variable resolution will happen.\n   * @param ignoreSqoopCommand\n   *          If set, the first \"sqoop <tool>\" arguments will be ignored, e.g. 
\"sqoop import\" or \"sqoop export\".\n   * @return List of parsed arguments\n   * @throws IOException\n   *           when the command line could not be parsed\n   */\n  public static List<String> parseCommandLine( String commandLineString, VariableSpace variableSpace,\n      boolean ignoreSqoopCommand ) throws IOException {\n    List<String> args = new ArrayList<String>();\n    StringReader reader = new StringReader( commandLineString );\n    try {\n      StreamTokenizer tokenizer = new StreamTokenizer( reader );\n      // Treat a dash as an ordinary character so it gets included in the token\n      tokenizer.ordinaryChar( '-' );\n      tokenizer.ordinaryChar( '.' );\n      tokenizer.ordinaryChars( '0', '9' );\n      // Treat all characters as word characters so nothing is parsed out\n      tokenizer.wordChars( '\\u0000', '\\uFFFF' );\n\n      // Re-add whitespace characters\n      tokenizer.whitespaceChars( 0, ' ' );\n\n      // Use \" and ' as quote characters\n      tokenizer.quoteChar( '\"' );\n      tokenizer.quoteChar( '\\'' );\n\n      // Flag to indicate if the next token needs to be skipped (used to control skipping of the first two arguments,\n      // e.g. 
\"sqoop <tool>\")\n      boolean skipToken = false;\n      // Add all non-null string values tokenized from the string to the argument list\n      while ( tokenizer.nextToken() != StreamTokenizer.TT_EOF ) {\n        if ( tokenizer.sval != null ) {\n          String s = tokenizer.sval;\n          if ( variableSpace != null ) {\n            s = variableSpace.environmentSubstitute( s );\n          }\n          if ( ignoreSqoopCommand && args.isEmpty() ) {\n            // If we encounter \"sqoop <name>\" we should skip the first two arguments so we can support copy/paste of\n            // arguments directly\n            // from a working command line\n            if ( \"sqoop\".equals( s ) ) {\n              skipToken = true;\n              continue; // skip this one and the next\n            } else if ( skipToken ) {\n              ignoreSqoopCommand = false; // Don't attempt to ignore any more commands\n              // Skip this token too, reset the flag so we no longer skip any tokens, and continue parsing\n              skipToken = false;\n              continue;\n            }\n          }\n\n          if ( s.startsWith( ARG_D ) ) {\n            handleCustomOption( args, s, tokenizer, variableSpace );\n            continue;\n          }\n          args.add( escapeEscapeSequences( s ) );\n        }\n      }\n    } finally {\n      reader.close();\n    }\n    return args;\n  }\n\n  /**\n   * Configure a {@link SqoopConfig} object from a command line string. 
Variables will be replaced if\n   * {@code variableSpace} is provided.\n   *\n   * @param config\n   *          Configuration to update\n   * @param commandLineString\n   *          Command line string to parse and update config with (string will be parsed via\n   *          {@link #parseCommandLine(String, org.pentaho.di.core.variables.VariableSpace, boolean)})\n   * @param variableSpace\n   *          Context for variable substitution\n   * @throws IOException\n   *           error parsing command line string\n   * @throws KettleException\n   *           Error setting properties from parsed command line arguments\n   */\n  public static void configureFromCommandLine(\n      SqoopConfig config, String commandLineString, VariableSpace variableSpace ) throws IOException, KettleException {\n    List<String> args = parseCommandLine( commandLineString, variableSpace, true );\n\n    Map<String, String> argValues = new HashMap<>();\n    // save the order\n    Map<String, String> customArgValues = new LinkedHashMap<>();\n    int i = 0;\n    int peekAhead = i;\n    while ( i < args.size() ) {\n      String arg = args.get( i );\n      int prefLen = isArgName( arg );\n      if ( prefLen > 0 ) {\n        arg = arg.substring( prefLen );\n      }\n\n      String value = null;\n      peekAhead = i + 1;\n      if ( peekAhead < args.size() ) {\n        value = args.get( peekAhead );\n      }\n\n      if ( ARG_D.equals( arg ) ) {\n        int index = value.indexOf( EQUALS );\n        String customArg = value.substring( 0, index );\n        String customValue = value.substring( index + 1 );\n\n        // Substitute variables in the key and value independently\n        if ( variableSpace != null ) {\n          customArg = variableSpace.environmentSubstitute( customArg );\n          customValue = variableSpace.environmentSubstitute( customValue );\n        }\n\n        customArgValues.put( customArg, customValue );\n        i += 2;\n        continue;\n      }\n\n      if ( isArgName( value ) > 0 ) {\n        // Current arg is possibly a boolean flag, set value 
to null now\n        value = null;\n        // We're only consuming one element\n        i += 1;\n      } else {\n        // value is a real value, make sure to substitute variables if we can\n        if ( variableSpace != null ) {\n          value = variableSpace.environmentSubstitute( value );\n        }\n        i += 2;\n      }\n\n      argValues.put( arg, value );\n    }\n\n    setArgumentStringValues( config, argValues );\n    setCustomArgumentStringValues( config, customArgValues );\n  }\n\n  /**\n   * Does the string represent an argument name as provided on the command line? Format: \"--argname\"\n   *\n   * @param s\n   *          Possible argument name\n   * @return the length of the argument name prefix (ARG_PREFIX or ARG_PREFIX_1) if the string represents an\n   *         argument name, otherwise {@code 0}\n   */\n  private static int isArgName( String s ) {\n    if ( s != null ) {\n      if ( s.startsWith( ARG_PREFIX ) && s.length() > ARG_PREFIX.length() ) {\n        return ARG_PREFIX.length();\n      }\n      if ( ARG_D.equals( s ) ) {\n        return 0;\n      }\n      if ( s.startsWith( ARG_PREFIX_1 ) && s.length() > ARG_PREFIX_1.length() ) {\n        return ARG_PREFIX_1.length();\n      }\n    }\n\n    return 0;\n  }\n\n  /**\n   * Updates arguments of {@code config} based on the map of argument values. 
All other arguments will be cleared from\n   * {@code config}.\n   *\n   * @param config\n   *          Configuration object to update\n   * @param args\n   *          Argument name and value pairs\n   * @throws KettleException\n   *           when we cannot set the value of the argument either because it doesn't exist or any other reason\n   */\n  protected static void setArgumentStringValues( SqoopConfig config, Map<String, String> args ) throws KettleException {\n    Class<?> aClass = config.getClass();\n\n    while ( aClass != null ) {\n      for ( Field field : aClass.getDeclaredFields() ) {\n        if ( field.isAnnotationPresent( CommandLineArgument.class ) ) {\n          CommandLineArgument arg = field.getAnnotation( CommandLineArgument.class );\n\n          String value = pickupArgumentValueFor( arg, args );\n\n          try {\n            String fieldName = field.getName().substring( 0, 1 ).toUpperCase() + field.getName().substring( 1 );\n            Method setter = findMethod( config.getClass(), fieldName, new Class[] { String.class }, \"set\" );\n            setter.invoke( config, value );\n          } catch ( Exception ex ) {\n            throw new KettleException( \"Cannot set value of argument \\\"\" + arg.name() + \"\\\" to \\\"\" + value + \"\\\"\", ex );\n          }\n        }\n      }\n      aClass = aClass.getSuperclass();\n    }\n\n    // If any arguments weren't handled report them as errors\n    if ( !args.isEmpty() ) {\n      StringBuilder sb = new StringBuilder();\n      Iterator<String> i = args.keySet().iterator();\n      while ( i.hasNext() ) {\n        sb.append( i.next() );\n        if ( i.hasNext() ) {\n          sb.append( \", \" );\n        }\n      }\n      throw new KettleException( BaseMessages.getString( AbstractSqoopJobEntry.class, \"ErrorUnknownArguments\", sb ) );\n    }\n  }\n\n  private static void setCustomArgumentStringValues( SqoopConfig config, Map<String, String> customArgValues ) {\n    
config.getCustomArguments().clear();\n\n    for ( Iterator<Map.Entry<String, String>> iterator = customArgValues.entrySet().iterator(); iterator.hasNext(); ) {\n      Map.Entry<String, String> entry = iterator.next();\n      config.getCustomArguments().add( new PropertyEntry( entry.getKey(), entry.getValue() ) );\n    }\n  }\n\n  private static String pickupArgumentValueFor( CommandLineArgument arg, Map<String, String> args )\n    throws KettleException {\n    String argumentName = arg.name();\n    if ( args.containsKey( argumentName ) ) {\n\n      // Remove the value from the map to indicate it has been processed\n      String value = args.remove( argumentName );\n\n      if ( arg.flag() ) {\n        return Boolean.TRUE.toString();\n      }\n\n      if ( StringUtil.isEmpty( value ) ) {\n        throw new KettleException( BaseMessages.getString( AbstractSqoopJobEntry.class, \"ErrorProhibitedEmptyString\",\n            argumentName ) );\n      }\n\n      return value;\n    }\n\n    return null;\n  }\n\n  /**\n   * Generate a list of command line arguments and their values for arguments that require them.\n   *\n   * @param config\n   *          Sqoop configuration to build a list of command line arguments from\n   * @param variableSpace\n   *          Variable space to look up argument values from. 
May be {@code null}\n   * @return All the command line arguments for this configuration object\n   * @throws IOException\n   *           when config mode is {@link JobEntryMode#ADVANCED_COMMAND_LINE} and the command line\n   *           could not be parsed\n   */\n  public static List<String> getCommandLineArgs( SqoopConfig config, VariableSpace variableSpace ) throws IOException {\n    List<String> args = new ArrayList<String>();\n\n    if ( JobEntryMode.ADVANCED_COMMAND_LINE.equals( config.getModeAsEnum() ) ) {\n      return parseCommandLine( config.getCommandLine(), variableSpace, true );\n    } else {\n\n      appendCustomArguments( args, config, variableSpace );\n      appendArguments( args, SqoopUtils.findAllArguments( config ), variableSpace );\n\n      return args;\n    }\n  }\n\n  /**\n   * Generate a command line string for the given configuration. Replace variables with the values from\n   * {@code variableSpace} if provided.\n   *\n   * @param config\n   *          Sqoop configuration\n   * @param variableSpace\n   *          Context for variable substitutions\n   * @return String-representation of the current configuration values. 
Variable tokens will be replaced if\n   *         {@code variableSpace} is provided.\n   */\n  public static String generateCommandLineString( SqoopConfig config, VariableSpace variableSpace ) {\n    StringBuilder sb = new StringBuilder();\n    List<List<String>> buffers = new ArrayList<List<String>>();\n    List<String> customBuffer = new ArrayList<String>();\n\n    // Add custom arguments as they must appear before tool specific arguments\n    for ( PropertyEntry entry : config.getCustomArguments() ) {\n      appendCustomArgument( customBuffer, entry, variableSpace, true );\n    }\n\n    for ( Iterator<String> iterator = customBuffer.iterator(); iterator.hasNext(); ) {\n      sb.append( iterator.next() );\n      if ( iterator.hasNext() ) {\n        sb.append( WHITESPACE );\n      }\n    }\n\n    for ( ArgumentWrapper arg : SqoopUtils.findAllArguments( config ) ) {\n      List<String> buffer = new ArrayList<String>( 4 );\n      appendArgument( buffer, arg, variableSpace );\n      if ( !buffer.isEmpty() ) {\n        buffers.add( buffer );\n      }\n    }\n\n    if ( !customBuffer.isEmpty() && !buffers.isEmpty() ) {\n      sb.append( WHITESPACE );\n    }\n\n    Iterator<List<String>> buffersIter = buffers.iterator();\n    while ( buffersIter.hasNext() ) {\n      List<String> buffer = buffersIter.next();\n      sb.append( buffer.get( 0 ) );\n      if ( buffer.size() == 2 ) {\n        sb.append( WHITESPACE );\n        // Escape value and add\n        sb.append( quote( escapeBackslash( buffer.get( 1 ) ) ) );\n      }\n      if ( buffersIter.hasNext() ) {\n        sb.append( WHITESPACE );\n      }\n    }\n\n    return sb.toString();\n  }\n\n  /**\n   * Escapes known Java escape sequences. 
See {@link #ESCAPE_SEQUENCES} for the list of escape sequences we escape here.\n   *\n   * @param s\n   *          String to escape\n   * @return Escaped string where all escape sequences are properly escaped\n   */\n  protected static String escapeEscapeSequences( String s ) {\n    for ( Object[] escapeSequence : ESCAPE_SEQUENCES ) {\n      s = ( (Pattern) escapeSequence[0] ).matcher( s ).replaceAll( (String) escapeSequence[1] );\n    }\n    return s;\n  }\n\n  /**\n   * If any whitespace is detected the string will be quoted. If any quotes exist in the string they will be escaped.\n   *\n   * @param s\n   *          String to quote\n   * @return A quoted version of {@code s} if whitespace exists in the string, otherwise unmodified {@code s}.\n   */\n  protected static String quote( String s ) {\n    final String orig = s;\n    s = QUOTE_PATTERN.matcher( s ).replaceAll( \"\\\\\\\\\\\"\" );\n    // Make sure the string is quoted if it contains a quote character, whitespace or has a backslash\n    if ( !orig.equals( s ) || WHITESPACE_PATTERN.matcher( s ).find() || BACKSLASH_PATTERN.matcher( s ).find() || EQUALS_PATTERN.matcher( s ).find() ) {\n      s = QUOTE + s + QUOTE;\n    }\n    return s;\n  }\n\n  /**\n   * Add all {@link ArgumentWrapper}s to a list of arguments\n   *\n   * @param args\n   *          Arguments to append to\n   * @param arguments\n   *          Arguments to append\n   * @param variableSpace\n   *          Variable space to look up argument values from. May be {@code null}.\n   */\n  protected static void appendArguments( List<String> args, Set<? 
extends ArgumentWrapper> arguments,\n      VariableSpace variableSpace ) {\n    for ( ArgumentWrapper ai : arguments ) {\n      appendArgument( args, ai, variableSpace );\n    }\n  }\n\n  /**\n   * Append this argument to a list of arguments if it has a value or if it's a flag.\n   *\n   * @param args\n   *          List of arguments to append to\n   * @param arg\n   *          Argument to append\n   * @param variableSpace\n   *          Variable space to look up argument values from. May be {@code null}.\n   */\n  protected static void appendArgument( List<String> args, ArgumentWrapper arg, VariableSpace variableSpace ) {\n    String value = arg.getValue();\n    if ( variableSpace != null ) {\n      value = variableSpace.environmentSubstitute( value );\n    }\n    if ( arg.getName().equals( \"password\" ) ) {\n      value = Encr.decryptPasswordOptionallyEncrypted( value );\n    }\n    if ( arg.isFlag() && Boolean.parseBoolean( value ) ) {\n      args.add( arg.getPrefix() + arg.getName() );\n    } else if ( !arg.isFlag() && value != null ) {\n      if ( !StringUtil.isEmpty( value ) ) {\n        args.add( arg.getPrefix() + arg.getName() );\n        args.add( value );\n      }\n    }\n  }\n\n  private static void appendCustomArguments( List<String> args, SqoopConfig config, VariableSpace variableSpace ) {\n    for ( PropertyEntry entry : config.getCustomArguments() ) {\n      appendCustomArgument( args, entry, variableSpace, false );\n    }\n  }\n\n  private static void appendCustomArgument( List<String> args, PropertyEntry arg, VariableSpace variableSpace, boolean quote ) {\n    String key = arg.getKey();\n    String value = arg.getValue();\n\n    // ignore if both key and value are blank\n    if ( StringUtils.isBlank( key ) && StringUtils.isBlank( value ) ) {\n      return;\n    }\n\n    key = StringUtils.defaultIfBlank( arg.getKey(), \"null\" );\n    value = StringUtils.defaultIfBlank( arg.getValue(), \"null\" );\n\n    if ( variableSpace != null ) {\n      key = variableSpace.environmentSubstitute( key );\n      value = variableSpace.environmentSubstitute( value );\n    }\n\n    if ( quote ) {\n      value = quote( 
escapeBackslash( value ) );\n    }\n\n    args.add( ARG_D );\n    args.add( key + EQUALS + value );\n  }\n\n  private static String escapeBackslash( String s ) {\n    return BACKSLASH_PATTERN.matcher( s ).replaceAll( \"\\\\\\\\\\\\\\\\\" );\n  }\n\n  private static void handleCustomOption( List<String> args, String option, StreamTokenizer tokenizer, VariableSpace variableSpace ) throws IOException {\n    String key = null;\n    String value = null;\n\n    args.add( ARG_D );\n    if ( ARG_D.equals( option ) ) {\n      tokenizer.nextToken();\n      key = tokenizer.sval;\n    } else {\n      key = option.substring( ARG_D.length() );\n    }\n\n    if ( key.contains( EQUALS ) ) {\n      if ( key.endsWith( EQUALS ) ) {\n        key = key.substring( 0, key.length() - 1 );\n        tokenizer.nextToken();\n        value = tokenizer.sval;\n      } else {\n        // Split on the first '=' only so values that themselves contain '=' are preserved\n        String[] split = key.split( EQUALS, 2 );\n        key = split[0];\n        value = split[1];\n      }\n    } else {\n      tokenizer.nextToken();\n      value = tokenizer.sval;\n    }\n    if ( variableSpace != null ) {\n      key = variableSpace.environmentSubstitute( key );\n      value = variableSpace.environmentSubstitute( value );\n    }\n    args.add( key + EQUALS + escapeEscapeSequences( value ) );\n  }\n\n  /**\n   * Find all fields annotated with {@link CommandLineArgument} in the class provided. All arguments must have valid\n   * JavaBeans-style getter and setter methods in the object.\n   *\n   * @param o\n   *          Object to look for arguments in\n   * @return Ordered set of arguments representing all {@link CommandLineArgument}-annotated fields in {@code o}\n   */\n  public static Set<? 
extends ArgumentWrapper> findAllArguments( Object o ) {\n    Set<ArgumentWrapper> arguments = new TreeSet<ArgumentWrapper>(\n        new Comparator<ArgumentWrapper>() {\n          @Override\n          /*\n           * Sort by order then by name\n           */\n          public int compare( ArgumentWrapper o1, ArgumentWrapper o2 ) {\n            int diff = o1.getOrder() - o2.getOrder();\n            if ( diff != 0 ) {\n              return diff;\n            }\n\n            return o1.getName().compareTo( o2.getName() );\n          }\n        }\n    );\n\n    Class<?> aClass = o.getClass();\n    while ( aClass != null ) {\n      for ( Field f : aClass.getDeclaredFields() ) {\n        if ( f.isAnnotationPresent( CommandLineArgument.class ) ) {\n          CommandLineArgument anno = f.getAnnotation( CommandLineArgument.class );\n          String fieldName = f.getName().substring( 0, 1 ).toUpperCase() + f.getName().substring( 1 );\n          Method getter = findMethod( aClass, fieldName, null, \"get\", \"is\" );\n          Method setter = findMethod( aClass, fieldName, new Class<?>[] { f.getType() }, \"set\" );\n          arguments.add( new ArgumentWrapper( anno.name(), getDisplayName( anno ), anno.flag(),\n              anno.prefix(), anno.order(), o, getter, setter ) );\n        }\n      }\n      aClass = aClass.getSuperclass();\n    }\n\n    return arguments;\n  }\n\n  /**\n   * Determine the display name for the command line argument.\n   *\n   * @param anno\n   *          Command line argument\n   * @return {@link CommandLineArgument#displayName()} or, if not set,\n   *         {@link CommandLineArgument#name()}\n   */\n  public static String getDisplayName( CommandLineArgument anno ) {\n    return StringUtil.isEmpty( anno.displayName() ) ? 
anno.name() : anno.displayName();\n  }\n\n  /**\n   * Finds a method in the given class or any super class with the name {@code prefix + methodName} whose signature\n   * matches {@code parameterTypes}.\n   *\n   * @param aClass\n   *          Class to search for method in\n   * @param methodName\n   *          CamelCased method name to search for with any of the provided prefixes\n   * @param parameterTypes\n   *          The parameter types the method signature must match.\n   * @param prefixes\n   *          Prefixes to prepend to {@code methodName} when searching for method names, e.g. \"get\", \"is\"\n   * @return The first method found to match the format {@code prefix + methodName}, or {@code null} if no such\n   *         method exists\n   */\n  public static Method findMethod( Class<?> aClass, String methodName, Class<?>[] parameterTypes, String... prefixes ) {\n    for ( String prefix : prefixes ) {\n      try {\n        return aClass.getDeclaredMethod( prefix + methodName, parameterTypes );\n      } catch ( NoSuchMethodException ex ) {\n        // ignore, continue searching prefixes\n      }\n    }\n    // If no method was found with any of the prefixes, search the super class\n    aClass = aClass.getSuperclass();\n    return aClass == null ? null : findMethod( aClass, methodName, parameterTypes, prefixes );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/ui/AbstractSqoopJobEntryController.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop.ui;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.big.data.kettle.plugins.hdfs.vfs.HadoopVfsFileChooserDialog;\nimport org.pentaho.big.data.kettle.plugins.hdfs.vfs.Schemes;\nimport org.pentaho.big.data.kettle.plugins.job.AbstractJobEntryController;\nimport org.pentaho.big.data.kettle.plugins.job.BlockableJobConfig;\nimport org.pentaho.big.data.kettle.plugins.job.JobEntryMode;\nimport org.pentaho.big.data.kettle.plugins.job.PropertyEntry;\nimport org.pentaho.big.data.kettle.plugins.sqoop.AbstractSqoopJobEntry;\nimport org.pentaho.big.data.kettle.plugins.sqoop.ArgumentWrapper;\nimport org.pentaho.big.data.kettle.plugins.sqoop.DatabaseItem;\nimport org.pentaho.big.data.kettle.plugins.sqoop.SqoopConfig;\nimport org.pentaho.big.data.kettle.plugins.sqoop.SqoopUtils;\nimport org.pentaho.big.data.plugins.common.ui.HadoopClusterDelegateImpl;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.database.Database;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleDatabaseException;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport 
org.pentaho.di.ui.core.database.dialog.DatabaseDialog;\nimport org.pentaho.di.ui.core.database.dialog.DatabaseExplorerDialog;\nimport org.pentaho.di.ui.core.dialog.EnterSelectionDialog;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\nimport org.pentaho.ui.xul.XulComponent;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.binding.Binding;\nimport org.pentaho.ui.xul.binding.BindingConvertor;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.ui.xul.components.XulButton;\nimport org.pentaho.ui.xul.components.XulMenuList;\nimport org.pentaho.ui.xul.containers.XulDeck;\nimport org.pentaho.ui.xul.containers.XulDialog;\nimport org.pentaho.ui.xul.containers.XulTree;\nimport org.pentaho.ui.xul.util.AbstractModelList;\nimport org.pentaho.vfs.ui.CustomVfsUiPanel;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\nimport java.util.Collection;\nimport java.util.List;\n\n/**\n * Base functionality to support a Sqoop job entry controller that provides most of the common functionality to back a\n * XUL-based dialog.\n *\n * @param <S>\n *          Type of Sqoop configuration object this controller depends upon. 
Must match the configuration object the job\n *          entry expects.\n */\npublic abstract class AbstractSqoopJobEntryController<S extends SqoopConfig, E extends AbstractSqoopJobEntry<S>>\n    extends AbstractJobEntryController<S, E> {\n  public static final String SELECTED_DATABASE_CONNECTION = \"selectedDatabaseConnection\";\n  public static final String MODE_TOGGLE_LABEL = \"modeToggleLabel\";\n  protected static Class<?> PKG = AbstractSqoopJobEntry.class;\n  private final String[] MODE_I18N_STRINGS = new String[] { \"Sqoop.JobEntry.AdvancedOptions.Button.Text\",\n    \"Sqoop.JobEntry.QuickSetup.Button.Text\" };\n  public static final String VALUE = \"value\";\n\n  // This is overwritten in init() with the i18n string\n  protected DatabaseItem NO_DATABASE = new DatabaseItem( \"@@none@@\", \"Choose Available\" );\n\n  // The following is overwritten in init() with the i18n string\n  protected DatabaseItem USE_ADVANCED_OPTIONS = new DatabaseItem( \"@@advanced@@\", \"Use Advanced Options\" );\n\n  private NamedCluster CHOOSE_AVAILABLE_CLUSTER;\n  private NamedCluster USE_ADVANCED_OPTIONS_CLUSTER;\n\n  protected AbstractModelList<ArgumentWrapper> advancedArguments;\n  private AbstractModelList<DatabaseItem> databaseConnections;\n  private AbstractModelList<NamedCluster> namedClusters;\n  private DatabaseItem selectedDatabaseConnection;\n  private DatabaseDialog databaseDialog;\n  protected NamedCluster selectedNamedCluster;\n\n  // Flag to indicate we shouldn't handle any events. 
Useful for preventing unwanted synchronization during\n  // initialization\n  // or other user-driven events.\n  protected boolean suppressEventHandling = false;\n\n  protected final HadoopClusterDelegateImpl ncDelegate;\n\n  /**\n   * The text for the Quick Setup/Advanced Options mode toggle (label).\n   */\n  private String modeToggleLabel;\n  private AdvancedButton selectedAdvancedButton = AdvancedButton.LIST;\n\n  protected enum AdvancedButton {\n    LIST( 0, JobEntryMode.ADVANCED_LIST ),\n    COMMAND_LINE( 1, JobEntryMode.ADVANCED_COMMAND_LINE );\n\n    private int deckIndex;\n    private JobEntryMode mode;\n\n    AdvancedButton( int deckIndex, JobEntryMode mode ) {\n      this.deckIndex = deckIndex;\n      this.mode = mode;\n    }\n\n    public int getDeckIndex() {\n      return deckIndex;\n    }\n\n    public JobEntryMode getMode() {\n      return mode;\n    }\n  }\n\n  /**\n   * Creates a new Sqoop job entry controller.\n   *\n   * @param jobMeta\n   *          Meta data for the job\n   * @param container\n   *          Container with dialog for which we will control\n   * @param jobEntry\n   *          Job entry the dialog is being created for\n   * @param bindingFactory\n   *          Binding factory to generate bindings\n   */\n  @SuppressWarnings( \"unchecked\" )\n  public AbstractSqoopJobEntryController( JobMeta jobMeta, XulDomContainer container, E jobEntry,\n      BindingFactory bindingFactory ) {\n    super( jobMeta, container, jobEntry, bindingFactory );\n    this.config = (S) jobEntry.getJobConfig().clone();\n\n    NamedClusterService namedClusterService = jobEntry.getNamedClusterService();\n    this.advancedArguments = new AbstractModelList<ArgumentWrapper>();\n    this.databaseConnections = new AbstractModelList<DatabaseItem>();\n    this.namedClusters = new AbstractModelList<NamedCluster>();\n    ncDelegate = new HadoopClusterDelegateImpl( Spoon.getInstance(), namedClusterService,\n      jobEntry.getRuntimeTestActionService(), 
jobEntry.getRuntimeTester() );\n\n    CHOOSE_AVAILABLE_CLUSTER = namedClusterService.getClusterTemplate();\n    CHOOSE_AVAILABLE_CLUSTER.setName( BaseMessages.getString( AbstractSqoopJobEntry.class,\n        \"DatabaseName.ChooseAvailable\" ) );\n    CHOOSE_AVAILABLE_CLUSTER.setVariable( \"valid\", \"false\" );\n\n    USE_ADVANCED_OPTIONS_CLUSTER = namedClusterService.getClusterTemplate();\n    USE_ADVANCED_OPTIONS_CLUSTER.setName( BaseMessages.getString( AbstractSqoopJobEntry.class,\n        \"DatabaseName.UseAdvancedOptions\" ) );\n    USE_ADVANCED_OPTIONS_CLUSTER.setVariable( \"valid\", \"false\" );\n  }\n\n  /**\n   * @return the element id of the XUL dialog element in the XUL document\n   */\n  public abstract String getDialogElementId();\n\n  /**\n   * Create the necessary XUL {@link Binding}s to support the dialog's desired functionality.\n   *\n   * @param config\n   *          Configuration object to bind to\n   * @param container\n   *          Container with components to bind to\n   * @param bindingFactory\n   *          Binding factory to create bindings with\n   * @param bindings\n   *          Collection to add created bindings to. 
This collection will be initialized via\n   *          {@link org.pentaho.ui.xul.binding.Binding#fireSourceChanged()} upon return.\n   */\n  @Override\n  protected void createBindings( S config, XulDomContainer container, BindingFactory bindingFactory,\n      Collection<Binding> bindings ) {\n\n    bindingFactory.setBindingType( Binding.Type.BI_DIRECTIONAL );\n    bindings.add( bindingFactory.createBinding( config, BlockableJobConfig.JOB_ENTRY_NAME, BlockableJobConfig.JOB_ENTRY_NAME, VALUE ) );\n\n    // TODO Determine if separate schema field is required, this has to be provided as part of the --table argument\n    // anyway.\n    // bindings.add(bindingFactory.createBinding(config, SCHEMA, SCHEMA, VALUE));\n    bindings.add( bindingFactory.createBinding( config, SqoopConfig.TABLE, SqoopConfig.TABLE, VALUE ) );\n\n    bindings.add( bindingFactory.createBinding( config, SqoopConfig.COMMAND_LINE, SqoopConfig.COMMAND_LINE, VALUE ) );\n\n    bindingFactory.setBindingType( Binding.Type.ONE_WAY );\n    bindings.add( bindingFactory.createBinding( this, \"modeToggleLabel\", getModeToggleLabelElementId(), VALUE ) );\n    bindings.add( bindingFactory.createBinding( databaseConnections, \"children\", \"connection\", \"elements\" ) );\n    bindings.add( bindingFactory.createBinding( namedClusters, \"children\", \"named-clusters\", \"elements\" ) );\n\n    XulTree variablesTree = (XulTree) container.getDocumentRoot().getElementById( \"advanced-table\" );\n    bindings.add( bindingFactory.createBinding( advancedArguments, \"children\", variablesTree, \"elements\" ) );\n\n    XulTree customVariablesTree = (XulTree) container.getDocumentRoot().getElementById( \"custom-table\" );\n    bindings.add( bindingFactory.createBinding( config.getCustomArguments(), \"children\", customVariablesTree, \"elements\" ) );\n\n    // Create database/connection sync so that we're notified any time the connect argument is updated\n    bindingFactory.createBinding( config, \"connect\", this, 
\"connectChanged\" );\n    bindingFactory.createBinding( config, \"username\", this, \"usernameChanged\" );\n    bindingFactory.createBinding( config, \"password\", this, \"passwordChanged\" );\n\n    bindingFactory.createBinding( config, SqoopConfig.NAMENODE_HOST, this, \"advancedNamedConfiguration\" );\n    bindingFactory.createBinding( config, SqoopConfig.NAMENODE_PORT, this, \"advancedNamedConfiguration\" );\n    bindingFactory.createBinding( config, SqoopConfig.JOBTRACKER_HOST, this, \"advancedNamedConfiguration\" );\n    bindingFactory.createBinding( config, SqoopConfig.JOBTRACKER_PORT, this, \"advancedNamedConfiguration\" );\n\n    bindingFactory.setBindingType( Binding.Type.BI_DIRECTIONAL );\n    // Specifically create this binding after the databaseConnections binding so the list is populated before we attempt\n    // to select an item\n    bindings.add( bindingFactory.createBinding( this, SELECTED_DATABASE_CONNECTION, \"connection\", \"selectedItem\" ) );\n    bindings.add( bindingFactory.createBinding( \"named-clusters\", \"selectedIndex\", this, \"selectedNamedCluster\",\n        new BindingConvertor<Integer, NamedCluster>() {\n          public NamedCluster sourceToTarget( final Integer index ) {\n            if ( index == -1 ) {\n              return null;\n            }\n            return namedClusters.get( index );\n          }\n\n          public Integer targetToSource( final NamedCluster namedCluster ) {\n            return namedClusters.indexOf( namedCluster );\n          }\n        } ) );\n  }\n\n  protected void initializeNamedClusterSelection() {\n    @SuppressWarnings( \"unchecked\" )\n    XulMenuList<NamedCluster> namedClusterMenu =\n        (XulMenuList<NamedCluster>) container.getDocumentRoot().getElementById( \"named-clusters\" ); //$NON-NLS-1$\n    try {\n      String cn = config.getClusterName();\n      if ( cn != null ) {\n        NamedCluster namedCluster = jobEntry.getNamedClusterService().read( cn, getMetaStore() );\n        
namedClusterMenu.setSelectedItem( namedCluster );\n        if ( namedCluster == null && StringUtil.hasVariable( cn ) ) {\n          // If we have a variable name in the cluster then we should be using advanced list\n          setSelectedNamedCluster( USE_ADVANCED_OPTIONS_CLUSTER );\n        } else {\n          setSelectedNamedCluster( namedCluster );\n        }\n      } else if ( config.isAdvancedClusterConfigSet() ) {\n        setSelectedNamedCluster( USE_ADVANCED_OPTIONS_CLUSTER );\n      } else {\n        setSelectedNamedCluster( CHOOSE_AVAILABLE_CLUSTER );\n      }\n    } catch ( MetaStoreException e ) {\n      jobEntry.logError( e.getMessage() );\n    }\n  }\n\n  public AbstractModelList<NamedCluster> getNamedClusters() {\n    return this.namedClusters;\n  }\n\n  public void setNamedClusters( AbstractModelList<NamedCluster> namedClusters ) {\n    this.namedClusters = namedClusters;\n  }\n\n  public void setSelectedNamedCluster( NamedCluster namedCluster ) {\n    this.selectedNamedCluster = namedCluster;\n    if ( !suppressEventHandling ) {\n      if ( CHOOSE_AVAILABLE_CLUSTER.equals( namedCluster ) ) {\n        config.setNamedCluster( null );\n      } else if ( !USE_ADVANCED_OPTIONS_CLUSTER.equals( namedCluster ) && namedCluster != null ) {\n        config.setNamedCluster( namedCluster );\n      }\n\n      suppressEventHandling = true;\n      try {\n        firePropertyChange( \"selectedNamedCluster\", null, this.selectedNamedCluster );\n      } finally {\n        suppressEventHandling = false;\n      }\n    }\n  }\n\n  private void updateDeleteButton() {\n    boolean disabled = config.getCustomArguments().size() == 0;\n    XulComponent delete = getXulDomContainer().getDocumentRoot().getElementById( \"delete-button\" );\n    delete.setDisabled( disabled );\n  }\n\n  public boolean isSelectedNamedCluster() {\n    return this.selectedNamedCluster != null;\n  }\n\n  /**\n   * @return element id for the deck of dialog modes (quick setup, advanced)\n   */\n  
protected String getModeDeckElementId() {\n    return \"modeDeck\";\n  }\n\n  @Override\n  protected void beforeInit() {\n    NO_DATABASE =\n        new DatabaseItem( \"@@none@@\", BaseMessages.getString( AbstractSqoopJobEntry.class,\n            \"DatabaseName.ChooseAvailable\" ) );\n    USE_ADVANCED_OPTIONS =\n        new DatabaseItem( \"@@advanced@@\", BaseMessages.getString( AbstractSqoopJobEntry.class,\n            \"DatabaseName.UseAdvancedOptions\" ) );\n    // Suppress event handling while we're initializing to prevent unwanted value changes\n    suppressEventHandling = true;\n    populateDatabases();\n    populateNamedClusters();\n    setModeToggleLabel( BaseMessages.getString( AbstractSqoopJobEntry.class, MODE_I18N_STRINGS[0] ) );\n    // customizeModeToggleLabel(getModeToggleLabelElementId());\n  }\n\n  @Override\n  protected void afterInit() {\n    setUiMode( getConfig().getModeAsEnum() );\n\n    suppressEventHandling = false;\n\n    // Manually set the current database, if it is valid, to sync the UI buttons since we suppressed their event\n    // handling while initializing bindings\n    setSelectedDatabaseConnection( createDatabaseItem( getConfig().getDatabase() ) );\n    initializeNamedClusterSelection();\n\n    updateDeleteButton();\n  }\n\n  /**\n   * @return element id for the deck of advanced dialog modes (general, list, command line)\n   */\n  protected String getAdvancedModeDeckElementId() {\n    return \"advancedModeDeck\";\n  }\n\n  /**\n   * Synchronize the model values from the configuration object to our internal model objects.\n   */\n  protected void syncModel() {\n    advancedArguments.clear();\n    advancedArguments.addAll( config.getAdvancedArgumentsList() );\n  }\n\n  /**\n   * Populate the list of databases from the {@link JobMeta}.\n   */\n  protected void populateDatabases() {\n    databaseConnections.clear();\n    for ( DatabaseMeta dbMeta : jobMeta.getDatabases() ) {\n      if ( jobEntry.isDatabaseSupported( 
dbMeta.getDatabaseInterface().getClass() ) ) {\n        databaseConnections.add( new DatabaseItem( dbMeta.getName() ) );\n      }\n    }\n    updateDatabaseItemsList();\n  }\n\n  protected void populateNamedClusters() {\n    namedClusters.clear();\n    try {\n      namedClusters.addAll( jobEntry.getNamedClusterService().list( getMetaStore() ) );\n    } catch ( MetaStoreException e ) {\n      jobEntry.logError( e.getMessage(), e );\n    }\n    namedClusters.add( CHOOSE_AVAILABLE_CLUSTER );\n    namedClusters.add( USE_ADVANCED_OPTIONS_CLUSTER );\n  }\n\n  /**\n   * This is used to be notified when the connect string changes so we can remove the selection from the database\n   * dropdown.\n   */\n  public void setConnectChanged( String connect ) {\n    // If the connect string changes unselect the database\n    if ( !suppressEventHandling ) {\n      if ( connect != null ) {\n        config.copyConnectionInfoToAdvanced();\n        setSelectedDatabaseConnection( USE_ADVANCED_OPTIONS );\n      } else {\n        setSelectedDatabaseConnection( NO_DATABASE );\n      }\n    }\n  }\n\n\n  public void setAdvancedNamedConfiguration( String value ) {\n    if ( !suppressEventHandling ) {\n      if ( value != null ) {\n        setSelectedNamedCluster( USE_ADVANCED_OPTIONS_CLUSTER );\n      }\n    }\n  }\n\n  public void setUsernameChanged( String username ) {\n    if ( !suppressEventHandling ) {\n      config.copyConnectionInfoToAdvanced();\n      setSelectedDatabaseConnection( USE_ADVANCED_OPTIONS );\n    }\n  }\n\n  public void setPasswordChanged( String password ) {\n    if ( !suppressEventHandling ) {\n      config.copyConnectionInfoToAdvanced();\n      setSelectedDatabaseConnection( USE_ADVANCED_OPTIONS );\n    }\n  }\n\n  /**\n   * @return the list of database connections that back the connections list\n   */\n  public AbstractModelList<DatabaseItem> getDatabaseConnections() {\n    return databaseConnections;\n  }\n\n  /**\n   * @return the selected database from the 
configuration object\n   */\n  public DatabaseItem getSelectedDatabaseConnection() {\n    return selectedDatabaseConnection;\n  }\n\n  /**\n   * Creates a {@link DatabaseItem} based on the name of a database and the existence of a connection string in the\n   * configuration.\n   *\n   * @param database\n   *          Name of database\n   * @return A database item whose name is {@code database}. If {@code database} is null, {@link #NO_DATABASE} is\n   *         returned iff. {@link SqoopConfig#getConnect() getConfig().getConnect()} is\n   *         null; {@link #USE_ADVANCED_OPTIONS} otherwise.\n   */\n  protected DatabaseItem createDatabaseItem( String database ) {\n    return database == null ? ( getConfig().getConnect() != null ? USE_ADVANCED_OPTIONS : NO_DATABASE )\n        : new DatabaseItem( database );\n  }\n\n  /**\n   * Sets the selected database connection. This database will be verified to exist and the appropriate settings within\n   * the model will be set.\n   *\n   * @param selectedDatabaseConnection\n   *          Database item to select\n   */\n  public void setSelectedDatabaseConnection( DatabaseItem selectedDatabaseConnection ) {\n    if ( !suppressEventHandling ) {\n      this.selectedDatabaseConnection = selectedDatabaseConnection;\n      DatabaseMeta databaseMeta =\n          this.selectedDatabaseConnection == null ? 
null : jobMeta.findDatabase( this.selectedDatabaseConnection\n              .getName() );\n      jobEntry.setUsedDbConnection( databaseMeta );\n      boolean validDatabaseSelected = databaseMeta != null;\n      setDatabaseInteractionButtonsDisabled( !validDatabaseSelected );\n      suppressEventHandling = true;\n      try {\n        updateDatabaseItemsList();\n      } finally {\n        suppressEventHandling = false;\n      }\n      // Always update the database info in case something more than the name changes (we can only tell if database\n      // names change via DatabaseItem)\n      if ( validDatabaseSelected ) {\n        try {\n          getConfig().setConnectionInfo( databaseMeta.getName(), databaseMeta.getURL(), databaseMeta.getUsername(),\n              databaseMeta.getPassword() );\n        } catch ( KettleDatabaseException ex ) {\n          jobEntry.logError(\n              BaseMessages.getString( AbstractSqoopJobEntry.class, \"ErrorConfiguringDatabaseConnection\" ), ex );\n        }\n      } else {\n        getConfig().copyConnectionInfoFromAdvanced();\n      }\n      // Always fire a property change\n      suppressEventHandling = true;\n      try {\n        firePropertyChange( SELECTED_DATABASE_CONNECTION, null, this.selectedDatabaseConnection );\n      } finally {\n        suppressEventHandling = false;\n      }\n    }\n  }\n\n  /**\n   * Make sure we have a \"Use Advanced Option\" in the list of database connections if we don't have a valid database\n   * selected but we have an advanced connect string.\n   */\n  protected void updateDatabaseItemsList() {\n    if ( this.selectedDatabaseConnection == null || NO_DATABASE.equals( this.selectedDatabaseConnection ) ) {\n      if ( !databaseConnections.contains( NO_DATABASE ) ) {\n        databaseConnections.add( NO_DATABASE );\n      }\n    } else {\n      if ( databaseConnections.contains( NO_DATABASE ) ) {\n        databaseConnections.remove( NO_DATABASE );\n      }\n    }\n    if ( 
getConfig().getConnectFromAdvanced() != null || getConfig().getUsernameFromAdvanced() != null\n        || getConfig().getPasswordFromAdvanced() != null ) {\n      if ( !databaseConnections.contains( USE_ADVANCED_OPTIONS ) ) {\n        databaseConnections.add( USE_ADVANCED_OPTIONS );\n      }\n    } else {\n      if ( databaseConnections.contains( USE_ADVANCED_OPTIONS ) ) {\n        databaseConnections.remove( USE_ADVANCED_OPTIONS );\n      }\n    }\n  }\n\n  /**\n   * Set the enabled state for all buttons that require a valid database to be selected.\n   *\n   * @param b\n   *          {@code true} if the buttons should be disabled\n   */\n  protected void setDatabaseInteractionButtonsDisabled( boolean b ) {\n    document.getElementById( getEditConnectionButtonId() ).setDisabled( b );\n    document.getElementById( getBrowseTableButtonId() ).setDisabled( b );\n    // document.getElementById(getBrowseSchemaButtonId()).setDisabled(b);\n  }\n\n  /**\n   * @return the text for the \"modeToggleLabel\" label\n   */\n  public String getModeToggleLabel() {\n    return modeToggleLabel;\n  }\n\n  /**\n   * Set the label text for the mode toggle label element\n   *\n   * @param modeToggleLabel\n   */\n  public void setModeToggleLabel( String modeToggleLabel ) {\n    String old = this.modeToggleLabel;\n    this.modeToggleLabel = modeToggleLabel;\n    firePropertyChange( MODE_TOGGLE_LABEL, old, this.modeToggleLabel );\n  }\n\n  @Override\n  protected void setModeToggleLabel( JobEntryMode mode ) {\n    setModeToggleLabel( BaseMessages.getString( AbstractSqoopJobEntry.class, MODE_I18N_STRINGS[mode.ordinal()] ) );\n  }\n\n  protected DatabaseDialog getDatabaseDialog() {\n    if ( databaseDialog == null ) {\n      databaseDialog = new DatabaseDialog( getShell() );\n    }\n    return databaseDialog;\n  }\n\n  public void editConnection() {\n    DatabaseMeta current = jobMeta.findDatabase( config.getDatabase() );\n    if ( current == null ) {\n      return; // nothing to edit, this 
should not be possible through the UI\n    }\n    editDatabaseMeta( current, false );\n  }\n\n  public void newConnection() {\n    editDatabaseMeta( new DatabaseMeta(), true );\n  }\n\n  /**\n   * Open the Database Connection Dialog to edit\n   *\n   * @param database\n   *          Database meta to edit\n   * @param isNew\n   *          Is this database meta new? If so and the user chooses to save the database connection we will make sure to\n   *          save this into the job meta.\n   */\n  protected void editDatabaseMeta( DatabaseMeta database, boolean isNew ) {\n    database.shareVariablesWith( jobMeta );\n    getDatabaseDialog().setDatabaseMeta( database );\n    if ( getDatabaseDialog().open() != null ) {\n      if ( isNew ) {\n        jobMeta.addDatabase( getDatabaseDialog().getDatabaseMeta() );\n      }\n      suppressEventHandling = true;\n      try {\n        populateDatabases();\n      } finally {\n        suppressEventHandling = false;\n      }\n      setSelectedDatabaseConnection( createDatabaseItem( getDatabaseDialog().getDatabaseMeta().getName() ) );\n      jobEntry.setUsedDbConnection( database );\n    }\n  }\n\n  /**\n   * @return the simple name for this controller. This controller can be referenced by this name in the XUL document.\n   */\n  @Override\n  public String getName() {\n    return \"controller\";\n  }\n\n  /**\n   * @return the edit connection button's id. By default this is {@code \"editConnectionButton\"}\n   */\n  public String getEditConnectionButtonId() {\n    return \"editConnectionButton\";\n  }\n\n  /**\n   * @return the browse table button's id. 
By default this is {@code \"browseTableButton\"}\n   */\n  public String getBrowseTableButtonId() {\n    return \"browseTableButton\";\n  }\n\n  /**\n   * @return the browse schema button's id. By default this is {@code \"browseSchemaButton\"}\n   */\n  public String getBrowseSchemaButtonId() {\n    return \"browseSchemaButton\";\n  }\n\n  /**\n   * @return the id of the element responsible for toggling between \"Quick Setup\" and \"Advanced Options\" modes\n   */\n  public String getModeToggleLabelElementId() {\n    return \"mode-toggle-label\";\n  }\n\n  /**\n   * @return the advanced list button's id. By default this is {@code \"advanced-list-button\"}\n   */\n  public String getAdvancedListButtonElementId() {\n    return \"advanced-list-button\";\n  }\n\n  /**\n   * @return the advanced command line button's id. By default this is {@code \"advanced-command-line-button\"}\n   */\n  public String getAdvancedCommandLineButtonElementId() {\n    return \"advanced-command-line-button\";\n  }\n\n  /**\n   * @return the current configuration object. This configuration may be discarded if the dialog is canceled.\n   */\n  @Override\n  public S getConfig() {\n    return config;\n  }\n\n  /**\n   * Test the configuration settings and show a dialog with the feedback.\n   */\n  public void testSettings() {\n    List<String> warnings = jobEntry.getValidationWarnings( getConfig() );\n    if ( !warnings.isEmpty() ) {\n      StringBuilder sb = new StringBuilder();\n      for ( String warning : warnings ) {\n        sb.append( warning ).append( \"\\n\" );\n      }\n      showErrorDialog( BaseMessages.getString( AbstractSqoopJobEntry.class, \"ValidationError.Dialog.Title\" ), sb\n          .toString() );\n      return;\n    }\n    // TODO implement\n    showErrorDialog( \"Error\", \"Not Implemented\" );\n  }\n\n  /**\n   * Toggles between Quick Setup and Advanced Options mode. 
This assumes there exists a deck by id\n   * {@link #getModeDeckElementId()} and it contains two panels.\n   */\n  public void toggleMode() {\n    XulDeck deck = getModeDeck();\n    setUiMode( deck.getSelectedIndex() == 1 ? JobEntryMode.QUICK_SETUP : selectedAdvancedButton.getMode() );\n  }\n\n  /**\n   * Toggles between Quick Setup and Advanced Options mode. This assumes there exists a deck by id\n   * {@link #getModeDeckElementId()} and it contains two panels.\n   *\n   * @param quickMode\n   *          Should quick mode be visible/selected?\n   */\n  protected void toggleQuickMode( boolean quickMode ) {\n    XulDeck deck = getModeDeck();\n    deck.setSelectedIndex( quickMode ? 0 : 1 );\n\n    // Swap the label on the button\n    setModeToggleLabel( BaseMessages\n        .getString( AbstractSqoopJobEntry.class, MODE_I18N_STRINGS[deck.getSelectedIndex()] ) );\n\n    // We toggle to and from quick setup in this method so either the old or the new is always Mode.QUICK_SETUP.\n    // Whichever is not is the mode for the currently selected advanced button\n    JobEntryMode oldMode = deck.getSelectedIndex() == 0 ? selectedAdvancedButton.getMode() : JobEntryMode.QUICK_SETUP;\n    JobEntryMode newMode =\n        JobEntryMode.QUICK_SETUP == oldMode ? 
selectedAdvancedButton.getMode() : JobEntryMode.QUICK_SETUP;\n    updateUiMode( oldMode, newMode );\n  }\n\n  protected void setUiMode( JobEntryMode mode ) {\n    switch ( mode ) {\n      case QUICK_SETUP:\n        if ( selectedNamedCluster != null\n          && !this.selectedNamedCluster.equals( USE_ADVANCED_OPTIONS_CLUSTER )\n          && !suppressEventHandling ) {\n          config.setNamedCluster( selectedNamedCluster );\n        }\n        toggleQuickMode( true );\n        break;\n      case ADVANCED_LIST:\n        setSelectedAdvancedButton( AdvancedButton.LIST );\n        toggleQuickMode( false );\n        break;\n      case ADVANCED_COMMAND_LINE:\n        setSelectedAdvancedButton( AdvancedButton.COMMAND_LINE );\n        toggleQuickMode( false );\n        break;\n      default:\n        throw new RuntimeException( \"unsupported mode: \" + mode );\n    }\n  }\n\n  /**\n   * Update the UI Mode and configure the underlying {@link SqoopConfig} object.\n   *\n   * @param oldMode\n   *          Old mode\n   * @param newMode\n   *          New mode\n   */\n  protected void updateUiMode( JobEntryMode oldMode, JobEntryMode newMode ) {\n    if ( suppressEventHandling ) {\n      return;\n    }\n    if ( JobEntryMode.ADVANCED_COMMAND_LINE.equals( oldMode ) ) {\n      if ( !syncCommandLineToConfig() ) {\n        // Flip back to the advanced command line view\n        // Suppress event handling so we don't re-enter updateUiMode and copy the properties back on top of the command\n        // line\n        suppressEventHandling = true;\n        try {\n          setUiMode( JobEntryMode.ADVANCED_COMMAND_LINE );\n        } finally {\n          suppressEventHandling = false;\n        }\n        return;\n      }\n    } else if ( JobEntryMode.ADVANCED_COMMAND_LINE.equals( newMode ) ) {\n      // Sync config properties -> command line\n      getConfig().setCommandLine( SqoopUtils.generateCommandLineString( getConfig(), null ) );\n    }\n\n    if ( JobEntryMode.ADVANCED_LIST.equals( 
newMode ) ) {\n      // Synchronize the model when we switch to the advanced list to make sure it's fresh\n      syncModel();\n    }\n\n    getConfig().setMode( getMode().name() );\n  }\n\n  /**\n   * @return the current UI mode based off the current state of the components\n   */\n  private JobEntryMode getMode() {\n    XulDeck modeDeck = getModeDeck();\n    XulDeck advancedModeDeck = getAdvancedModeDeck();\n    if ( modeDeck.getSelectedIndex() == 0 ) {\n      return JobEntryMode.QUICK_SETUP;\n    } else {\n      for ( AdvancedButton b : AdvancedButton.values() ) {\n        if ( b.getDeckIndex() == advancedModeDeck.getSelectedIndex() ) {\n          return b.getMode();\n        }\n      }\n    }\n    throw new RuntimeException( \"unknown UI mode\" );\n  }\n\n  /**\n   * Configure the current config object from the command line string. This will invoke\n   * {@link #showErrorDialog(String, String, Throwable)} if an exception occurs.\n   *\n   * @return {@code true} if the command line could be parsed and the config object updated successfully.\n   */\n  protected boolean syncCommandLineToConfig() {\n    try {\n      // Sync command line -> config properties\n      SqoopUtils.configureFromCommandLine( getConfig(), getConfig().getCommandLine(), null );\n      return true;\n    } catch ( Exception ex ) {\n      showErrorDialog( BaseMessages.getString( AbstractSqoopJobEntry.class, \"Dialog.Error\" ), BaseMessages.getString(\n          AbstractSqoopJobEntry.class, \"ErrorConfiguringFromCommandLine\" ), ex );\n    }\n    return false;\n  }\n\n  public void setSelectedAdvancedButton( AdvancedButton button ) {\n    AdvancedButton old = selectedAdvancedButton;\n    selectedAdvancedButton = button;\n    switch ( button ) {\n      case LIST:\n        XulButton advancedList = getAdvancedListButton();\n        advancedList.setSelected( true );\n        getAdvancedCommandLineButton().setSelected( false );\n        break;\n      case COMMAND_LINE:\n        
getAdvancedListButton().setSelected( false );\n        getAdvancedCommandLineButton().setSelected( true );\n        break;\n      default:\n        throw new RuntimeException( \"Unknown button type: \" + button );\n    }\n    toggleAdvancedMode( button );\n    updateUiMode( old == null ? null : old.getMode(), button.getMode() );\n  }\n\n  /**\n   * Toggle the selected deck for advanced mode.\n   *\n   * @param button\n   *          Button that was selected\n   */\n  protected void toggleAdvancedMode( AdvancedButton button ) {\n    getAdvancedModeDeck().setSelectedIndex( button.getDeckIndex() );\n  }\n\n  /**\n   * @return The button to select the advanced list mode\n   */\n  public XulButton getAdvancedListButton() {\n    return getButton( getAdvancedListButtonElementId() );\n  }\n\n  /**\n   * @return The button to select the advanced command line mode\n   */\n  public XulButton getAdvancedCommandLineButton() {\n    return getButton( getAdvancedCommandLineButtonElementId() );\n  }\n\n  /**\n   * @return the deck that shows either the Quick Setup or Advanced Mode UI\n   */\n  protected XulDeck getModeDeck() {\n    return (XulDeck) getXulDomContainer().getDocumentRoot().getElementById( getModeDeckElementId() );\n  }\n\n  /**\n   * @return the deck that contains Advanced Mode panels\n   */\n  protected XulDeck getAdvancedModeDeck() {\n    return (XulDeck) getXulDomContainer().getDocumentRoot().getElementById( getAdvancedModeDeckElementId() );\n  }\n\n  /**\n   * Gets a {@link XulButton} from the current {@link XulDomContainer}\n   *\n   * @param elementId\n   *          Element Id of the button to look up\n   * @return The button with element id {@code elementId} or {@code null} if not found\n   */\n  protected XulButton getButton( String elementId ) {\n    return (XulButton) getXulDomContainer().getDocumentRoot().getElementById( elementId );\n  }\n\n  /**\n   * Callback for clicking the advanced list button\n   */\n  public void advancedListButtonClicked() {\n    
setSelectedAdvancedButton( AdvancedButton.LIST );\n  }\n\n  /**\n   * Callback for clicking the advanced command line button\n   */\n  public void advancedCommandLineButtonClicked() {\n    setSelectedAdvancedButton( AdvancedButton.COMMAND_LINE );\n  }\n\n  /**\n   * Show the schema browse dialog if schemas can be detected and exist for the given database. Set the selected schema\n   * to {@link SqoopConfig#setSchema(String) getConfig().setSchema(schema)}.\n   */\n  public void browseSchema() {\n    DatabaseMeta databaseMeta = jobMeta.findDatabase( getConfig().getDatabase() );\n    try ( Database database = new Database( jobMeta.getParent(), databaseMeta ) ) {\n      database.connect();\n      String[] schemas = database.getSchemas();\n\n      if ( null != schemas && schemas.length > 0 ) {\n        schemas = Const.sortStrings( schemas );\n        EnterSelectionDialog dialog =\n          new EnterSelectionDialog( getShell(), schemas,\n              BaseMessages.getString( AbstractSqoopJobEntry.class, \"AvailableSchemas.Title\" ),\n              BaseMessages.getString( AbstractSqoopJobEntry.class, \"AvailableSchemas.Message\" ) );\n        String schema = dialog.open();\n        if ( schema != null ) {\n          getConfig().setSchema( schema );\n        }\n      } else {\n        showErrorDialog( BaseMessages.getString( AbstractSqoopJobEntry.class, \"Dialog.Error\" ), BaseMessages.getString(\n            AbstractSqoopJobEntry.class, \"NoSchema.Error\" ) );\n      }\n    } catch ( Exception e ) {\n      showErrorDialog( BaseMessages.getString( AbstractSqoopJobEntry.class, \"System.Dialog.Error.Title\" ),\n          BaseMessages.getString( AbstractSqoopJobEntry.class, \"ErrorRetrievingSchemas\" ), e );\n    }\n  }\n\n  /**\n   * Show the Database Explorer Dialog for the database information provided. The provided schema and table will be\n   * selected if already configured. 
Any new selection will be saved in the current configuration.\n   */\n  public void browseTable() {\n    DatabaseMeta databaseMeta = jobMeta.findDatabase( getConfig().getDatabase() );\n    DatabaseExplorerDialog std =\n        new DatabaseExplorerDialog( getShell(), SWT.NONE, databaseMeta, jobMeta.getDatabases() );\n    std.setSelectedSchemaAndTable( getConfig().getSchema(), getConfig().getTable() );\n    if ( std.open() ) {\n      getConfig().setSchema( std.getSchemaName() );\n      getConfig().setTable( std.getTableName() );\n    }\n  }\n\n  protected FileObject getInitialFile( String path ) throws FileSystemException {\n    if ( Const.isEmpty( path ) ) {\n      path = \"/\";\n    }\n\n    NamedCluster namedCluster =\n        jobEntry.getNamedClusterService().getNamedClusterByName( selectedNamedCluster.getName(), getMetaStore() );\n    if ( namedCluster == null  ) {\n      return null;\n    }\n    path = namedCluster.processURLsubstitution( path, getMetaStore(), jobEntry );\n\n    FileObject initialFile = null;\n\n    // only used for UI\n    Spoon spoon = Spoon.getInstance();\n    if ( path != null ) {\n      String fileName = jobEntry.environmentSubstitute( path );\n      if ( fileName != null && !fileName.equals( \"\" ) ) {\n        try {\n          initialFile = KettleVFS.getInstance( spoon.getExecutionBowl() ).getFileObject( fileName );\n          if ( namedCluster.isMapr() ) {\n            if ( !initialFile.getName().getScheme().startsWith( Schemes.MAPRFS_SCHEME ) ) {\n              return null;\n            }\n          } else if ( !initialFile.getName().getScheme().startsWith( Schemes.HDFS_SCHEME ) ) {\n            return null;\n          }\n        } catch ( Exception ex ) {\n          return null;\n        }\n      }\n    }\n\n    return initialFile;\n  }\n\n  private IMetaStore getMetaStore() {\n    return Spoon.getInstance().getMetaStore();\n  }\n\n  protected void extractNamedClusterFromVfsFileChooser() {\n    VfsFileChooserDialog dialog = 
Spoon.getInstance().getVfsFileChooserDialog( null, null );\n    CustomVfsUiPanel currentPanel = dialog.getCurrentPanel();\n    if ( currentPanel instanceof HadoopVfsFileChooserDialog ) {\n      HadoopVfsFileChooserDialog hadoopVfsFileChooserDialog = (HadoopVfsFileChooserDialog) currentPanel;\n      selectedNamedCluster = hadoopVfsFileChooserDialog.getNamedClusterWidget().getSelectedNamedCluster();\n    }\n    if ( selectedNamedCluster != null ) {\n      setSelectedNamedCluster( selectedNamedCluster );\n    }\n  }\n\n  public void newCustomArgument() {\n    config.getCustomArguments().add( new PropertyEntry() );\n    updateDeleteButton();\n  }\n\n  public void deleteCustomArgument() {\n    XulTree customVariablesTree = (XulTree) container.getDocumentRoot().getElementById( \"custom-table\" );\n    Collection<PropertyEntry> selectedItems = customVariablesTree.getSelectedItems();\n    for ( PropertyEntry pe : selectedItems ) {\n      try {\n        getConfig().getCustomArguments().remove( pe );\n      } catch ( Exception e ) {\n        customVariablesTree.setElements( getConfig().getCustomArguments() );\n      }\n    }\n    updateDeleteButton();\n  }\n\n  public void newNamedCluster( String stepName ) {\n    XulDialog xulDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getElementById( stepName );\n    Shell shell = (Shell) xulDialog.getRootObject();\n    String newNamedCluster = ncDelegate.newNamedCluster( jobMeta, null, shell );\n    // a null name means the cancel button was pressed and the clusters were not changed\n    if ( newNamedCluster != null ) {\n      populateNamedClusters();\n      selectNamedCluster( newNamedCluster );\n    }\n  }\n\n  public void editNamedCluster( String stepName ) {\n    if ( isSelectedNamedCluster() ) {\n      XulDialog xulDialog = (XulDialog) getXulDomContainer().getDocumentRoot().getElementById( stepName );\n      Shell shell = (Shell) xulDialog.getRootObject();\n      String clusterName = ncDelegate.editNamedCluster( null, 
selectedNamedCluster, shell );\n      // a null name means the cancel button was pressed and the clusters were not changed\n      if ( clusterName != null ) {\n        populateNamedClusters();\n        selectNamedCluster( clusterName );\n      }\n    }\n  }\n\n  public void selectNamedCluster( String configName ) {\n    @SuppressWarnings( \"unchecked\" )\n    XulMenuList<NamedCluster> namedConfigMenu =\n      (XulMenuList<NamedCluster>) container.getDocumentRoot().getElementById( \"named-clusters\" );\n    for ( NamedCluster nc : getNamedClusters() ) {\n      if ( configName != null && configName.equals( nc.getName() ) ) {\n        namedConfigMenu.setSelectedItem( nc );\n      }\n    }\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/ui/AbstractSqoopJobEntryDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop.ui;\n\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.big.data.kettle.plugins.sqoop.AbstractSqoopJobEntry;\nimport org.pentaho.big.data.kettle.plugins.sqoop.SqoopConfig;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.job.entry.JobEntryDialogInterface;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.ui.core.database.dialog.tags.ExtTextbox;\nimport org.pentaho.di.ui.job.entry.JobEntryDialog;\nimport org.pentaho.di.ui.spoon.XulSpoonSettingsManager;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.XulRunner;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.ui.xul.binding.DefaultBindingFactory;\nimport org.pentaho.ui.xul.swt.SwtXulLoader;\nimport org.pentaho.ui.xul.swt.SwtXulRunner;\n\nimport java.util.Enumeration;\nimport java.util.ResourceBundle;\n\n/**\n * Base functionality for a XUL-based Sqoop job entry dialog\n * \n * @param <S>\n *          Type of Sqoop configuration object this dialog depends upon. 
Must match the configuration object the job\n *          entry expects.\n */\npublic abstract class AbstractSqoopJobEntryDialog<S extends SqoopConfig, E extends AbstractSqoopJobEntry<S>> extends\n    JobEntryDialog implements JobEntryDialogInterface {\n\n  protected ResourceBundle bundle = new ResourceBundle() {\n    @Override\n    protected Object handleGetObject( String key ) {\n      return BaseMessages.getString( getMessagesClass(), key );\n    }\n\n    @Override\n    public Enumeration<String> getKeys() {\n      return null;\n    }\n  };\n\n  private AbstractSqoopJobEntryController<S, E> controller;\n\n  @SuppressWarnings( \"unchecked\" )\n  protected AbstractSqoopJobEntryDialog( Shell parent, JobEntryInterface jobEntry, Repository rep, JobMeta jobMeta )\n    throws XulException {\n    super( parent, jobEntry, rep, jobMeta );\n    init( (E) jobEntry );\n  }\n\n  /**\n   * @return the name of the class to use to look up localized messages\n   */\n  protected abstract Class<?> getMessagesClass();\n\n  /**\n   * @return the file name for the XUL document to load for this dialog\n   */\n  protected abstract String getXulFile();\n\n  /**\n   * Create the controller for this dialog\n   * \n   * @param container\n   *          XUL DOM container loaded from the file path returned by {@link #getXulFile()}\n   * @param jobEntry\n   *          Job entry this dialog supports\n   * @param bindingFactory\n   *          Binding factory to create bindings with\n   * @return Controller capable of handling requests for this dialog\n   */\n  protected abstract AbstractSqoopJobEntryController<S, E> createController( XulDomContainer container, E jobEntry,\n      BindingFactory bindingFactory );\n\n  /**\n   * Initialize this dialog for the job entry instance provided.\n   * \n   * @param jobEntry\n   *          The job entry this dialog supports.\n   */\n  protected void init( E jobEntry ) throws XulException {\n    SwtXulLoader swtXulLoader = new SwtXulLoader();\n    // Register 
the settings manager so dialog position and size is restored\n    swtXulLoader.setSettingsManager( XulSpoonSettingsManager.getInstance() );\n    swtXulLoader.registerClassLoader( getClass().getClassLoader() );\n    // Register Kettle's variable text box so we can reference it from XUL\n    swtXulLoader.register( \"VARIABLETEXTBOX\", ExtTextbox.class.getName() );\n    swtXulLoader.setOuterContext( shell );\n\n    // Load the XUL document with the dialog defined in it\n    XulDomContainer container = swtXulLoader.loadXul( getXulFile(), bundle );\n\n    // Create the controller with a default binding factory for the document we just loaded\n    BindingFactory bf = new DefaultBindingFactory();\n    bf.setDocument( container.getDocumentRoot() );\n    controller = createController( container, jobEntry, bf );\n    container.addEventHandler( controller );\n\n    // Load up the SWT-XUL runtime and initialize it with our container\n    final XulRunner runner = new SwtXulRunner();\n    runner.addContainer( container );\n    runner.initialize();\n  }\n\n  @Override\n  public JobEntryInterface open() {\n    return controller.open();\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/ui/SqoopExportJobEntryController.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop.ui;\n\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.pentaho.big.data.kettle.plugins.hdfs.vfs.Schemes;\nimport org.pentaho.big.data.kettle.plugins.sqoop.AbstractSqoopJobEntry;\nimport org.pentaho.big.data.kettle.plugins.sqoop.SqoopExportConfig;\nimport org.pentaho.big.data.kettle.plugins.sqoop.SqoopExportJobEntry;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.binding.Binding;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\nimport java.util.Collection;\n\n/**\n * Controller for the Sqoop Export Dialog.\n */\npublic class SqoopExportJobEntryController extends\n    AbstractSqoopJobEntryController<SqoopExportConfig, SqoopExportJobEntry> {\n\n  public static final String SQOOP_EXPORT_STEP_NAME = \"sqoop-export\";\n\n  public SqoopExportJobEntryController( JobMeta jobMeta, XulDomContainer container, SqoopExportJobEntry sqoopJobEntry,\n      BindingFactory bindingFactory ) {\n    super( jobMeta, container, sqoopJobEntry, bindingFactory );\n  }\n\n  @Override\n  public String getDialogElementId() {\n    return SQOOP_EXPORT_STEP_NAME;\n  }\n\n  @Override\n  protected void createBindings( SqoopExportConfig config, XulDomContainer container, BindingFactory bindingFactory,\n      Collection<Binding> 
bindings ) {\n    super.createBindings( config, container, bindingFactory, bindings );\n\n    bindings.add( bindingFactory.createBinding( config, SqoopExportConfig.EXPORT_DIR, SqoopExportConfig.EXPORT_DIR, \"value\" ) );\n  }\n\n  public void browseForExportDirectory() {\n    try {\n      String[] schemeRestrictions = new String[1];\n      if ( selectedNamedCluster != null && !\"false\".equals( selectedNamedCluster.getVariable( \"valid\" ) ) ) {\n        schemeRestrictions[0] = selectedNamedCluster.isMapr() ? Schemes.MAPRFS_SCHEME : Schemes.HDFS_SCHEME;\n      } else {\n        // must select cluster\n        return;\n      }\n\n      String path = getConfig().getExportDir();\n      FileObject initialFile = getInitialFile( path );\n\n      if ( initialFile == null ) {\n        showErrorDialog( BaseMessages.getString( PKG, \"Sqoop.JobEntry.Connection.Error.title\" ),\n            BaseMessages.getString( PKG, \"Sqoop.JobEntry.Connection.error\" ) );\n        return;\n      }\n\n      FileObject exportDir =\n          browseVfs( null, initialFile, VfsFileChooserDialog.VFS_DIALOG_OPEN_DIRECTORY, schemeRestrictions,\n              false, schemeRestrictions[0], selectedNamedCluster, false, false );\n      VfsFileChooserDialog dialog = Spoon.getInstance().getVfsFileChooserDialog( null, null );\n      boolean okPressed = dialog.okPressed;\n      if ( okPressed ) {\n        getConfig().setExportDir( exportDir != null ? 
exportDir.getName().getPath() : null );\n        extractNamedClusterFromVfsFileChooser();\n      }\n    } catch ( KettleFileException | FileSystemException e ) {\n      getJobEntry().logError( BaseMessages.getString( AbstractSqoopJobEntry.class, \"ErrorBrowsingDirectory\" ), e );\n    }\n  }\n\n  @Override\n  public void accept() {\n    // Set the database meta based on the current database\n    jobEntry.setDatabaseMeta( jobMeta.findDatabase( config.getDatabase() ) );\n    super.accept();\n  }\n\n  public void editNamedCluster() {\n    editNamedCluster( SQOOP_EXPORT_STEP_NAME );\n  }\n\n  public void newNamedCluster() {\n    newNamedCluster( SQOOP_EXPORT_STEP_NAME );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/java/org/pentaho/big/data/kettle/plugins/sqoop/ui/SqoopImportJobEntryController.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop.ui;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.pentaho.big.data.kettle.plugins.hdfs.vfs.Schemes;\nimport org.pentaho.big.data.kettle.plugins.sqoop.AbstractSqoopJobEntry;\nimport org.pentaho.big.data.kettle.plugins.sqoop.SqoopImportConfig;\nimport org.pentaho.big.data.kettle.plugins.sqoop.SqoopImportJobEntry;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.binding.Binding;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\nimport java.util.Collection;\n\n/**\n * Controller for the Sqoop Import Dialog.\n */\npublic class SqoopImportJobEntryController extends\n    AbstractSqoopJobEntryController<SqoopImportConfig, SqoopImportJobEntry> {\n\n  public static final String SQOOP_IMPORT_STEP_NAME = \"sqoop-import\";\n\n  public SqoopImportJobEntryController( JobMeta jobMeta, XulDomContainer container, SqoopImportJobEntry sqoopJobEntry,\n      BindingFactory bindingFactory ) {\n    super( jobMeta, container, sqoopJobEntry, bindingFactory );\n  }\n\n  @Override\n  public String getDialogElementId() {\n    return \"sqoop-import\";\n  }\n\n  @Override\n  protected void createBindings( SqoopImportConfig config, XulDomContainer container, BindingFactory bindingFactory,\n      Collection<Binding> bindings ) 
{\n    super.createBindings( config, container, bindingFactory, bindings );\n\n    bindings.add( bindingFactory.createBinding( config, SqoopImportConfig.TARGET_DIR, SqoopImportConfig.TARGET_DIR, \"value\" ) );\n  }\n\n  public void browseForTargetDirectory() {\n    try {\n      String[] schemeRestrictions = new String[1];\n      if ( selectedNamedCluster != null && !\"false\".equals( selectedNamedCluster.getVariable( \"valid\" ) ) ) {\n        schemeRestrictions[0] = selectedNamedCluster.isMapr() ? Schemes.MAPRFS_SCHEME : Schemes.HDFS_SCHEME;\n      } else {\n        // must select cluster\n        return;\n      }\n\n      String path = getConfig().getTargetDir();\n      FileObject initialFile = getInitialFile( path );\n\n      if ( initialFile == null ) {\n        showErrorDialog( BaseMessages.getString( PKG, \"Sqoop.JobEntry.Connection.Error.title\" ),\n            BaseMessages.getString( PKG, \"Sqoop.JobEntry.Connection.error\" ) );\n        return;\n      }\n\n      FileObject targetDir =\n          browseVfs( null, initialFile, VfsFileChooserDialog.VFS_DIALOG_OPEN_DIRECTORY, schemeRestrictions,\n              false, schemeRestrictions[0], selectedNamedCluster, false, false );\n      VfsFileChooserDialog dialog = Spoon.getInstance().getVfsFileChooserDialog( null, null );\n      boolean okPressed = dialog.okPressed;\n      if ( okPressed ) {\n        getConfig().setTargetDir( targetDir != null ? targetDir.getName().getPath() : null );\n        extractNamedClusterFromVfsFileChooser();\n      }\n    } catch ( KettleFileException | FileSystemException e ) {\n      getJobEntry().logError( BaseMessages.getString( AbstractSqoopJobEntry.class, \"ErrorBrowsingDirectory\" ), e );\n    }\n  }\n\n  public void editNamedCluster() {\n    editNamedCluster( SQOOP_IMPORT_STEP_NAME );\n  }\n\n  public void newNamedCluster() {\n    newNamedCluster( SQOOP_IMPORT_STEP_NAME );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/resources/org/pentaho/big/data/kettle/plugins/sqoop/messages/messages_en_US.properties",
    "content": "BigData.Category.Description=Big Data\nSqoop.Import.PluginName=Sqoop import\nSqoop.Import.PluginDescription=Import data from a relational database (RDBMS) into the Hadoop Distributed File System (HDFS) using Apache Sqoop\nSqoop.Export.PluginName=Sqoop export\nSqoop.Export.PluginDescription=Export data from the Hadoop Distributed File System (HDFS) into a relational database (RDBMS) using Apache Sqoop\n\n# General UI Properties\nSqoop.JobEntry.Name.Label=Name:\nSqoop.JobEntry.Source.Group=Source\nSqoop.JobEntry.Target.Group=Target\nSqoop.JobEntry.NamenodeHost.Label=Namenode Host:\nSqoop.JobEntry.NamenodePort.Label=Namenode Port:\nSqoop.JobEntry.ShimIdentifier.Label=Driver Version:\nSqoop.JobEntry.JobtrackerHost.Label=Jobtracker Host:\nSqoop.JobEntry.JobtrackerPort.Label=Jobtracker Port:\nSqoop.JobEntry.Connection.Label=Database Connection:\nSqoop.JobEntry.Schema.Label=Schema:\nSqoop.JobEntry.Table.Label=Table:\nSqoop.JobEntry.New.Button.Text=New...\nSqoop.JobEntry.Edit.Button.Text=Edit...\nSqoop.JobEntry.Browse.Button.Text=Browse...\nSqoop.JobEntry.Test.Button.Text=Test\nSqoop.JobEntry.QuickSetup.Button.Text=Quick Setup\nSqoop.JobEntry.AdvancedOptions.Button.Text=Advanced Options\nSqoop.JobEntry.CommandLine.Label=Command Line:\nSqoop.JobEntry.Advanced.List.Button.Text=List View\nSqoop.JobEntry.Advanced.CommandLine.Button.Text=Command Line View\nSqoop.JobEntry.AdvancedTable.Column.Name.Label=Argument\nSqoop.JobEntry.AdvancedTable.Column.Value.Label=Value\nSqoop.JobEntry.CustomAdvancedTable.Column.Name.Label=Name\nSqoop.JobEntry.CustomAdvancedTable.Column.Value.Label=Value\nSqoop.JobEntry.CustomArguments.Label=Arguments:\nSqoop.JobEntry.DefaultTab.Label=Default\nSqoop.JobEntry.CustomTab.Label=Custom\n\n# Import specific properties\nSqoop.JobEntry.Import.Dialog.Title=Sqoop import\nSqoop.JobEntry.Import.Target.Directory.Label=Target Directory:\n\n# Export specific properties\nSqoop.JobEntry.Export.Dialog.Title=Sqoop 
export\nSqoop.JobEntry.Export.Export.Directory.Label=Export Directory:\n\nSqoop.JobEntry.NamedCluster.Label=Hadoop Cluster:\nSqoop.JobEntry.NamedCluster.Edit=Edit...\nSqoop.JobEntry.NamedCluster.New=New...\n\n# Advanced view properties\nNamenodeHost.Label=Namenode Host\nNamenodePort.Label=Namenode Port\nClusterName.Label=Hadoop Cluster\nShimIdentifier.Label=Driver Version\nJobtrackerHost.Label=Jobtracker Host\nJobtrackerPort.Label=Jobtracker Port\nBlockingExecution.Label=Block when executing?\nBlockingPollingInterval.Label=Polling interval (in ms)\nHBaseZookeeperQuorum.Label=HBase Zookeeper Quorum\nHBaseZookeeperClientPort.Label=HBase Zookeeper Port\n\nDialog.Accept=OK\nDialog.Cancel=Cancel\nDialog.Error=Error\nDialog.Help=Help\n\nErrorAttachingLogging=Unable to capture Sqoop logging. Logging will be unavailable from Kettle.\nErrorDetachingLogging=Unable to detach logging wrapper from Sqoop. Logging may be duplicated.\nErrorConfiguringHadoopEnvironment=Error configuring Hadoop environment\nErrorLoadingHadoopConnectionInformation=Error loading Hadoop connection information\nErrorRunningSqoopTool=Error running Sqoop\nErrorConfiguringDatabaseConnection=Error determining connect URL for database connection {0}\nErrorRetrievingSchemas=Error getting schemas list\nErrorBrowsingDirectory=Error browsing for directory\nErrorConfiguringFromCommandLine=Error configuring from command line. Please fix any errors to switch to another view.\nErrorUnknownArguments=Unknown argument(s): {0}\nErrorMustConfigureDatabaseConnectionFromList=Cannot perform this operation when the database connection information is provided via Advanced Options.\nErrorProhibitedEmptyString=\"{0}\" is not allowed to be initialized with an empty string.\nErrorLoadNamedCluster=Failed to load named cluster \"{0}\". 
Falling back to one stored within the job.\n\nAvailableSchemas.Title=Available schemas\nAvailableSchemas.Message=Please select a schema name\n\nNoSchema.Error=We cannot find any schemas\n\nDatabaseName.ChooseAvailable=Choose Available\nDatabaseName.UseAdvancedOptions=Use Advanced Options\n\nValidationError.Dialog.Title=Configuration Error\nValidationError.Connect.Message=JDBC connect string is required.\nValidationError.BlockingPollingInterval.Message=Expected number for blocking polling interval: {0}.\n\nSqoop.JobEntry.Connection.Error.title=Unable to Connect\nSqoop.JobEntry.Connection.error=Unable to establish a connection to the Hadoop cluster. Check the cluster configuration you''re using."
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/resources/org/pentaho/big/data/kettle/plugins/sqoop/xul/SqoopExportJobEntry.xul",
    "content": "<?xml version=\"1.0\"?>\n<?xml-stylesheet href=\"chrome://global/skin/\" type=\"text/css\"?>\n<window id=\"hadoop-window-wrapper\" onload=\"controller.init()\">\n<dialog id=\"sqoop-export\"\n        xmlns=\"http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul\"\n        xmlns:pen=\"http://www.pentaho.org/2008/xul\"\n        title=\"${Sqoop.JobEntry.Export.Dialog.Title}\"\n        resizable=\"true\"\n        appicon=\"ui/images/spoon.ico\"\n        width=\"650\"\n        height=\"400\"\n        buttons=\"\"\n        buttonalign=\"center\">\n  <!--\n  buttons=\"accept,cancel,extra1,extra2\"\n  -->\n  <vbox>\n    <grid>\n      <columns>\n        <column/>\n        <column/>\n      </columns>\n      <rows>\n        <row>\n          <vbox flex=\"1\">\n            <label value=\"${Sqoop.JobEntry.Name.Label}\"/>\n            <textbox id=\"jobEntryName\" flex=\"1\" multiline=\"false\"/>\n          </vbox>\n          <hbox flex=\"1\">\n            <spacer flex=\"1\"/>\n            <image src=\"sqoop-export.png\" />\n          </hbox>\n        </row>\n      </rows>\n    </grid>\n  </vbox>\n  <vbox flex=\"1\">\n    <deck id=\"modeDeck\" flex=\"1\">\n      <vbox id=\"quickSetupPanel\">\n        <groupbox>\n          <caption label=\"${Sqoop.JobEntry.Source.Group}\"/>\n          <grid>\n            <columns>\n              <column/>\n              <column flex=\"1\"/>\n            </columns>\n            <rows>\n                <row>\n                    <label value=\"${Sqoop.JobEntry.NamedCluster.Label}\" width=\"200\"/>\n                    <grid>\n                        <columns>\n                            <column/>\n                            <column/>\n                            <column/>\n                        </columns>\n                        <rows>\n                            <row>\n                                <menulist id=\"named-clusters\" pen:binding=\"name\">\n                                    <menupopup>\n                   
                 </menupopup>\n                                </menulist>\n                                <button id=\"editNamedCluster\" label=\"${Sqoop.JobEntry.NamedCluster.Edit}\" onclick=\"controller.editNamedCluster()\"/>\n                                <button id=\"newNamedCluster\" label=\"${Sqoop.JobEntry.NamedCluster.New}\" onclick=\"controller.newNamedCluster()\"/>\n                            </row>\n                        </rows>\n                    </grid>\n                </row>\n                <row>\n                <label value=\"${Sqoop.JobEntry.Export.Export.Directory.Label}\"/>\n                <!-- Wrap with an hbox so all components align -->\n                <hbox flex=\"1\">\n                  <textbox pen:customclass=\"variabletextbox\" id=\"exportDir\" flex=\"1\"/>\n                  <button label=\"${Sqoop.JobEntry.Browse.Button.Text}\" onclick=\"controller.browseForExportDirectory();\"/>\n                </hbox>\n              </row>\n            </rows>\n          </grid>\n        </groupbox>\n        <groupbox>\n          <caption label=\"${Sqoop.JobEntry.Target.Group}\"/>\n          <grid align=\"start\">\n            <columns>\n              <column/>\n              <column/>\n            </columns>\n            <rows>\n              <row>\n                <label value=\"${Sqoop.JobEntry.Connection.Label}\"/>\n                <hbox flex=\"1\">\n                  <menulist id=\"connection\" flex=\"1\">\n                    <menupopup>\n                    </menupopup>\n                  </menulist>\n                  <button id=\"editConnectionButton\" label=\"${Sqoop.JobEntry.Edit.Button.Text}\" onclick=\"controller.editConnection();\"/>\n                  <button label=\"${Sqoop.JobEntry.New.Button.Text}\" onclick=\"controller.newConnection();\"/>\n                </hbox>\n              </row>\n              <!-- TODO Determine if schema is required\n              <row>\n                <label 
value=\"${Sqoop.JobEntry.Schema.Label}\"/>\n                <hbox flex=\"1\">\n                  <textbox pen:customclass=\"variabletextbox\" id=\"schema\" flex=\"1\"/>\n                  <button id=\"browseSchemaButton\" label=\"${Sqoop.JobEntry.Browse.Button.Text}\" onclick=\"controller.browseSchema();\"/>\n                </hbox>\n              </row>\n              -->\n              <row align=\"start\">\n                <label value=\"${Sqoop.JobEntry.Table.Label}\" align=\"start\"/>\n                <hbox flex=\"1\" align=\"start\">\n                  <textbox pen:customclass=\"variabletextbox\" id=\"table\" flex=\"1\"/>\n                  <button id=\"browseTableButton\" label=\"${Sqoop.JobEntry.Browse.Button.Text}\" onclick=\"controller.browseTable();\"/>\n                </hbox>\n              </row>\n            </rows>\n          </grid>\n        </groupbox>\n      </vbox>\n      <pen:include src=\"advanced-mode.xul\"/>\n    </deck>\n    <pen:include src=\"button-bar.xul\"/>\n  </vbox>\n</dialog>\n</window>\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/resources/org/pentaho/big/data/kettle/plugins/sqoop/xul/SqoopImportJobEntry.xul",
    "content": "<?xml version=\"1.0\"?>\n<?xml-stylesheet href=\"chrome://global/skin/\" type=\"text/css\"?>\n<window id=\"hadoop-window-wrapper\" onload=\"controller.init()\">\n<dialog id=\"sqoop-import\"\n        xmlns=\"http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul\"\n        xmlns:pen=\"http://www.pentaho.org/2008/xul\"\n        title=\"${Sqoop.JobEntry.Import.Dialog.Title}\"\n        resizable=\"true\"\n        appicon=\"ui/images/spoon.ico\"\n        width=\"650\"\n        height=\"400\"\n        buttons=\"\"\n        buttonalign=\"center\">\n  <!--\n  buttons=\"accept,cancel,extra1,extra2\"\n  -->\n  <vbox>\n    <grid>\n      <columns>\n        <column/>\n        <column/>\n      </columns>\n      <rows>\n        <row>\n          <vbox flex=\"1\">\n            <label value=\"${Sqoop.JobEntry.Name.Label}\"/>\n            <textbox id=\"jobEntryName\" flex=\"1\" multiline=\"false\"/>\n          </vbox>\n          <hbox flex=\"1\">\n            <spacer flex=\"1\"/>\n            <image src=\"sqoop-import.png\"/>\n          </hbox>\n        </row>\n      </rows>\n    </grid>\n  </vbox>\n  <vbox flex=\"1\">\n    <deck id=\"modeDeck\" flex=\"1\">\n      <vbox id=\"quickSetupPanel\">\n        <groupbox>\n          <caption label=\"${Sqoop.JobEntry.Source.Group}\"/>\n          <grid align=\"start\">\n            <columns>\n              <column/>\n              <column/>\n            </columns>\n            <rows>\n              <row>\n                <label value=\"${Sqoop.JobEntry.Connection.Label}\"/>\n                <hbox flex=\"1\">\n                  <menulist id=\"connection\" flex=\"1\">\n                    <menupopup>\n                    </menupopup>\n                  </menulist>\n                  <button id=\"editConnectionButton\" label=\"${Sqoop.JobEntry.Edit.Button.Text}\" onclick=\"controller.editConnection();\"/>\n                  <button label=\"${Sqoop.JobEntry.New.Button.Text}\" onclick=\"controller.newConnection();\"/>\n        
        </hbox>\n              </row>\n              <!-- TODO Determine if schema is required\n              <row>\n                <label value=\"${Sqoop.JobEntry.Schema.Label}\"/>\n                <hbox flex=\"1\">\n                  <textbox pen:customclass=\"variabletextbox\" id=\"schema\" flex=\"1\"/>\n                  <button id=\"browseSchemaButton\" label=\"${Sqoop.JobEntry.Browse.Button.Text}\" onclick=\"controller.browseSchema();\"/>\n                </hbox>\n              </row>\n              -->\n              <row align=\"start\">\n                <label value=\"${Sqoop.JobEntry.Table.Label}\" align=\"start\"/>\n                <hbox flex=\"1\" align=\"start\">\n                  <textbox pen:customclass=\"variabletextbox\" id=\"table\" flex=\"1\"/>\n                  <button id=\"browseTableButton\" label=\"${Sqoop.JobEntry.Browse.Button.Text}\" onclick=\"controller.browseTable();\"/>\n                </hbox>\n              </row>\n            </rows>\n          </grid>\n        </groupbox>\n        <groupbox>\n          <caption label=\"${Sqoop.JobEntry.Target.Group}\"/>\n          <grid>\n            <columns>\n              <column/>\n              <column flex=\"1\"/>\n            </columns>\n            <rows>\n                <row>\n                    <label value=\"${Sqoop.JobEntry.NamedCluster.Label}\" width=\"200\"/>\n                    <grid>\n                        <columns>\n                            <column/>\n                            <column/>\n                            <column/>\n                        </columns>\n                        <rows>\n                            <row>\n                                <menulist id=\"named-clusters\" pen:binding=\"name\">\n                                    <menupopup>\n                                    </menupopup>\n                                </menulist>\n                                <button id=\"editNamedCluster\" label=\"${Sqoop.JobEntry.NamedCluster.Edit}\" 
onclick=\"controller.editNamedCluster()\"/>\n                                <button id=\"newNamedCluster\" label=\"${Sqoop.JobEntry.NamedCluster.New}\" onclick=\"controller.newNamedCluster()\"/>\n                            </row>\n                        </rows>\n                    </grid>\n                </row>\n              <row>\n                <label value=\"${Sqoop.JobEntry.Import.Target.Directory.Label}\"/>\n                <!-- Wrap with an hbox so all components align -->\n                <hbox flex=\"1\">\n                  <textbox pen:customclass=\"variabletextbox\" id=\"targetDir\" flex=\"1\"/>\n                  <button label=\"${Sqoop.JobEntry.Browse.Button.Text}\" onclick=\"controller.browseForTargetDirectory();\"/>\n                </hbox>\n              </row>\n            </rows>\n          </grid>\n        </groupbox>\n      </vbox>\n      <pen:include src=\"advanced-mode.xul\"/>\n    </deck>\n    <pen:include src=\"button-bar.xul\"/>\n  </vbox>\n</dialog>\n</window>"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/resources/org/pentaho/big/data/kettle/plugins/sqoop/xul/advanced-mode.xul",
    "content": "<?xml version=\"1.0\"?>\n<!-- Layout that defines what an \"Advanced Options\" mode looks like for Sqoop Import/Export dialogs -->\n<vbox id=\"advancedMode\" flex=\"1\" xmlns:pen=\"http://www.pentaho.org/2008/xul\">\n  <hbox>\n    <spacer flex=\"1\"/>\n    <button id=\"advanced-list-button\" onclick=\"controller.advancedListButtonClicked();\" tooltiptext=\"${Sqoop.JobEntry.Advanced.List.Button.Text}\" image=\"list_view.png\" selectedimage=\"list_view_selected.png\" type=\"radio\" selected=\"true\"/>\n    <spacer width=\"10\"/>\n    <button id=\"advanced-command-line-button\" onclick=\"controller.advancedCommandLineButtonClicked();\" tooltiptext=\"${Sqoop.JobEntry.Advanced.CommandLine.Button.Text}\" image=\"command_line_view.png\" selectedimage=\"command_line_view_selected.png\" type=\"radio\"/>\n    <spacer flex=\"1\"/>\n  </hbox>\n  <deck id=\"advancedModeDeck\" flex=\"1\">\n    <vbox flex=\"1\">\n      <tabbox flex=\"1\">\n        <tabs>\n          <tab label=\"${Sqoop.JobEntry.DefaultTab.Label}\"/>\n          <tab label=\"${Sqoop.JobEntry.CustomTab.Label}\"/>\n        </tabs>\n\n        <tabpanels flex=\"1\">\n          <tabpanel flex=\"1\">\n            <tree id=\"advanced-table\" hidecolumnpicker=\"true\" autocreatenewrows=\"false\" flex=\"1\">\n              <treecols>\n                <treecol id=\"name-col\" editable=\"false\" flex=\"1\" label=\"${Sqoop.JobEntry.AdvancedTable.Column.Name.Label}\" pen:binding=\"displayName\"/>\n                <treecol id=\"value-col\" editable=\"true\" flex=\"1\" label=\"${Sqoop.JobEntry.AdvancedTable.Column.Value.Label}\" pen:binding=\"value\"/>\n              </treecols>\n              <treechildren />\n            </tree>\n          </tabpanel>\n          <tabpanel flex=\"1\">\n            <hbox>\n              <label value=\"${Sqoop.JobEntry.CustomArguments.Label}\"/>\n              <spacer flex=\"1\"/>\n              <button id=\"delete-button\" onclick=\"controller.deleteCustomArgument()\" 
image=\"generic-delete.png\" pen:disabledimage=\"disabled-generic-delete.png\"/>\n            </hbox>\n            <tree id=\"custom-table\" hidecolumnpicker=\"true\" autocreatenewrows=\"true\" editable=\"true\" flex=\"1\" newitembinding=\"controller.newCustomArgument()\">\n              <treecols>\n                <treecol id=\"custom-name-col\" editable=\"true\" flex=\"1\" label=\"${Sqoop.JobEntry.CustomAdvancedTable.Column.Name.Label}\" pen:binding=\"key\"/>\n                <treecol id=\"custom-value-col\" editable=\"true\" flex=\"1\" label=\"${Sqoop.JobEntry.CustomAdvancedTable.Column.Value.Label}\" pen:binding=\"value\"/>\n              </treecols>\n              <treechildren />\n            </tree>\n          </tabpanel>\n        </tabpanels>\n      </tabbox>\n    </vbox>\n    <vbox flex=\"1\">\n      <label value=\"${Sqoop.JobEntry.CommandLine.Label}\"/>\n      <textbox pen:customclass=\"variabletextbox\" multiLine=\"true\" id=\"commandLine\" flex=\"1\"></textbox>\n    </vbox>\n  </deck>\n</vbox>"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/main/resources/org/pentaho/big/data/kettle/plugins/sqoop/xul/button-bar.xul",
    "content": "<?xml version=\"1.0\"?>\n<!-- A button bar that contains a Quick Setup/Advanced Options toggle, test, ok, and cancel buttons. -->\n<grid>\n  <columns>\n    <column align=\"center\"/>\n    <column/>\n    <column/>\n    <column/>\n  </columns>\n  <rows>\n    <row>\n      <label flex=\"1\" id=\"mode-toggle-label\" value=\"${Sqoop.JobEntry.AdvancedOptions.Button.Text}\" onclick=\"controller.toggleMode();\"/>\n      <button label=\"${Dialog.Help}\" onclick=\"controller.help();\"/>\n      <button label=\"${Dialog.Accept}\" onclick=\"controller.accept();\"/>\n      <button label=\"${Dialog.Cancel}\" onclick=\"controller.cancel();\"/>\n    </row>\n  </rows>\n</grid>\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/test/java/org/pentaho/big/data/kettle/plugins/sqoop/AbstractSqoopJobEntryTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mock;\nimport org.mockito.Spy;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.big.data.impl.cluster.NamedClusterImpl;\nimport org.pentaho.big.data.kettle.plugins.job.BlockableJobConfig;\nimport org.pentaho.di.core.Result;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.encryption.TwoWayPasswordEncoderPluginType;\nimport org.pentaho.di.core.logging.KettleLogStore;\nimport org.pentaho.di.core.plugins.DatabasePluginType;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.job.Job;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.hadoop.shim.api.HadoopClientServices;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.action.RuntimeTestActionService;\n\nimport java.util.List;\n\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n@RunWith( MockitoJUnitRunner.class )\npublic class 
AbstractSqoopJobEntryTest {\n  @Mock NamedClusterService mockNamedClusterService;\n  @Mock NamedClusterServiceLocator mockNamedClusterServiceLocator;\n  @Mock RuntimeTestActionService mockRuntimeTestActionService;\n  @Mock RuntimeTester mockRuntimeTester;\n  @Mock Job mockJob;\n  @Spy JobMeta mockJobMeta;\n  @Mock DatabaseMeta mockDatabaseMeta;\n  @Mock HadoopClientServices mockHadoopClientServices;\n  NamedClusterImpl namedClusterTemplate = new NamedClusterImpl();\n  NamedClusterImpl namedClusterActual = new NamedClusterImpl();\n  SqoopConfig sqoopConfig;\n  AbstractSqoopJobEntry sqoopJobEntry;\n\n  private static final String CLUSTER_NAME_VARIABLE = \"ClusterNameVariable\";\n  private static final String CLUSTER_NAME = \"MyClusterName\";\n\n  @Before\n  public void setUp() throws Exception {\n    PluginRegistry.addPluginType( TwoWayPasswordEncoderPluginType.getInstance() );\n    PluginRegistry.addPluginType( DatabasePluginType.getInstance() );\n    PluginRegistry.init( false );\n    Encr.init( \"Kettle\" );\n    KettleLogStore.init();\n    namedClusterTemplate.setName( \"${\" + CLUSTER_NAME_VARIABLE + \"}\" );\n    namedClusterActual.setName( CLUSTER_NAME );\n\n    sqoopJobEntry =\n      new AbstractSqoopJobEntry( mockNamedClusterService, mockNamedClusterServiceLocator, mockRuntimeTestActionService,\n        mockRuntimeTester ) {\n\n        @Override protected BlockableJobConfig createJobConfig() {\n          return null;\n        }\n\n        @Override public List<String> getValidationWarnings( BlockableJobConfig config ) {\n          return null;\n        }\n\n        @Override protected String getToolName() {\n          return \"toolName\";\n        }\n      };\n\n    sqoopConfig = new SqoopConfig() {\n      @Override protected NamedCluster createClusterTemplate() {\n        return namedClusterTemplate;\n      }\n    };\n\n    sqoopConfig.setClusterName( \"${\" + CLUSTER_NAME_VARIABLE + \"}\" );\n    sqoopConfig.setJobEntryName( \"jobname\" );\n    \n    
sqoopJobEntry.setParentJob( mockJob );\n    VariableSpace variables = new Variables();\n    variables.setVariable( CLUSTER_NAME_VARIABLE, CLUSTER_NAME );\n\n    mockJobMeta.initializeVariablesFrom( variables );\n    sqoopJobEntry.setParentJobMeta( mockJobMeta );\n    when( mockJob.getJobMeta() ).thenReturn( mockJobMeta );\n    when( mockJobMeta.findDatabase( any() ) ).thenReturn( mockDatabaseMeta );\n    when( mockNamedClusterService.contains( eq( CLUSTER_NAME ), any() ) ).thenReturn( true );\n    when( mockNamedClusterService.read( eq( CLUSTER_NAME ), any() ) ).thenReturn( namedClusterActual );\n\n    when( mockNamedClusterServiceLocator.getService( any(), any() ) ).thenReturn( mockHadoopClientServices );\n    when( mockHadoopClientServices.runSqoop( any(), any() ) ).thenReturn( 0 );\n  }\n\n  @Test\n  public void executeSqoopWithVariableClusterName() throws Exception {\n\n    sqoopJobEntry.setJobConfig( sqoopConfig );\n    Result jobResult = new Result();\n\n    sqoopJobEntry.executeSqoop( jobResult );\n    verify( mockNamedClusterService, times( 1 ) ).read( any(), any() );\n  }\n\n}"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/test/java/org/pentaho/big/data/kettle/plugins/sqoop/PersistentPropertyChangeListener.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport java.beans.PropertyChangeEvent;\nimport java.beans.PropertyChangeListener;\nimport java.util.ArrayList;\nimport java.util.List;\n\n/**\n * Property Change Listener that records all received events, useful for test purposes.\n */\npublic class PersistentPropertyChangeListener implements PropertyChangeListener {\n  private List<PropertyChangeEvent> receivedEvents;\n\n  public PersistentPropertyChangeListener() {\n    receivedEvents = new ArrayList<PropertyChangeEvent>();\n  }\n\n  @Override\n  public void propertyChange( PropertyChangeEvent evt ) {\n    receivedEvents.add( evt );\n  }\n\n  /**\n   * @return every event received by this listener\n   */\n  public List<PropertyChangeEvent> getReceivedEvents() {\n    return receivedEvents;\n  }\n\n  /**\n   * @return only the events that resulted in changed values\n   */\n  public List<PropertyChangeEvent> getReceivedEventsWithChanges() {\n    List<PropertyChangeEvent> events = new ArrayList<PropertyChangeEvent>();\n    for ( PropertyChangeEvent evt : receivedEvents ) {\n      if ( !( evt.getOldValue() == null ? evt.getNewValue() == null : evt.getOldValue().equals( evt.getNewValue() ) ) ) {\n        events.add( evt );\n      }\n    }\n    return events;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/test/java/org/pentaho/big/data/kettle/plugins/sqoop/PropertyFiringObjectTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport org.junit.Test;\nimport org.pentaho.ui.xul.XulEventSource;\n\nimport java.beans.PropertyChangeEvent;\nimport java.lang.reflect.Field;\nimport java.lang.reflect.Method;\nimport java.lang.reflect.Modifier;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.mockito.Mockito.mock;\n\n/**\n * This is a helper class to dynamically test a class' ability to get, set, and fire property change events for all\n * private fields.\n */\npublic class PropertyFiringObjectTest {\n\n  @Test\n  public void testSqoopExportConfig() throws Exception {\n    testPropertyFiringForAllPrivateFieldsOf( new SqoopExportConfig( mock( SqoopExportJobEntry.class ) ) );\n  }\n\n  @Test\n  public void testSqoopImportConfig() throws Exception {\n    testPropertyFiringForAllPrivateFieldsOf( new SqoopImportConfig( mock( SqoopImportJobEntry.class ) ) );\n  }\n\n  /**\n   * Test that all private fields have getter and setter methods, and they work as they should: the getter returns the\n   * value and the setter generates a {@link PropertyChangeEvent} for that property.\n   *\n   * @param o\n   *          instance of event source to test\n   * @throws Exception\n   */\n  private void testPropertyFiringForAllPrivateFieldsOf( XulEventSource object ) throws Exception {\n    // Attach our property change listener to the object\n    PersistentPropertyChangeListener listner = new PersistentPropertyChangeListener();\n    object.addPropertyChangeListener( listner );\n    try {\n    
  Class<?> oClass = object.getClass();\n      for ( Field f : oClass.getDeclaredFields() ) {\n        if ( !Modifier.isPrivate( f.getModifiers() ) || Modifier.isTransient( f.getModifiers() ) ||\n            Modifier.isFinal( f.getModifiers() )) {\n          // Skip non-private or transient fields or final fields\n          continue;\n        }\n\n        String fullFieldName = object.getClass().getSimpleName() + \".\" + f.getName();\n        try {\n          // Clear out the previous run's events if there were any\n          listner.getReceivedEvents().clear();\n\n          String camelcaseFieldName = f.getName().substring( 0, 1 ).toUpperCase() + f.getName().substring( 1 );\n\n          // Grab the getter and setter methods for the field\n          Method getter = SqoopUtils.findMethod( oClass, camelcaseFieldName, null, \"get\", \"is\" );\n          Method setter = SqoopUtils.findMethod( oClass, camelcaseFieldName, new Class<?>[] { f.getType() }, \"set\" );\n\n          // Grab the original value so we can make sure we're changing it so guarantee a PropertyChangeEvent\n          Object originalValue = getter.invoke( object );\n          // Generate a test value to set this property to\n          Object value = getTestValue( f.getType(), originalValue );\n\n          assertFalse( fullFieldName + \": generated value does not differ from original value. 
Please update \"\n              + getClass() + \".getTestValue() to return a different value for \" + f.getType() + \".\", value.equals(\n                  originalValue ) );\n          setter.invoke( object, value );\n          assertEquals( fullFieldName + \": value not get/set properly\", value, getter.invoke( object ) );\n          assertEquals( fullFieldName + \": PropertyChangeEvent not received when changing value\", 1, listner\n              .getReceivedEvents().size() );\n          PropertyChangeEvent evt = listner.getReceivedEvents().get( 0 );\n          assertEquals( fullFieldName + \": fired event with wrong property name\", f.getName(), evt.getPropertyName() );\n          assertEquals( fullFieldName + \": fired event with incorrect old value\", originalValue, evt.getOldValue() );\n          assertEquals( fullFieldName + \": fired event with incorrect new value\", value, evt.getNewValue() );\n        } catch ( Exception ex ) {\n          throw new Exception( \"Error testing getter/setter for \" + fullFieldName, ex );\n        }\n      }\n    } finally {\n      // Remove our listener when we're done\n      object.removePropertyChangeListener( listner );\n    }\n  }\n\n  /**\n   * Get a value to test with that matches the type provided.\n   *\n   * @param type\n   *          Type of test value\n   * @param originalValue\n   *          The test value returned by this method must differ from this object\n   * @return An object of type {@code Type}\n   */\n\n  private Object getTestValue( Class<?> type, Object originalValue ) {\n    if ( String.class.equals( type ) ) {\n      return  String.valueOf( System.currentTimeMillis() );\n    }\n    if ( Boolean.class.equals( type ) ) {\n      // Return the opposite\n      return originalValue == null ? Boolean.TRUE : !(Boolean) originalValue;\n    }\n    //not primitive\n    return mock( type );\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/test/java/org/pentaho/big/data/kettle/plugins/sqoop/SqoopConfigTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport com.google.common.collect.Maps;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mock;\nimport org.mockito.invocation.InvocationOnMock;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.mockito.stubbing.Answer;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.big.data.kettle.plugins.job.JobEntryMode;\nimport org.pentaho.big.data.kettle.plugins.job.PropertyEntry;\nimport org.pentaho.big.data.kettle.plugins.sqoop.util.MockitoAutoBean;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.repository.StringObjectId;\nimport org.pentaho.ui.xul.util.AbstractModelList;\nimport org.w3c.dom.Node;\n\nimport java.beans.PropertyChangeEvent;\nimport java.beans.PropertyChangeListener;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.Iterator;\nimport java.util.List;\nimport java.util.UUID;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertSame;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static 
org.mockito.Mockito.doAnswer;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.reset;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * Test the SqoopConfig functionality not exercised by {@link PropertyFiringObjectTest}\n */\n@RunWith( MockitoJUnitRunner.Silent.class )\npublic class SqoopConfigTest {\n  /**\n   *\n   */\n  private static final String ENTRY_VALUE = \"entryValue\";\n  /**\n   *\n   */\n  private static final String ENTRY_KEY = \"entryKey\";\n  private static final String EMPTY = \"\";\n  public static final String HDFS_HOST = \"hdfsHost\";\n  public static final String HDFS_PORT = \"8020\";\n  public static final String SHIM_IDENTIFIER = \"cdh61\";\n  public static final String JOB_TRACKER_HOST = \"jobTracker\";\n  public static final String JOB_TRACKER_PORT = \"2222\";\n  private NamedCluster namedClusterMock = mock( NamedCluster.class );\n\n  private SqoopConfig config;\n  private NamedCluster template;\n  @Mock Runnable createClusterTemplate;\n\n  @Before\n  public void setUp() throws Exception {\n    config = createSqoopConfig();\n  }\n\n  protected SqoopConfig createSqoopConfig() {\n    return new SqoopConfig() {\n      @Override protected NamedCluster createClusterTemplate() {\n        template = mock( NamedCluster.class );\n\n        MockitoAutoBean<String> hdfsHost = new MockitoAutoBean<>( HDFS_HOST );\n        doAnswer( hdfsHost ).when( template ).getHdfsHost();\n        doAnswer( hdfsHost ).when( template ).setHdfsHost( anyString() );\n\n        MockitoAutoBean<String> hdfsPort = new MockitoAutoBean<>( HDFS_PORT );\n        doAnswer( hdfsPort ).when( template ).getHdfsPort();\n        doAnswer( hdfsPort ).when( template ).setHdfsPort( anyString() );\n\n        MockitoAutoBean<String> shimIdentifier = new MockitoAutoBean<>( SHIM_IDENTIFIER );\n        doAnswer( shimIdentifier ).when( template ).getShimIdentifier();\n        
doAnswer( shimIdentifier ).when( template ).setShimIdentifier( anyString() );\n\n        MockitoAutoBean<String> jobTrackerHost = new MockitoAutoBean<>( JOB_TRACKER_HOST );\n        doAnswer( jobTrackerHost ).when( template ).getJobTrackerHost();\n        doAnswer( jobTrackerHost ).when( template ).setJobTrackerHost( anyString() );\n\n        MockitoAutoBean<String> jobTrackerPort = new MockitoAutoBean<>( JOB_TRACKER_PORT );\n        doAnswer( jobTrackerPort ).when( template ).getJobTrackerPort();\n        doAnswer( jobTrackerPort ).when( template ).setJobTrackerPort( anyString() );\n\n        createClusterTemplate.run();\n        return template;\n      }\n    };\n  }\n\n  @Test\n  public void addRemovePropertyChangeListener() {\n    PersistentPropertyChangeListener l = new PersistentPropertyChangeListener();\n\n    config.addPropertyChangeListener( l );\n    config.setJobEntryName( \"test\" );\n    assertEquals( 1, l.getReceivedEvents().size() );\n    config.removePropertyChangeListener( l );\n    config.setJobEntryName( \"test1\" );\n    assertEquals( 1, l.getReceivedEvents().size() );\n  }\n\n  @Test\n  public void addRemovePropertyChangeListener_propertyName() {\n    PersistentPropertyChangeListener l = new PersistentPropertyChangeListener();\n\n    config.addPropertyChangeListener( \"test\", l );\n    config.setJobEntryName( \"test\" );\n    assertEquals( 0, l.getReceivedEvents().size() );\n    config.removePropertyChangeListener( \"test\", l );\n\n    config.addPropertyChangeListener( SqoopConfig.JOB_ENTRY_NAME, l );\n    config.setJobEntryName( \"test1\" );\n    assertEquals( 1, l.getReceivedEvents().size() );\n    config.removePropertyChangeListener( SqoopConfig.JOB_ENTRY_NAME, l );\n    config.setJobEntryName( \"test2\" );\n    assertEquals( 1, l.getReceivedEvents().size() );\n  }\n\n  @Test\n  public void getAdvancedArgumentsList() {\n    AbstractModelList<ArgumentWrapper> args = config.getAdvancedArgumentsList();\n    assertEquals( 62, args.size() 
);\n\n    PropertyChangeListener l = mock( PropertyChangeListener.class );\n    config.addPropertyChangeListener( l );\n\n    // Make sure we can get and set the value for all arguments returned\n    String value = String.valueOf( System.currentTimeMillis() );\n    for ( ArgumentWrapper arg : args ) {\n      arg.setValue( value );\n      assertEquals( value, arg.getValue() );\n    }\n\n    // We should have received one event for every property changed\n    verify( l, times( 62 ) ).propertyChange( (PropertyChangeEvent) any() );\n  }\n\n  @Test\n  public void testClone() {\n    config.setConnect( SqoopConfig.CONNECT );\n    config.setJobEntryName( SqoopConfig.JOB_ENTRY_NAME );\n\n    SqoopConfig clone = config.clone();\n\n    assertEquals( config.getConnect(), clone.getConnect() );\n    assertEquals( config.getJobEntryName(), clone.getJobEntryName() );\n  }\n\n  @Test\n  public void setDatabaseConnectionInformation() {\n    PersistentPropertyChangeListener l = new PersistentPropertyChangeListener();\n    config.addPropertyChangeListener( l );\n\n    String database = \"bogus\";\n    String connect = \"jdbc:bogus://bogus\";\n    String username = \"bob\";\n    String password = \"uncle\";\n\n    config.setConnectionInfo( database, connect, username, password );\n\n    assertEquals( 0, l.getReceivedEvents().size() );\n    assertEquals( database, config.getDatabase() );\n    assertEquals( connect, config.getConnect() );\n    assertEquals( username, config.getUsername() );\n    assertEquals( password, config.getPassword() );\n  }\n\n  @Test\n  public void numMappers() {\n    String numMappers = \"5\";\n\n    config.setNumMappers( numMappers );\n\n    List<String> args = new ArrayList<String>();\n\n    List<ArgumentWrapper> argumentWrappers = config.getAdvancedArgumentsList();\n\n    ArgumentWrapper arg = null;\n    Iterator<ArgumentWrapper> argIter = argumentWrappers.iterator();\n    while ( arg == null && argIter.hasNext() ) {\n      ArgumentWrapper a = 
argIter.next();\n      if ( a.getName().equals( \"num-mappers\" ) ) {\n        arg = a;\n      }\n    }\n    assertNotNull( arg );\n    SqoopUtils.appendArgument( args, arg, new Variables() );\n    assertEquals( 2, args.size() );\n    assertEquals( \"--num-mappers\", args.get( 0 ) );\n    assertEquals( numMappers, args.get( 1 ) );\n  }\n\n  @Test\n  public void copyConnectionInfoFromAdvanced() {\n    PersistentPropertyChangeListener l = new PersistentPropertyChangeListener();\n    config.addPropertyChangeListener( l );\n\n    String connect = \"connect\";\n    String username = \"username\";\n    String password = \"password\";\n\n    config.setConnectFromAdvanced( connect );\n    config.setUsernameFromAdvanced( username );\n    config.setPasswordFromAdvanced( password );\n\n    assertNull( config.getConnect() );\n    assertNull( config.getUsername() );\n    assertNull( config.getPassword() );\n\n    config.copyConnectionInfoFromAdvanced();\n\n    assertEquals( connect, config.getConnect() );\n    assertEquals( username, config.getUsername() );\n    assertEquals( password, config.getPassword() );\n\n    assertEquals( 0, l.getReceivedEvents().size() );\n  }\n\n  @Test\n  public void copyConnectionInfoToAdvanced() {\n    PersistentPropertyChangeListener l = new PersistentPropertyChangeListener();\n    config.addPropertyChangeListener( l );\n\n    String connect = \"connect\";\n    String username = \"username\";\n    String password = \"password\";\n\n    config.setConnect( connect );\n    config.setUsername( username );\n    config.setPassword( password );\n\n    assertNull( config.getConnectFromAdvanced() );\n    assertNull( config.getUsernameFromAdvanced() );\n    assertNull( config.getPasswordFromAdvanced() );\n\n    config.copyConnectionInfoToAdvanced();\n\n    assertEquals( connect, config.getConnectFromAdvanced() );\n    assertEquals( username, config.getUsernameFromAdvanced() );\n    assertEquals( password, config.getPasswordFromAdvanced() );\n\n    
assertEquals( 3, l.getReceivedEvents().size() );\n    assertEquals( \"connect\", l.getReceivedEvents().get( 0 ).getPropertyName() );\n    assertEquals( \"username\", l.getReceivedEvents().get( 1 ).getPropertyName() );\n    assertEquals( \"password\", l.getReceivedEvents().get( 2 ).getPropertyName() );\n  }\n\n  @Test\n  public void getModeAsEnum() {\n    assertNull( config.getMode() );\n    assertEquals( JobEntryMode.QUICK_SETUP, config.getModeAsEnum() );\n\n    config.setMode( JobEntryMode.ADVANCED_COMMAND_LINE.name() );\n    assertEquals( JobEntryMode.ADVANCED_COMMAND_LINE.name(), config.getMode() );\n    assertEquals( JobEntryMode.ADVANCED_COMMAND_LINE, config.getModeAsEnum() );\n  }\n\n  @Test\n  public void testClusterConfigToXML() throws Exception {\n    String xml = \"<test>\" + config.getClusterXML() + \"</test>\";\n    Node node = XMLHandler.loadXMLString( xml, \"test\" );\n\n    reset( createClusterTemplate );\n    createSqoopConfig().loadClusterConfig( node );\n\n    verify( createClusterTemplate ).run();\n    verify( template ).setHdfsHost( HDFS_HOST );\n    verify( template ).setHdfsPort( HDFS_PORT );\n    verify( template ).setJobTrackerHost( JOB_TRACKER_HOST );\n    verify( template ).setJobTrackerPort( JOB_TRACKER_PORT );\n  }\n\n  @Test\n  public void testClusterConfigToRepo() throws Exception {\n    Repository repository = mock( Repository.class );\n    StringObjectId id_job = new StringObjectId( UUID.randomUUID().toString() );\n    JobEntryInterface jobEntryInterface = mock( JobEntryInterface.class );\n    StringObjectId objectId = new StringObjectId( UUID.randomUUID().toString() );\n    when( jobEntryInterface.getObjectId() ).thenReturn( objectId );\n\n    final HashMap<String, String> properties = Maps.newHashMap();\n    doAnswer( new Answer() {\n      @Override public Object answer( InvocationOnMock invocation ) throws Throwable {\n        Object[] arguments = invocation.getArguments();\n        properties.put( ( (String) arguments[2] ), 
(String) arguments[3] );\n        return null;\n      }\n    } ).when( repository ).saveJobEntryAttribute( eq( id_job ), eq( objectId ), anyString(), anyString() );\n    doAnswer( new Answer() {\n      @Override public String answer( InvocationOnMock invocation ) throws Throwable {\n        return properties.get( invocation.getArguments()[1].toString() );\n      }\n    } ).when( repository ).getJobEntryAttributeString( eq( id_job ), anyString() );\n\n    config.saveClusterConfig( repository, id_job, jobEntryInterface );\n    verify( repository, times( 5 ) ).saveJobEntryAttribute( eq( id_job ), eq( objectId ), anyString(), anyString() );\n\n    reset( createClusterTemplate );\n    createSqoopConfig().loadClusterConfig( repository, id_job );\n\n    verify( createClusterTemplate ).run();\n    verify( template ).setHdfsHost( HDFS_HOST );\n    verify( template ).setHdfsPort( HDFS_PORT );\n    verify( template ).setJobTrackerHost( JOB_TRACKER_HOST );\n    verify( template ).setJobTrackerPort( JOB_TRACKER_PORT );\n  }\n\n  @Test\n  public void testSetNamedCluster() throws Exception {\n    NamedCluster namedCluster = mock( NamedCluster.class );\n    when( namedCluster.getName() ).thenReturn( \"named cluster\" );\n    config.setNamedCluster( namedCluster );\n\n    verify( createClusterTemplate ).run();\n    verify( template ).replaceMeta( namedCluster );\n    assertEquals( \"named cluster\", config.getClusterName() );\n  }\n\n  @Test\n  public void testIsAdvancedClusterConfigSet_ClusterNameNull() throws Exception {\n    when( namedClusterMock.getName() ).thenReturn( null );\n    config.setNamedCluster( namedClusterMock );\n    assertTrue( config.isAdvancedClusterConfigSet() );\n  }\n\n  @Test\n  public void testIsAdvancedClusterConfigSet_ClusterNameEmpty() throws Exception {\n    when( namedClusterMock.getName() ).thenReturn( EMPTY );\n    config.setNamedCluster( namedClusterMock );\n    assertTrue( config.isAdvancedClusterConfigSet() );\n  }\n\n  @Test\n  public void 
testIsAdvancedClusterConfigSet_ClusterNameEmptyOrNullAndAllNcPropertiesNull() throws Exception {\n    when( namedClusterMock.getName() ).thenReturn( null );\n    config.setNamedCluster( namedClusterMock );\n    when( config.getNamedCluster().getHdfsHost() ).thenReturn( null );\n    when( config.getNamedCluster().getHdfsPort() ).thenReturn( null );\n    when( config.getNamedCluster().getJobTrackerHost() ).thenReturn( null );\n    when( config.getNamedCluster().getJobTrackerPort() ).thenReturn( null );\n    assertFalse( config.isAdvancedClusterConfigSet() );\n  }\n\n  @Test\n  public void testIsAdvancedClusterConfigSet_ClusterNameNotNull() throws Exception {\n    when( namedClusterMock.getName() ).thenReturn( \"Cluster Name For Testing\" );\n    config.setNamedCluster( namedClusterMock );\n    assertFalse( config.isAdvancedClusterConfigSet() );\n  }\n\n  @Test\n  public void testNcPropertiesNotNullOrEmpty_AllNotNullNotEmpty() throws Exception {\n    boolean ncPropertiesNotNullOrEmpty = config.ncPropertiesNotNullOrEmpty( config.getNamedCluster() );\n    assertTrue( ncPropertiesNotNullOrEmpty );\n  }\n\n  @Test\n  public void testNcPropertiesNotNullOrEmpty_AllNull() throws Exception {\n    when( config.getNamedCluster().getHdfsHost() ).thenReturn( null );\n    when( config.getNamedCluster().getHdfsPort() ).thenReturn( null );\n    when( config.getNamedCluster().getJobTrackerHost() ).thenReturn( null );\n    when( config.getNamedCluster().getJobTrackerPort() ).thenReturn( null );\n    boolean ncPropertiesNotNullOrEmpty = config.ncPropertiesNotNullOrEmpty( config.getNamedCluster() );\n    assertFalse( ncPropertiesNotNullOrEmpty );\n  }\n\n  @Test\n  public void testNcPropertiesNotNullOrEmpty_AllEmpty() throws Exception {\n    when( config.getNamedCluster().getHdfsHost() ).thenReturn( EMPTY );\n    when( config.getNamedCluster().getHdfsPort() ).thenReturn( EMPTY );\n    when( config.getNamedCluster().getJobTrackerHost() ).thenReturn( EMPTY );\n    when( 
config.getNamedCluster().getJobTrackerPort() ).thenReturn( EMPTY );\n    boolean ncPropertiesNotNullOrEmpty = config.ncPropertiesNotNullOrEmpty( config.getNamedCluster() );\n    assertFalse( ncPropertiesNotNullOrEmpty );\n  }\n\n  @Test\n  public void testNcPropertiesNotNullOrEmpty_HdfsHostOnlyNotNull() throws Exception {\n    when( config.getNamedCluster().getHdfsPort() ).thenReturn( null );\n    when( config.getNamedCluster().getJobTrackerHost() ).thenReturn( null );\n    when( config.getNamedCluster().getJobTrackerPort() ).thenReturn( null );\n    boolean ncPropertiesNotNullOrEmpty = config.ncPropertiesNotNullOrEmpty( config.getNamedCluster() );\n    assertTrue( \"It should be true - HDFS host: \" + config.getNamedCluster().getHdfsHost(), ncPropertiesNotNullOrEmpty );\n  }\n\n  @Test\n  public void testNcPropertiesNotNullOrEmpty_HdfsPortOnlyNotNull() throws Exception {\n    when( config.getNamedCluster().getHdfsHost() ).thenReturn( null );\n    when( config.getNamedCluster().getJobTrackerHost() ).thenReturn( null );\n    when( config.getNamedCluster().getJobTrackerPort() ).thenReturn( null );\n    boolean ncPropertiesNotNullOrEmpty = config.ncPropertiesNotNullOrEmpty( config.getNamedCluster() );\n    assertTrue( \"It should be true - HDFS port: \" + config.getNamedCluster().getHdfsPort(), ncPropertiesNotNullOrEmpty );\n  }\n\n  @Test\n  public void testNcPropertiesNotNullOrEmpty_JobTrackerHostOnlyNotNull() throws Exception {\n    when( config.getNamedCluster().getHdfsHost() ).thenReturn( null );\n    when( config.getNamedCluster().getHdfsPort() ).thenReturn( null );\n    when( config.getNamedCluster().getJobTrackerPort() ).thenReturn( null );\n    boolean ncPropertiesNotNullOrEmpty = config.ncPropertiesNotNullOrEmpty( config.getNamedCluster() );\n    assertTrue( \"It should be true - Job tracker host: \" + config.getNamedCluster().getJobTrackerHost(), ncPropertiesNotNullOrEmpty );\n  }\n\n  @Test\n  public void 
testNcPropertiesNotNullOrEmpty_JobTrackerPortOnlyNotNull() throws Exception {\n    when( config.getNamedCluster().getHdfsHost() ).thenReturn( null );\n    when( config.getNamedCluster().getHdfsPort() ).thenReturn( null );\n    when( config.getNamedCluster().getJobTrackerHost() ).thenReturn( null );\n    boolean ncPropertiesNotNullOrEmpty = config.ncPropertiesNotNullOrEmpty( config.getNamedCluster() );\n    assertTrue( \"It should be true - Job tracker port: \" + config.getNamedCluster().getJobTrackerPort(), ncPropertiesNotNullOrEmpty );\n  }\n\n  @Test\n  public void testSetCustomArguments_GetCustomArguments() throws Exception {\n    PropertyEntry pEntryMock = new PropertyEntry( ENTRY_KEY, ENTRY_VALUE );\n    AbstractModelList<PropertyEntry> customArguments = new AbstractModelList<>();\n    customArguments.add( pEntryMock );\n    assertNotNull( config.getCustomArguments() );\n    assertEquals( 0, config.getCustomArguments().size() );\n    config.setCustomArguments( customArguments );\n    assertSame( customArguments, config.getCustomArguments() );\n    assertEquals( 1, config.getCustomArguments().size() );\n    assertEquals( ENTRY_KEY, config.getCustomArguments().get( 0 ).getKey() );\n    assertEquals( ENTRY_VALUE, config.getCustomArguments().get( 0 ).getValue() );\n  }\n\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/test/java/org/pentaho/big/data/kettle/plugins/sqoop/SqoopLog4jFilterTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.kettle.plugins.sqoop;\n\nimport org.apache.logging.log4j.core.Filter;\nimport org.apache.logging.log4j.core.LogEvent;\nimport org.apache.logging.log4j.util.ReadOnlyStringMap;\nimport org.junit.Test;\n\nimport static org.junit.Assert.*;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\npublic class SqoopLog4jFilterTest {\n\n  @Test\n  public void decide() {\n    String goodLog = \"goodLog\";\n    String badLog = \"badLog\";\n    LogEvent goodEvent = mock( LogEvent.class );\n    ReadOnlyStringMap goodContextData = mock( ReadOnlyStringMap.class );\n    when( goodContextData.getValue( \"logChannelId\" ) ).thenReturn( goodLog );\n    when( goodEvent.getContextData() ).thenReturn( goodContextData );\n    LogEvent badEvent = mock( LogEvent.class );\n    ReadOnlyStringMap badContextData = mock( ReadOnlyStringMap.class );\n    when( badContextData.getValue( \"logChannelId\" ) ).thenReturn( badLog );\n    when( badEvent.getContextData() ).thenReturn( badContextData );\n    Filter f = new SqoopLog4jFilter( goodLog );\n    assertEquals( Filter.Result.NEUTRAL, f.filter( goodEvent ) );\n    assertEquals( Filter.Result.DENY, f.filter( badEvent ) );\n  }\n}"
  },
  {
    "path": "kettle-plugins/sqoop/core/src/test/java/org/pentaho/big/data/kettle/plugins/sqoop/util/MockitoAutoBean.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.big.data.kettle.plugins.sqoop.util;\n\nimport org.mockito.invocation.InvocationOnMock;\nimport org.mockito.stubbing.Answer;\n\nimport java.lang.reflect.Method;\n\n/**\n * @author nhudak\n */\npublic class MockitoAutoBean<T> implements Answer<T> {\n  private T value;\n\n  public MockitoAutoBean() {\n  }\n\n  public MockitoAutoBean( T value ) {\n    this();\n    this.value = value;\n  }\n\n  @SuppressWarnings( \"unchecked\" )\n  @Override public T answer( InvocationOnMock invocation ) throws Throwable {\n    Method method = invocation.getMethod();\n    if ( method.getParameterTypes().length == 1 ) {\n      value = (T) invocation.getArguments()[0];\n    }\n    return value;\n  }\n\n  public T getValue() {\n    return value;\n  }\n\n  public void setValue( T value ) {\n    this.value = value;\n  }\n}\n"
  },
  {
    "path": "kettle-plugins/sqoop/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>pentaho-big-data-kettle-plugins</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pentaho-big-data-kettle-plugins-sqoop</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>a Pentaho open source project</description>\n  <url>http://www.pentaho.com</url>\n\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n\n  <modules>\n    <module>assemblies</module>\n    <module>core</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "legacy/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-parent</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pentaho-big-data-legacy</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>jar</packaging>\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n  <properties>\n    <common.version>3.3.0-v20070426</common.version>\n    <pig.version>0.8.1</pig.version>\n    <jets3t.version>0.9.4</jets3t.version>\n    <hadoop.version>0.20.2</hadoop.version>\n    <hbase.version>0.90.3</hbase.version>\n    <xml-apis.version>2.0.2</xml-apis.version>\n    <commands.version>3.3.0-I20070605-0010</commands.version>\n    <publish-sonar-phase>site</publish-sonar-phase>\n    <high-scale-lib.version>1.1.2</high-scale-lib.version>\n    <jline.version>0.9.94</jline.version>\n    <easymock.version>3.0</easymock.version>\n  </properties>\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy-core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>com.fasterxml.jackson.core</groupId>\n      <artifactId>jackson-databind</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-iam</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          
<artifactId>jmespath-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-core</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.fasterxml.jackson.dataformat</groupId>\n          <artifactId>jackson-dataformat-cbor</artifactId>\n        </exclusion>\n        <exclusion>\n          <groupId>software.amazon.ion</groupId>\n          <artifactId>ion-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-emr</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>jmespath-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-pricing</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>jmespath-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-s3</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>aws-java-sdk-kms</artifactId>\n        </exclusion>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>jmespath-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>joda-time</groupId>\n      <artifactId>joda-time</artifactId>\n      <version>${dependency.joda-time.revision}</version>\n      <exclusions>\n        <exclusion>\n          <artifactId>*</artifactId>\n          <groupId>*</groupId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      
<groupId>org.apache.avro</groupId>\n      <artifactId>avro</artifactId>\n      <version>${org.apache.avro.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.core</groupId>\n      <artifactId>commands</artifactId>\n      <version>${commands.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.equinox</groupId>\n      <artifactId>common</artifactId>\n      <version>${common.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.github.stephenc.high-scale-lib</groupId>\n      <artifactId>high-scale-lib</artifactId>\n      <version>${high-scale-lib.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.httpcomponents</groupId>\n      <artifactId>httpclient</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.httpcomponents</groupId>\n      <artifactId>httpcore</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>net.java.dev.jets3t</groupId>\n      <artifactId>jets3t</artifactId>\n      <version>${jets3t.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>jline</groupId>\n      <artifactId>jline</artifactId>\n      <version>${jline.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.thrift</groupId>\n      <artifactId>libthrift</artifactId>\n      <version>${org.apache.thrift.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-api</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n     
 <artifactId>kettle-ui-swt</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>com.google.guava</groupId>\n      <artifactId>guava</artifactId>\n      <version>${guava.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>commons-configuration</groupId>\n      <artifactId>commons-configuration</artifactId>\n      <version>${dependency.commons-configuration.revision}</version>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.hadoop</groupId>\n      <artifactId>hadoop-core</artifactId>\n      <version>${hadoop.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.jboss.shrinkwrap</groupId>\n      <artifactId>shrinkwrap-impl-base</artifactId>\n      <version>1.0.0-alpha-12</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.pig</groupId>\n      <artifactId>pig</artifactId>\n      <version>${pig.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.easymock</groupId>\n      <artifactId>easymock</artifactId>\n      <version>${easymock.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.hbase</groupId>\n      
<artifactId>hbase</artifactId>\n      <version>${hbase.version}</version>\n      <scope>test</scope>\n      <exclusions>\n        <exclusion>\n          <artifactId>libthrift</artifactId>\n          <groupId>libthrift</groupId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.h2database</groupId>\n      <artifactId>h2</artifactId>\n      <version>${h2.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>xml-apis</groupId>\n      <artifactId>xml-apis</artifactId>\n      <version>${xml-apis.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api-core</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.hadoop</groupId>\n      <artifactId>hadoop-annotations</artifactId>\n      <version>3.4.1</version>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n  <build>\n    <resources>\n      <resource>\n        <filtering>true</filtering>\n        <directory>src/main/resources</directory>\n        <includes>\n          <include>**/*.properties</include>\n        </includes>\n      </resource>\n      <resource>\n        <filtering>false</filtering>\n        <directory>src/main/resources</directory>\n        <excludes>\n          <exclude>**/*.properties</exclude>\n        </excludes>\n      </resource>\n    </resources>\n    <testResources>\n      <testResource>\n        <filtering>true</filtering>\n 
       <directory>src/test/resources</directory>\n        <includes>\n          <include>**/*.properties</include>\n          <include>**/*.log</include>\n        </includes>\n      </testResource>\n    </testResources>\n  </build>\n</project>\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/core/hadoop/HadoopConfigurationInfo.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.core.hadoop;\n\n/**\n * Created by bryan on 8/10/15.\n */\npublic class HadoopConfigurationInfo {\n  private final String id;\n  private final String name;\n  private final boolean isActive;\n  private final boolean willBeActiveAfterRestart;\n\n  public HadoopConfigurationInfo( String id, String name, boolean isActive, boolean willBeActiveAfterRestart ) {\n    this.id = id;\n    this.name = name;\n    this.isActive = isActive;\n    this.willBeActiveAfterRestart = willBeActiveAfterRestart;\n  }\n\n  public String getId() {\n    return id;\n  }\n\n  public String getName() {\n    return name;\n  }\n\n  public boolean isActive() {\n    return isActive;\n  }\n\n  public boolean isWillBeActiveAfterRestart() {\n    return willBeActiveAfterRestart;\n  }\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/core/hadoop/HadoopConfigurationPrompter.java",
    "content": "/*******************************************************************************\n *\n * Pentaho Big Data\n *\n * Copyright (C) 2002-2017 by Hitachi Vantara : http://www.pentaho.com\n *\n *******************************************************************************\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with\n * the License. You may obtain a copy of the License at\n *\n *    http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n *\n ******************************************************************************/\n\npackage org.pentaho.di.core.hadoop;\n\nimport java.util.List;\n\n/**\n * Created by bryan on 8/13/15.\n */\npublic interface HadoopConfigurationPrompter {\n  String getConfigurationSelection( List<HadoopConfigurationInfo> hadoopConfigurationInfos );\n\n  void promptForRestart();\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/core/hadoop/HadoopSpoonPlugin.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.core.hadoop;\n\nimport org.pentaho.di.core.annotations.LifecyclePlugin;\nimport org.pentaho.di.core.gui.GUIOption;\nimport org.pentaho.di.core.lifecycle.LifeEventHandler;\nimport org.pentaho.di.core.lifecycle.LifecycleListener;\n\n@LifecyclePlugin( id = \"HadoopSpoonPlugin\", name = \"Hadoop Spoon Plugin\" )\npublic class HadoopSpoonPlugin implements LifecycleListener, GUIOption<Object> {\n  public static final String PLUGIN_ID = \"HadoopSpoonPlugin\";\n\n  public static final String HDFS_SCHEME = \"hdfs\";\n  public static final String MAPRFS_SCHEME = \"maprfs\";\n\n  public void onStart( LifeEventHandler arg0 ) {\n    //TODO: This class should be removed as a LifecyclePlugin and no longer implement LifecycleListener once dependencies are cleaned up\n  }\n\n  public void onExit( LifeEventHandler arg0 ) {\n    //TODO: This class should be removed as a LifecyclePlugin and no longer implement LifecycleListener once dependencies are cleaned up\n  }\n\n  public String getLabelText() {\n    return null;\n  }\n\n  public Object getLastValue() {\n    return null;\n  }\n\n  public DisplayType getType() {\n    return null;\n  }\n\n  public void setValue( Object value ) {\n  }\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/core/hadoop/NoShimSpecifiedException.java",
    "content": "/*******************************************************************************\n *\n * Pentaho Big Data\n *\n * Copyright (C) 2002-2017 by Hitachi Vantara : http://www.pentaho.com\n *\n *******************************************************************************\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with\n * the License. You may obtain a copy of the License at\n *\n *    http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n *\n ******************************************************************************/\n\npackage org.pentaho.di.core.hadoop;\nimport org.pentaho.hadoop.shim.api.ConfigurationException;\n\n/**\n * Created by bryan on 8/19/15.\n */\npublic class NoShimSpecifiedException extends ConfigurationException {\n  public NoShimSpecifiedException( String message ) {\n    super( message );\n  }\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/core/hadoop/SpoonExtensionPoint.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.core.hadoop;\n\nimport java.util.ResourceBundle;\n\nimport org.pentaho.di.core.logging.LogChannel;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.ui.repository.repositoryexplorer.ControllerInitializationException;\nimport org.pentaho.di.ui.repository.repositoryexplorer.controllers.NamedClustersController;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.spoon.SpoonLifecycleListener;\nimport org.pentaho.di.ui.spoon.SpoonPerspective;\nimport org.pentaho.di.ui.spoon.SpoonPlugin;\nimport org.pentaho.di.ui.spoon.SpoonPluginCategories;\nimport org.pentaho.di.ui.spoon.SpoonPluginInterface;\nimport org.pentaho.di.ui.spoon.XulSpoonResourceBundle;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulException;\n\n@SpoonPluginCategories( { \"repository-explorer\" } ) @SpoonPlugin( id = \"SpoonExtensionPoint\", image = \"\" )\npublic class SpoonExtensionPoint implements SpoonPluginInterface {\n\n  private static Class<?> PKG = SpoonExtensionPoint.class;\n\n  private LogChannelInterface log = new LogChannel( SpoonExtensionPoint.class.getName() );\n\n  private ResourceBundle resourceBundle = new XulSpoonResourceBundle( PKG );\n\n  public void applyToContainer( String category, XulDomContainer container ) throws XulException {\n    try {\n      container.registerClassLoader( getClass().getClassLoader() );\n      container.loadOverlay( \"org/pentaho/di/core/hadoop/explorer-layout-overlay.xul\", resourceBundle );\n      NamedClustersController controller = new 
NamedClustersController();\n      controller.init( Spoon.getInstance().rep );\n      container.addEventHandler( controller );\n    } catch ( XulException e ) {\n      log.logError( e.getMessage() );\n    } catch ( ControllerInitializationException e ) {\n      log.logError( e.getMessage() );\n    }\n  }\n\n  public SpoonLifecycleListener getLifecycleListener() {\n    return null;\n  }\n\n  public SpoonPerspective getPerspective() {\n    return null;\n  }\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/trans/steps/avroinput/AvroInput.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.trans.steps.avroinput;\n\nimport java.io.IOException;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMeta;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStep;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\n\n/**\n * Class providing an input step for reading data from an Avro serialized file. Handles both container files (where the\n * schema is serialized into the file) and schemaless files. In the case of the latter, the user must supply a schema in\n * order to read objects from the file. In the case of the former, a schema can be optionally supplied.\n * \n * Currently supports Avro records, arrays, maps and primitive types. Paths use the \"dot\" notation and \"$\" indicates the\n * root of the object. 
Arrays and maps are accessed via \"[]\" and differ only in that array elements are accessed via\n * zero-based integer indexes and map values are accessed by string keys.\n * \n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n * @version $Revision$\n */\npublic class AvroInput extends BaseStep implements StepInterface {\n\n  protected AvroInputMeta m_meta;\n  protected AvroInputData m_data;\n\n  public AvroInput(\n      StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta, Trans trans ) {\n\n    super( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n  }\n\n  /*\n   * (non-Javadoc)\n   * \n   * @see org.pentaho.di.trans.step.BaseStep#processRow(org.pentaho.di.trans.step .StepMetaInterface,\n   * org.pentaho.di.trans.step.StepDataInterface)\n   */\n  @Override\n  public boolean processRow( StepMetaInterface smi, StepDataInterface sdi ) throws KettleException {\n\n    Object[] currentInputRow = getRow();\n\n    if ( first ) {\n      first = false;\n\n      m_data = (AvroInputData) sdi;\n      m_meta = (AvroInputMeta) smi;\n\n      if ( Const.isEmpty( m_meta.getFilename() ) && !m_meta.getAvroInField() ) {\n        throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.NoAvroFileSpecified\" ) );\n      }\n\n      String readerSchema = m_meta.getSchemaFilename();\n      readerSchema = environmentSubstitute( readerSchema );\n      String avroFieldName = m_meta.getAvroFieldName();\n      avroFieldName = environmentSubstitute( avroFieldName );\n\n      String schemaFieldName = m_meta.getSchemaFieldName();\n      schemaFieldName = environmentSubstitute( schemaFieldName );\n\n      // setup the output row meta\n      RowMetaInterface outRowMeta = null;\n      outRowMeta = getInputRowMeta();\n      if ( outRowMeta != null ) {\n        outRowMeta = outRowMeta.clone();\n      } else {\n        outRowMeta = new RowMeta();\n      }\n\n      int newFieldOffset = outRowMeta.size();\n      
m_data.setOutputRowMeta( outRowMeta );\n      m_meta.getFields( getTransMeta().getBowl(), m_data.getOutputRowMeta(), getStepname(), null, null, this );\n\n      // initialize substitution fields\n      if ( m_meta.getLookupFields() != null && m_meta.getLookupFields().size() > 0 && getInputRowMeta() != null\n          && currentInputRow != null ) {\n        for ( AvroInputMeta.LookupField f : m_meta.getLookupFields() ) {\n          f.init( getInputRowMeta(), this );\n        }\n      }\n\n      if ( m_meta.getAvroInField() ) {\n        // initialize for reading from a field\n        if ( getInputRowMeta() != null ) {\n          m_data.initializeFromFieldDecoding( getTransMeta().getBowl(), avroFieldName, readerSchema,\n              m_meta.getAvroFields(), m_meta.getAvroIsJsonEncoded(), newFieldOffset, m_meta.getSchemaInField(),\n              schemaFieldName, m_meta.getSchemaInFieldIsPath(), m_meta.getCacheSchemasInMemory(),\n              m_meta.getDontComplainAboutMissingFields(), log );\n        }\n      } else {\n        // initialize for reading from a file\n        FileObject fileObject = KettleVFS.getInstance( getTransMeta().getBowl() ).getFileObject(\n          environmentSubstitute( m_meta.getFilename() ), getTransMeta() );\n        m_data.establishFileType( getTransMeta().getBowl(), fileObject, readerSchema, m_meta.getAvroFields(),\n          m_meta.getAvroIsJsonEncoded(), newFieldOffset, m_meta.getDontComplainAboutMissingFields(), log );\n      }\n    }\n\n    if ( !m_meta.getAvroInField() ) {\n      currentInputRow = null;\n    } else {\n      if ( currentInputRow != null ) {\n        // set variables lookup values\n        if ( m_meta.getLookupFields() != null && m_meta.getLookupFields().size() > 0 ) {\n          for ( AvroInputMeta.LookupField f : m_meta.getLookupFields() ) {\n            f.setVariable( this, currentInputRow );\n          }\n        }\n      }\n    }\n\n    Object[][] outputRow = null;\n    try {\n      if ( !m_meta.getAvroInField() || 
getInputRowMeta() != null ) {\n        outputRow = m_data.avroObjectToKettle( getTransMeta().getBowl(), currentInputRow, this );\n      }\n    } catch ( Exception ex ) {\n      if ( getStepMeta().isDoingErrorHandling() ) {\n        String errorDescriptions =\n            BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.ProblemDecodingAvroObject\", ex.getMessage() );\n\n        String errorFields = \"\";\n        RowMetaInterface rowMeta = null;\n        Object[] currentRow = new Object[0];\n        if ( m_meta.getAvroInField() ) {\n          errorFields += m_meta.getAvroFieldName();\n          rowMeta = getInputRowMeta();\n          currentRow = currentInputRow;\n        } else {\n          errorFields = \"Data read from file\";\n          rowMeta = m_data.getOutputRowMeta();\n        }\n        putError( rowMeta, currentRow, 1, errorDescriptions, errorFields, \"AvroInput001\" );\n\n        if ( checkFeedback( getProcessed() ) ) {\n          logBasic( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Message.CheckFeedback\", getProcessed() ) );\n        }\n\n        return true;\n      } else {\n        throw new KettleException( ex.getMessage(), ex );\n      }\n    }\n    if ( outputRow != null ) {\n      // there may be more than one row if the paths contain an array/map\n      // expansion\n      for ( int i = 0; i < outputRow.length; i++ ) {\n        putRow( m_data.getOutputRowMeta(), outputRow[i] );\n\n        if ( log.isRowLevel() ) {\n          log.logRowlevel( toString(), \"Outputted row #\" + getProcessed() + \" : \" + outputRow );\n        }\n      }\n    } else {\n      if ( !m_meta.getAvroInField() ) {\n        try {\n          logBasic( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Message.ClosingFile\" ) );\n          m_data.close();\n        } catch ( IOException ex ) {\n          throw new KettleException( ex.getMessage(), ex );\n        }\n      }\n      setOutputDone();\n      return false;\n    }\n\n    if ( 
checkFeedback( getProcessed() ) ) {\n      logBasic( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Message.CheckFeedback\", getProcessed() ) );\n    }\n\n    return true;\n  }\n\n  /*\n   * (non-Javadoc)\n   * \n   * @see org.pentaho.di.trans.step.BaseStep#setStopped(boolean)\n   */\n  @Override\n  public void setStopped( boolean stopped ) {\n    if ( isStopped() && stopped ) {\n      return;\n    }\n\n    super.setStopped( stopped );\n\n    if ( stopped && !m_meta.getAvroInField() ) {\n      try {\n        logBasic( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Message.ClosingFile\" ) );\n        m_data.close();\n      } catch ( IOException ex ) {\n        logError( ex.getMessage(), ex );\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/trans/steps/avroinput/AvroInputData.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.trans.steps.avroinput;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\nimport org.apache.avro.Schema;\nimport org.apache.avro.file.DataFileStream;\nimport org.apache.avro.generic.GenericContainer;\nimport org.apache.avro.generic.GenericData;\nimport org.apache.avro.generic.GenericData.Record;\nimport org.apache.avro.generic.GenericDatumReader;\nimport org.apache.avro.io.Decoder;\nimport org.apache.avro.io.DecoderFactory;\nimport org.apache.avro.util.Utf8;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.row.RowDataUtil;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.step.BaseStepData;\nimport org.pentaho.di.trans.step.StepDataInterface;\n\n/**\n * Data class for the AvroInput step. Contains methods to determine the type of Avro file (i.e. 
container or just\n * serialized objects), extract all the leaf fields from the object structure described in the schema and convert Avro\n * leaf fields to kettle values.\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n */\npublic class AvroInputData extends BaseStepData implements StepDataInterface {\n\n  /** For logging */\n  protected LogChannelInterface m_log;\n\n  /** The output data format */\n  protected RowMetaInterface m_outputRowMeta;\n\n  /** For reading container files - will be null if file is not a container file */\n  protected DataFileStream m_containerReader;\n\n  /** For reading from files of just serialized objects */\n  protected GenericDatumReader m_datumReader;\n  protected Decoder m_decoder;\n  protected InputStream m_inStream;\n\n  /**\n   * The schema used to write the file - will be null if the file is not a container file\n   */\n  protected Schema m_writerSchema;\n\n  /** The schema to use for extracting values */\n  protected Schema m_schemaToUse;\n\n  /**\n   * The default schema to use (in the case where the schema is in an incoming field and a particular row has a null (or\n   * unparsable/unavailable) schema\n   */\n  protected Schema m_defaultSchema;\n\n  /** The default datum reader (constructed with the default schema) */\n  protected GenericDatumReader m_defaultDatumReader;\n  protected Object m_defaultTopLevelObject;\n\n  /**\n   * Schema cache. Map of strings (actual schema or path to schema) to two element array. 
Element 0 = GenericDatumReader\n   * configured with schema; 1 = top level structure object to use.\n   */\n  protected Map<String, Object[]> m_schemaCache = new HashMap<String, Object[]>();\n\n  /** True if the data to be decoded is json rather than binary */\n  protected boolean m_jsonEncoded;\n\n  /** If the top level is a record */\n  protected Record m_topLevelRecord;\n\n  /** If the top level is an array */\n  protected GenericData.Array m_topLevelArray;\n\n  /** If the top level is a map */\n  protected Map<Utf8, Object> m_topLevelMap;\n\n  protected List<AvroInputMeta.AvroField> m_normalFields;\n  protected AvroArrayExpansion m_expansionHandler;\n\n  /** The index that the decoded fields start at in the output row */\n  protected int m_newFieldOffset;\n\n  /** True if decoding from an incoming field */\n  protected boolean m_decodingFromField;\n\n  /** If decoding from an incoming field, this holds its index */\n  protected int m_fieldToDecodeIndex = -1;\n\n  /** True if schema is in an incoming field */\n  protected boolean m_schemaInField;\n\n  /**\n   * If decoding from an incoming field and schema is in an incoming field, then this holds the schema field's index\n   */\n  protected int m_schemaFieldIndex = -1;\n\n  /**\n   * True if the schema field contains a path to a schema rather than the schema itself\n   */\n  protected boolean m_schemaFieldIsPath;\n\n  /** True if schemas read from incoming fields are to be cached in memory */\n  protected boolean m_cacheSchemas;\n\n  /**\n   * True if null should be output for a field if it is not present in the schema being used (otherwise an exception is\n   * raised)\n   */\n  protected boolean m_dontComplainAboutMissingFields;\n\n  /** Factory for obtaining a decoder */\n  protected DecoderFactory m_factory;\n\n  /**\n   * Cleanses a string path by ensuring that any variable names present in the path do not contain \".\"s (replaces any\n   * dots with underscores).\n   *\n   * @param path\n   *          the 
path to cleanse\n   * @return the cleansed path\n   */\n  public static String cleansePath( String path ) {\n    // look for variables and convert any \".\" to \"_\"\n\n    int index = path.indexOf( \"${\" );\n\n    int endIndex = 0;\n    String tempStr = path;\n    while ( index >= 0 ) {\n      index += 2;\n      endIndex += tempStr.indexOf( \"}\" );\n      if ( endIndex > 0 && endIndex > index + 1 ) {\n        String key = path.substring( index, endIndex );\n\n        String cleanKey = key.replace( '.', '_' );\n        path = path.replace( key, cleanKey );\n      } else {\n        break;\n      }\n\n      if ( endIndex + 1 < path.length() ) {\n        tempStr = path.substring( endIndex + 1, path.length() );\n      } else {\n        break;\n      }\n\n      index = tempStr.indexOf( \"${\" );\n\n      if ( index > 0 ) {\n        index += endIndex;\n      }\n    }\n\n    return path;\n  }\n\n  /**\n   * Inner class that handles a single array/map expansion process. Expands an array or map to multiple Kettle rows.\n   * Delegates to AvroInputMeta.AvroField objects to handle the extraction of leaf primitives.\n   *\n   * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n   * @version $Revision$\n   */\n  protected static class AvroArrayExpansion {\n\n    /** The prefix of the full path that defines the expansion */\n    public String m_expansionPath;\n\n    /**\n     * Subfield objects that handle the processing of the path after the expansion prefix\n     */\n    protected List<AvroInputMeta.AvroField> m_subFields;\n\n    private List<String> m_pathParts;\n    private List<String> m_tempParts;\n\n    protected RowMetaInterface m_outputRowMeta;\n\n    public AvroArrayExpansion( List<AvroInputMeta.AvroField> subFields ) {\n      m_subFields = subFields;\n    }\n\n    /**\n     * Initialize this field by parsing the path etc.\n     *\n     * @throws KettleException\n     *           if a problem occurs\n     */\n    public void init() throws KettleException {\n      if 
( Const.isEmpty( m_expansionPath ) ) {\n        throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.NoPathSet\" ) );\n      }\n      if ( m_pathParts != null ) {\n        return;\n      }\n\n      String expansionPath = cleansePath( m_expansionPath );\n\n      String[] temp = expansionPath.split( \"\\\\.\" );\n      m_pathParts = new ArrayList<String>();\n      for ( String part : temp ) {\n        m_pathParts.add( part );\n      }\n\n      if ( m_pathParts.get( 0 ).equals( \"$\" ) ) {\n        m_pathParts.remove( 0 ); // root record indicator\n      } else if ( m_pathParts.get( 0 ).startsWith( \"$[\" ) ) {\n\n        // strip leading $ off of array\n        String r = m_pathParts.get( 0 ).substring( 1, m_pathParts.get( 0 ).length() );\n        m_pathParts.set( 0, r );\n      }\n      m_tempParts = new ArrayList<String>();\n\n      // initialize the sub fields\n      if ( m_subFields != null ) {\n        for ( AvroInputMeta.AvroField f : m_subFields ) {\n          int outputIndex = m_outputRowMeta.indexOfValue( f.m_fieldName );\n          f.init( outputIndex );\n        }\n      }\n    }\n\n    /**\n     * Reset this field. 
Should be called prior to processing a new field value from the avro file\n     *\n     * @param space\n     *          environment variables (values that environment variables resolve to cannot contain \".\"s)\n     */\n    public void reset( VariableSpace space ) {\n      m_tempParts.clear();\n\n      for ( String part : m_pathParts ) {\n        m_tempParts.add( space.environmentSubstitute( part ) );\n      }\n\n      // reset sub fields\n      for ( AvroInputMeta.AvroField f : m_subFields ) {\n        f.reset( space );\n      }\n    }\n\n    /**\n     * Processes a map at this point in the path.\n     *\n     * @param map\n     *          the map to process\n     * @param s\n     *          the current schema at this point in the path\n     * @param space\n     *          environment variables\n     * @param ignoreMissing\n     *          true if null is to be returned for user fields that don't appear in the schema\n     * @return an array of Kettle rows corresponding to the expanded map/array and containing all leaf values as defined\n     *         in the paths\n     * @throws KettleException\n     *           if a problem occurs\n     */\n    public Object[][] convertToKettleValues(\n        Map<Utf8, Object> map, Schema s, Schema defaultSchema, VariableSpace space,  boolean ignoreMissing ) throws KettleException {\n\n      if ( map == null ) {\n        return null;\n      }\n\n      if ( m_tempParts.size() == 0 ) {\n        throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.MalformedPathMap\" ) );\n      }\n\n      String part = m_tempParts.remove( 0 );\n      if ( !( part.charAt( 0 ) == '[' ) ) {\n        throw new KettleException( BaseMessages\n            .getString( AvroInputMeta.PKG, \"AvroInput.Error.MalformedPathMap2\", part ) );\n      }\n\n      String key = part.substring( 1, part.indexOf( ']' ) );\n\n      if ( part.indexOf( ']' ) < part.length() - 1 ) {\n        // more dimensions to the array/map\n        
part = part.substring( part.indexOf( ']' ) + 1, part.length() );\n        m_tempParts.add( 0, part );\n      }\n\n      if ( key.equals( \"*\" ) ) {\n        // start the expansion - we delegate conversion to our subfields\n        Schema valueType = s.getValueType();\n        Object[][] result = new Object[map.keySet().size()][m_outputRowMeta.size() + RowDataUtil.OVER_ALLOCATE_SIZE];\n\n        int i = 0;\n        for ( Utf8 mk : map.keySet() ) {\n          Object value = map.get( mk );\n\n          for ( int j = 0; j < m_subFields.size(); j++ ) {\n            AvroInputMeta.AvroField sf = m_subFields.get( j );\n            sf.reset( space );\n\n            // what have we got\n            if ( valueType.getType() == Schema.Type.RECORD ) {\n              result[i][sf.m_outputIndex] =\n                  sf.convertToKettleValue( (Record) value, valueType, defaultSchema, ignoreMissing );\n            } else if ( valueType.getType() == Schema.Type.ARRAY ) {\n              result[i][sf.m_outputIndex] =\n                  sf.convertToKettleValue( (GenericData.Array) value, valueType, defaultSchema, ignoreMissing );\n            } else if ( valueType.getType() == Schema.Type.MAP ) {\n              result[i][sf.m_outputIndex] =\n                  sf.convertToKettleValue( (Map<Utf8, Object>) value, valueType, defaultSchema, ignoreMissing );\n            } else {\n              // assume a primitive\n              result[i][sf.m_outputIndex] = sf.getPrimitive( value, valueType );\n            }\n          }\n          i++; // next row\n        }\n\n        return result;\n      } else {\n        Object value = map.get( new Utf8( key ) );\n\n        if ( value == null ) {\n          // key doesn't exist in map\n          Object[][] result = new Object[1][m_outputRowMeta.size() + RowDataUtil.OVER_ALLOCATE_SIZE];\n\n          for ( int i = 0; i < m_subFields.size(); i++ ) {\n            AvroInputMeta.AvroField sf = m_subFields.get( i );\n            result[0][sf.m_outputIndex] 
= null;\n          }\n\n          return result;\n        }\n\n        Schema valueType = s.getValueType();\n        if ( valueType.getType() == Schema.Type.UNION ) {\n          if ( value instanceof GenericContainer ) {\n            // we can ask these things for their schema (covers\n            // records, arrays, enums and fixed)\n            valueType = ( (GenericContainer) value ).getSchema();\n          } else {\n            // either have a map or primitive here\n            if ( value instanceof Map ) {\n              // now have to look for the schema of the map\n              Schema mapSchema = null;\n              for ( Schema ts : valueType.getTypes() ) {\n                if ( ts.getType() == Schema.Type.MAP ) {\n                  mapSchema = ts;\n                  break;\n                }\n              }\n              if ( mapSchema == null ) {\n                throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n                    \"AvroInput.Error.UnableToFindSchemaForUnionMap\" ) );\n              }\n              valueType = mapSchema;\n            } else {\n              // We shouldn't have a primitive here\n              if ( !ignoreMissing ) {\n                throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n                    \"AvroInput.Error.EncounteredAPrimitivePriorToMapExpansion\" ) );\n              }\n              Object[][] result = new Object[1][m_outputRowMeta.size() + RowDataUtil.OVER_ALLOCATE_SIZE];\n              return result;\n            }\n          }\n        }\n\n        // what have we got?\n        if ( valueType.getType() == Schema.Type.RECORD ) {\n          return convertToKettleValues( (Record) value, valueType, defaultSchema, space, ignoreMissing );\n        } else if ( valueType.getType() == Schema.Type.ARRAY ) {\n          return convertToKettleValues( (GenericData.Array) value, valueType, defaultSchema, space, ignoreMissing );\n        } else if ( valueType.getType() == 
Schema.Type.MAP ) {\n          return convertToKettleValues( (Map<Utf8, Object>) value, valueType, defaultSchema, space, ignoreMissing );\n        } else {\n          // we shouldn't have a primitive at this point. If we are\n          // extracting a particular key from the map then we're not to the\n          // expansion phase,\n          // so normally there must be a non-primitive sub-structure. Only if\n          // the user is switching schema versions on a per-row basis or the\n          // schema is a union at the top level could we end up here\n          if ( !ignoreMissing ) {\n            throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n                \"AvroInput.Error.UnexpectedMapValueTypeAtNonExpansionPoint\" ) );\n          }\n          Object[][] result = new Object[1][m_outputRowMeta.size() + RowDataUtil.OVER_ALLOCATE_SIZE];\n          return result;\n        }\n      }\n    }\n\n    /**\n     * Processes an array at this point in the path.\n     *\n     * @param array\n     *          the array to process\n     * @param s\n     *          the current schema at this point in the path\n     * @param space\n     *          environment variables\n     * @param ignoreMissing\n     *          true if null is to be returned for user fields that don't appear in the schema\n     * @return an array of Kettle rows corresponding to the expanded map/array and containing all leaf values as defined\n     *         in the paths\n     * @throws KettleException\n     *           if a problem occurs\n     */\n    public Object[][] convertToKettleValues( GenericData.Array array, Schema s, Schema defaultSchema, VariableSpace space,\n        boolean ignoreMissing ) throws KettleException {\n\n      if ( array == null ) {\n        return null;\n      }\n\n      if ( m_tempParts.size() == 0 ) {\n        throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.MalformedPathArray\" ) );\n      }\n\n      String part = 
m_tempParts.remove( 0 );\n      if ( !( part.charAt( 0 ) == '[' ) ) {\n        throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.MalformedPathArray2\",\n            part ) );\n      }\n\n      String index = part.substring( 1, part.indexOf( ']' ) );\n\n      if ( part.indexOf( ']' ) < part.length() - 1 ) {\n        // more dimensions to the array\n        part = part.substring( part.indexOf( ']' ) + 1, part.length() );\n        m_tempParts.add( 0, part );\n      }\n\n      if ( index.equals( \"*\" ) ) {\n        // start the expansion - we delegate conversion to our subfields\n\n        Schema elementType = s.getElementType();\n        Object[][] result = new Object[array.size()][m_outputRowMeta.size() + RowDataUtil.OVER_ALLOCATE_SIZE];\n\n        for ( int i = 0; i < array.size(); i++ ) {\n          Object value = array.get( i );\n\n          for ( int j = 0; j < m_subFields.size(); j++ ) {\n            AvroInputMeta.AvroField sf = m_subFields.get( j );\n            sf.reset( space );\n            // what have we got\n            if ( elementType.getType() == Schema.Type.RECORD ) {\n              result[i][sf.m_outputIndex] =\n                  sf.convertToKettleValue( (Record) value, elementType, defaultSchema, ignoreMissing );\n            } else if ( elementType.getType() == Schema.Type.ARRAY ) {\n              result[i][sf.m_outputIndex] =\n                  sf.convertToKettleValue( (GenericData.Array) value, elementType, defaultSchema, ignoreMissing );\n            } else if ( elementType.getType() == Schema.Type.MAP ) {\n              result[i][sf.m_outputIndex] =\n                  sf.convertToKettleValue( (Map<Utf8, Object>) value, elementType, defaultSchema, ignoreMissing );\n            } else {\n              // assume a primitive\n              result[i][sf.m_outputIndex] = sf.getPrimitive( value, elementType );\n            }\n          }\n\n        }\n        return result;\n      } else {\n        int arrayI = 
0;\n        try {\n          arrayI = Integer.parseInt( index.trim() );\n        } catch ( NumberFormatException e ) {\n          throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n              \"AvroInput.Error.UnableToParseArrayIndex\", index ) );\n        }\n\n        if ( arrayI >= array.size() || arrayI < 0 ) {\n\n          // index is out of bounds\n          Object[][] result = new Object[1][m_outputRowMeta.size() + RowDataUtil.OVER_ALLOCATE_SIZE];\n          for ( int i = 0; i < m_subFields.size(); i++ ) {\n            AvroInputMeta.AvroField sf = m_subFields.get( i );\n            result[0][sf.m_outputIndex] = null;\n          }\n\n          return result;\n        }\n\n        Object value = array.get( arrayI );\n        Schema elementType = s.getElementType();\n\n        if ( elementType.getType() == Schema.Type.UNION ) {\n          if ( value instanceof GenericContainer ) {\n            // we can ask these things for their schema (covers\n            // records, arrays, enums and fixed)\n            elementType = ( (GenericContainer) value ).getSchema();\n          } else {\n            // either have a map or primitive here\n            if ( value instanceof Map ) {\n              // now have to look for the schema of the map\n              Schema mapSchema = null;\n              for ( Schema ts : elementType.getTypes() ) {\n                if ( ts.getType() == Schema.Type.MAP ) {\n                  mapSchema = ts;\n                  break;\n                }\n              }\n              if ( mapSchema == null ) {\n                throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n                    \"AvroInput.Error.UnableToFindSchemaForUnionMap\" ) );\n              }\n              elementType = mapSchema;\n            } else {\n              // We shouldn't have a primitive here\n              if ( !ignoreMissing ) {\n                throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n 
                    \"AvroInput.Error.EncounteredAPrimitivePriorToMapExpansion\" ) );\n              }\n              Object[][] result = new Object[1][m_outputRowMeta.size() + RowDataUtil.OVER_ALLOCATE_SIZE];\n              return result;\n            }\n          }\n        }\n\n        // what have we got?\n        if ( elementType.getType() == Schema.Type.RECORD ) {\n          return convertToKettleValues( (Record) value, elementType, defaultSchema, space, ignoreMissing );\n        } else if ( elementType.getType() == Schema.Type.ARRAY ) {\n          return convertToKettleValues( (GenericData.Array) value, elementType, defaultSchema, space, ignoreMissing );\n        } else if ( elementType.getType() == Schema.Type.MAP ) {\n          return convertToKettleValues( (Map<Utf8, Object>) value, elementType, defaultSchema, space, ignoreMissing );\n        } else {\n          // we shouldn't have a primitive at this point. If we are\n          // extracting a particular index from the array then we're not yet at the\n          // expansion phase,\n          // so normally there must be a non-primitive sub-structure. 
Only if\n          // the user is switching schema versions on a per-row basis or the\n          // schema is a union at the top level could we end up here\n          if ( !ignoreMissing ) {\n            throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n                \"AvroInput.Error.UnexpectedArrayElementTypeAtNonExpansionPoint\" ) );\n          } else {\n            Object[][] result = new Object[1][m_outputRowMeta.size() + RowDataUtil.OVER_ALLOCATE_SIZE];\n            return result;\n          }\n        }\n      }\n    }\n\n    /**\n     * Processes a record at this point in the path.\n     *\n     * @param record\n     *          the record to process\n     * @param s\n     *          the current schema at this point in the path\n     * @param defaultSchema\n     *          the default schema to fall back on\n     * @param space\n     *          environment variables\n     * @param ignoreMissing\n     *          true if null is to be returned for user fields that don't appear in the schema\n     * @return an array of Kettle rows corresponding to the expanded map/array and containing all leaf values as defined\n     *         in the paths\n     * @throws KettleException\n     *           if a problem occurs\n     */\n    public Object[][] convertToKettleValues( Record record, Schema s, Schema defaultSchema, VariableSpace space,\n        boolean ignoreMissing ) throws KettleException {\n\n      if ( record == null ) {\n        return null;\n      }\n\n      if ( m_tempParts.size() == 0 ) {\n        throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.MalformedPathRecord\" ) );\n      }\n\n      String part = m_tempParts.remove( 0 );\n      if ( part.charAt( 0 ) == '[' ) {\n        throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.InvalidPath\" )\n            + m_tempParts );\n      }\n\n      if ( part.indexOf( '[' ) > 0 ) {\n        String arrayPart = part.substring( part.indexOf( '[' ) );\n        part = part.substring( 0, part.indexOf( '[' ) 
);\n\n        // put the array section back into location zero\n        m_tempParts.add( 0, arrayPart );\n      }\n\n      // part is a named field of the record\n      Schema.Field fieldS = s.getField( part );\n\n      if ( fieldS == null ) {\n        if ( !ignoreMissing ) {\n          throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.NonExistentField\",\n              part ) );\n        }\n      }\n\n      Object field = record.get( part );\n\n      if ( field == null ) {\n        // field is null and we haven't hit the expansion yet. There will be\n        // nothing\n        // to return for all the sub-fields grouped in the expansion\n        Object[][] result = new Object[1][m_outputRowMeta.size() + RowDataUtil.OVER_ALLOCATE_SIZE];\n        return result;\n      }\n\n      Schema.Type fieldT = fieldS.schema().getType();\n      Schema fieldSchema = fieldS.schema();\n      if ( fieldT == Schema.Type.UNION ) {\n        if ( field instanceof GenericContainer ) {\n          // we can ask these things for their schema (covers\n          // records, arrays, enums and fixed)\n          fieldSchema = ( (GenericContainer) field ).getSchema();\n          fieldT = fieldSchema.getType();\n        } else {\n          // either have a map or primitive here\n          if ( field instanceof Map ) {\n            // now have to look for the schema of the map\n            Schema mapSchema = null;\n            for ( Schema ts : fieldSchema.getTypes() ) {\n              if ( ts.getType() == Schema.Type.MAP ) {\n                mapSchema = ts;\n                break;\n              }\n            }\n            if ( mapSchema == null ) {\n              throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n                  \"AvroInput.Error.UnableToFindSchemaForUnionMap\" ) );\n            }\n            fieldSchema = mapSchema;\n            fieldT = Schema.Type.MAP;\n          } else {\n            // We shouldn't have a 
primitive here\n            if ( !ignoreMissing ) {\n              throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n                  \"AvroInput.Error.EncounteredAPrimitivePriorToMapExpansion\" ) );\n            }\n            Object[][] result = new Object[1][m_outputRowMeta.size() + RowDataUtil.OVER_ALLOCATE_SIZE];\n            return result;\n          }\n        }\n      }\n\n      // what have we got?\n      if ( fieldT == Schema.Type.RECORD ) {\n        return convertToKettleValues( (Record) field, fieldSchema, defaultSchema, space, ignoreMissing );\n      } else if ( fieldT == Schema.Type.ARRAY ) {\n        return convertToKettleValues( (GenericData.Array) field, fieldSchema, defaultSchema, space, ignoreMissing );\n      } else if ( fieldT == Schema.Type.MAP ) {\n\n        return convertToKettleValues( (Map<Utf8, Object>) field, fieldSchema, defaultSchema, space, ignoreMissing );\n      } else {\n        // primitives will always be handled by the subField delegates, so we\n        // shouldn't\n        // get here\n        throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n            \"AvroInput.Error.UnexpectedRecordFieldTypeAtNonExpansionPoint\" ) );\n      }\n    }\n  }\n\n  /**\n   * Get the output row format\n   *\n   * @return the output row format\n   */\n  public RowMetaInterface getOutputRowMeta() {\n    return m_outputRowMeta;\n  }\n\n  /**\n   * Helper function that creates a field object once we've reached a leaf in the schema.\n   *\n   * @param path\n   *          the path so far\n   * @param s\n   *          the schema for the primitive\n   * @param namePrefix\n   *          the name prefix to use for the field name (overrides the path when supplied)\n   * @return an avro field object.\n   */\n  protected static AvroInputMeta.AvroField createAvroField( String path, Schema s, String namePrefix ) {\n    AvroInputMeta.AvroField newField = new AvroInputMeta.AvroField();\n    // newField.m_fieldName = s.getName(); // this will set the name to the\n    // primitive type if the schema is for a primitive\n    String 
fieldName = path;\n    if ( !Const.isEmpty( namePrefix ) ) {\n      fieldName = namePrefix;\n    }\n    newField.m_fieldName = fieldName; // set the name to the path, so that for\n    // primitives within arrays we can at least\n    // distinguish among them\n    newField.m_fieldPath = path;\n    switch ( s.getType() ) {\n      case BOOLEAN:\n        newField.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_BOOLEAN );\n        break;\n      case ENUM:\n      case STRING:\n        newField.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n        if ( s.getType() == Schema.Type.ENUM ) {\n          newField.m_indexedVals = s.getEnumSymbols();\n        }\n\n        break;\n      case FLOAT:\n      case DOUBLE:\n        newField.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_NUMBER );\n        break;\n      case INT:\n      case LONG:\n        newField.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_INTEGER );\n        break;\n      case BYTES:\n      case FIXED:\n        newField.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_BINARY );\n        break;\n      default:\n        // unhandled type\n        newField = null;\n    }\n\n    return newField;\n  }\n\n  /**\n   * Helper function that checks the validity of a union. 
We can only handle unions that contain two elements: a type\n   * and null.\n   *\n   * @param s\n   *          the union schema to check\n   * @return the type of the element that is not null.\n   * @throws KettleException\n   *           if a problem occurs.\n   */\n  protected static Schema checkUnion( Schema s ) throws KettleException {\n    boolean ok = false;\n    List<Schema> types = s.getTypes();\n\n    // the type other than null\n    Schema otherSchema = null;\n\n    if ( types.size() != 2 ) {\n      throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.UnionError1\" ) );\n    }\n\n    for ( Schema p : types ) {\n      if ( p.getType() == Schema.Type.NULL ) {\n        ok = true;\n      } else {\n        otherSchema = p;\n      }\n    }\n\n    if ( !ok ) {\n      throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.UnionError2\" ) );\n    }\n\n    return otherSchema;\n  }\n\n  /**\n   * Check the supplied union for primitive/leaf types\n   *\n   * @param s\n   *          the union schema to check\n   * @return a list of primitive/leaf types in this union\n   */\n  protected static List<Schema> checkUnionForLeafTypes( Schema s ) {\n\n    List<Schema> types = s.getTypes();\n    List<Schema> primitives = new ArrayList<Schema>();\n\n    for ( Schema toCheck : types ) {\n      switch ( toCheck.getType() ) {\n        case BOOLEAN:\n        case LONG:\n        case DOUBLE:\n        case BYTES:\n        case ENUM:\n        case STRING:\n        case INT:\n        case FLOAT:\n        case FIXED:\n          primitives.add( toCheck );\n          break;\n      }\n    }\n\n    return primitives;\n  }\n\n  /**\n   * Builds a list of field objects holding paths corresponding to the leaf primitives in an Avro schema.\n   *\n   * @param s\n   *          the schema to process\n   * @return a List of field objects\n   * @throws KettleException\n   *           if a problem occurs\n   */\n  protected static 
List<AvroInputMeta.AvroField> getLeafFields( Schema s ) throws KettleException {\n    List<AvroInputMeta.AvroField> fields = new ArrayList<AvroInputMeta.AvroField>();\n\n    String root = \"$\";\n\n    if ( s.getType() == Schema.Type.ARRAY || s.getType() == Schema.Type.MAP ) {\n      while ( s.getType() == Schema.Type.ARRAY || s.getType() == Schema.Type.MAP ) {\n        if ( s.getType() == Schema.Type.ARRAY ) {\n          root += \"[0]\";\n          s = s.getElementType();\n        } else {\n          root += \"[*key*]\";\n          s = s.getValueType();\n        }\n      }\n    }\n\n    if ( s.getType() == Schema.Type.RECORD ) {\n      processRecord( root, s, fields, root );\n    } else if ( s.getType() == Schema.Type.UNION ) {\n      processUnion( root, s, fields, root );\n    } else {\n\n      // our top-level array/map structure bottoms out with primitive types\n      // we'll create one zero-indexed path through to a primitive - the\n      // user can copy and paste the path if they want to extract other\n      // indexes out to separate Kettle fields\n      AvroInputMeta.AvroField newField = createAvroField( root, s, null );\n      if ( newField != null ) {\n        fields.add( newField );\n      }\n    }\n\n    return fields;\n  }\n\n  /**\n   * Helper function used to build paths automatically when extracting leaf fields from a schema\n   *\n   * @param path\n   *          the path so far\n   * @param s\n   *          the schema\n   * @param fields\n   *          a list of field objects that will correspond to leaf primitives\n   * @throws KettleException\n   *           if a problem occurs\n   */\n  protected static void processUnion( String path, Schema s, List<AvroInputMeta.AvroField> fields, String namePrefix )\n    throws KettleException {\n\n    boolean topLevelUnion = path.equals( \"$\" );\n\n    // first check for the presence of primitive/leaf types in this union\n    List<Schema> primitives = checkUnionForLeafTypes( s );\n    if ( 
primitives.size() > 0 ) {\n      // if there is exactly one primitive then we can set the kettle type\n      // to match this primitive's type. If there is more than one primitive\n      // then we'll have to use String to cover them all\n      if ( primitives.size() == 1 ) {\n        Schema single = primitives.get( 0 );\n        namePrefix = topLevelUnion ? single.getName() : namePrefix;\n        AvroInputMeta.AvroField newField = createAvroField( path, single, namePrefix );\n        if ( newField != null ) {\n          fields.add( newField );\n        }\n      } else {\n        Schema stringS = Schema.create( Schema.Type.STRING );\n        AvroInputMeta.AvroField newField =\n            createAvroField( path, stringS, topLevelUnion ? path + \"union:primitive/fixed\" : namePrefix );\n        if ( newField != null ) {\n          fields.add( newField );\n        }\n      }\n    }\n\n    // now scan for arrays, maps and records. Unions may not immediately contain\n    // other unions (according to the spec)\n    for ( Schema toCheck : s.getTypes() ) {\n      if ( toCheck.getType() == Schema.Type.RECORD ) {\n        String recordName = \"[u:\" + toCheck.getName() + \"]\";\n\n        processRecord( path, toCheck, fields, namePrefix + recordName );\n      } else if ( toCheck.getType() == Schema.Type.MAP ) {\n        processMap( path + \"[*key*]\", toCheck, fields, namePrefix + \"[*key*]\" );\n      } else if ( toCheck.getType() == Schema.Type.ARRAY ) {\n        processArray( path + \"[0]\", toCheck, fields, namePrefix + \"[0]\" );\n      }\n    }\n  }\n\n  /**\n   * Helper function used to build paths automatically when extracting leaf fields from a schema\n   *\n   * @param path\n   *          the path so far\n   * @param s\n   *          the schema\n   * @param fields\n   *          a list of field objects that will correspond to leaf primitives\n   * @throws KettleException\n   *           if a problem occurs\n   */\n  protected static void processRecord( String path, 
Schema s, List<AvroInputMeta.AvroField> fields, String namePrefix )\n    throws KettleException {\n\n    List<Schema.Field> recordFields = s.getFields();\n    for ( Schema.Field rField : recordFields ) {\n      Schema rSchema = rField.schema();\n      /*\n       * if (rSchema.getType() == Schema.Type.UNION) { rSchema = checkUnion(rSchema); }\n       */\n\n      if ( rSchema.getType() == Schema.Type.UNION ) {\n        processUnion( path + \".\" + rField.name(), rSchema, fields, namePrefix + \".\" + rField.name() );\n      } else if ( rSchema.getType() == Schema.Type.RECORD ) {\n        processRecord( path + \".\" + rField.name(), rSchema, fields, namePrefix + \".\" + rField.name() );\n      } else if ( rSchema.getType() == Schema.Type.ARRAY ) {\n        processArray( path + \".\" + rField.name() + \"[0]\", rSchema, fields, namePrefix + \".\" + rField.name() + \"[0]\" );\n      } else if ( rSchema.getType() == Schema.Type.MAP ) {\n        processMap( path + \".\" + rField.name() + \"[*key*]\", rSchema, fields, namePrefix + \".\" + rField.name()\n            + \"[*key*]\" );\n      } else {\n        // primitive\n        AvroInputMeta.AvroField newField =\n            createAvroField( path + \".\" + rField.name(), rSchema, namePrefix + \".\" + rField.name() );\n        if ( newField != null ) {\n          fields.add( newField );\n        }\n      }\n    }\n  }\n\n  /**\n   * Helper function used to build paths automatically when extracting leaf fields from a schema\n   *\n   * @param path\n   *          the path so far\n   * @param s\n   *          the schema\n   * @param fields\n   *          a list of field objects that will correspond to leaf primitives\n   * @throws KettleException\n   *           if a problem occurs\n   */\n  protected static void processMap( String path, Schema s, List<AvroInputMeta.AvroField> fields, String namePrefix )\n    throws KettleException {\n\n    s = s.getValueType(); // type of the values of the map\n\n    if ( s.getType() == 
Schema.Type.UNION ) {\n      processUnion( path, s, fields, namePrefix );\n    } else if ( s.getType() == Schema.Type.ARRAY ) {\n      processArray( path + \"[0]\", s, fields, namePrefix + \"[0]\" );\n    } else if ( s.getType() == Schema.Type.RECORD ) {\n      processRecord( path, s, fields, namePrefix );\n    } else if ( s.getType() == Schema.Type.MAP ) {\n      processMap( path + \"[*key*]\", s, fields, namePrefix + \"[*key*]\" );\n    } else {\n      AvroInputMeta.AvroField newField = createAvroField( path, s, namePrefix );\n      if ( newField != null ) {\n        fields.add( newField );\n      }\n    }\n  }\n\n  /**\n   * Helper function used to build paths automatically when extracting leaf fields from a schema\n   *\n   * @param path\n   *          the path so far\n   * @param s\n   *          the schema\n   * @param fields\n   *          a list of field objects that will correspond to leaf primitives\n   * @throws KettleException\n   *           if a problem occurs\n   */\n  protected static void processArray( String path, Schema s, List<AvroInputMeta.AvroField> fields, String namePrefix )\n    throws KettleException {\n\n    s = s.getElementType(); // type of the array elements\n\n    if ( s.getType() == Schema.Type.UNION ) {\n      processUnion( path, s, fields, namePrefix );\n    } else if ( s.getType() == Schema.Type.ARRAY ) {\n      processArray( path + \"[0]\", s, fields, namePrefix );\n    } else if ( s.getType() == Schema.Type.RECORD ) {\n      processRecord( path, s, fields, namePrefix );\n    } else if ( s.getType() == Schema.Type.MAP ) {\n      processMap( path + \"[*key*]\", s, fields, namePrefix + \"[*key*]\" );\n    } else {\n      AvroInputMeta.AvroField newField = createAvroField( path, s, namePrefix );\n      if ( newField != null ) {\n        fields.add( newField );\n      }\n    }\n  }\n\n  /**\n   * Load a schema from a file\n   *\n   * @param schemaFile\n   *          the file to load from\n   * @return the schema\n   * @throws 
KettleException\n   *           if a problem occurs\n   */\n  protected static Schema loadSchema( Bowl bowl, String schemaFile ) throws KettleException {\n\n    Schema s = null;\n    Schema.Parser p = new Schema.Parser();\n\n    FileObject fileO = KettleVFS.getInstance( bowl ).getFileObject( schemaFile );\n    InputStream in = null;\n    try {\n      in = KettleVFS.getInputStream( fileO );\n      s = p.parse( in );\n    } catch ( FileSystemException e ) {\n      throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.SchemaError\" ), e );\n    } catch ( IOException e ) {\n      throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.SchemaError\" ), e );\n    } finally {\n      if ( in != null ) {\n        try {\n          in.close();\n        } catch ( IOException e ) {\n          // ignore errors on close\n        }\n      }\n    }\n\n    return s;\n  }\n\n  /**\n   * Load a schema from an Avro container file\n   *\n   * @param containerFilename\n   *          the name of the Avro container file\n   * @return the schema\n   * @throws KettleException\n   *           if a problem occurs\n   */\n  protected static Schema loadSchemaFromContainer( Bowl bowl, String containerFilename ) throws KettleException {\n    Schema s = null;\n\n    FileObject fileO = KettleVFS.getInstance( bowl ).getFileObject( containerFilename );\n    InputStream in = null;\n\n    try {\n      in = KettleVFS.getInputStream( fileO );\n      GenericDatumReader dr = new GenericDatumReader();\n      DataFileStream reader = new DataFileStream( in, dr );\n      s = reader.getSchema();\n\n      reader.close();\n    } catch ( FileSystemException e ) {\n      throw new KettleException( BaseMessages\n          .getString( AvroInputMeta.PKG, \"AvroInputDialog.Error.KettleFileException\" ), e );\n    } catch ( IOException e ) {\n      throw new KettleException( BaseMessages\n          .getString( AvroInputMeta.PKG, \"AvroInputDialog.Error.KettleFileException\" ), e );\n    } finally {\n      if ( in != null ) {\n        try {\n          in.close();\n        } catch ( IOException e ) {\n          // ignore errors on close\n        }\n      }\n    }\n\n    return s;\n  }\n\n  /**\n   * Set the output row format\n   *\n   * @param rmi\n   *          the output row format\n   */\n  
public void setOutputRowMeta( RowMetaInterface rmi ) {\n    m_outputRowMeta = rmi;\n  }\n\n  /**\n   * Performs initialization based on decoding from an incoming field.\n   *\n   * @param fieldNameToDecode\n   *          name of the field to decode from\n   * @param readerSchemaFile\n   *          the reader schema file (must be supplied)\n   * @param fields\n   *          the user-supplied paths to extract\n   * @param jsonEncoded\n   *          true if the data is JSON encoded\n   * @param newFieldOffset\n   *          offset in the outgoing row format for extracted fields from any incoming kettle fields\n   * @param schemaInField\n   *          true if the schema to use on a row-by-row basis is contained in an incoming field value\n   * @param schemaFieldName\n   *          the name of the incoming field containing the schema\n   * @param schemaFieldIsPath\n   *          true if the incoming schema field values are actually paths to schemas rather than the schema itself\n   * @param cacheSchemas\n   *          true if schemas read from field values are to be cached in memory\n   * @param ignoreMissing\n   *          true if null is to be output for fields not found in the schema\n   * @param log\n   *          for logging\n   * @throws KettleException\n   *           if a problem occurs\n   */\n  public void initializeFromFieldDecoding( Bowl bowl, String fieldNameToDecode, String readerSchemaFile,\n      List<AvroInputMeta.AvroField> fields, boolean jsonEncoded, int newFieldOffset, boolean schemaInField,\n      String schemaFieldName, boolean schemaFieldIsPath, boolean cacheSchemas, boolean ignoreMissing,\n      LogChannelInterface log ) throws KettleException {\n\n    m_log = log;\n    m_decodingFromField = true;\n    m_jsonEncoded = jsonEncoded;\n    m_newFieldOffset = newFieldOffset;\n    m_inStream = null;\n    m_normalFields = new ArrayList<AvroInputMeta.AvroField>();\n    m_cacheSchemas = cacheSchemas;\n    m_schemaInField = schemaInField;\n    m_dontComplainAboutMissingFields = 
ignoreMissing;\n\n    for ( AvroInputMeta.AvroField f : fields ) {\n      m_normalFields.add( f );\n    }\n    m_fieldToDecodeIndex = m_outputRowMeta.indexOfValue( fieldNameToDecode );\n\n    if ( schemaInField ) {\n      m_schemaFieldIndex = m_outputRowMeta.indexOfValue( schemaFieldName );\n      if ( m_schemaFieldIndex < 0 ) {\n        throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n            \"AvroInput.Error.UnableToFindIncommingSchemaField\" ) );\n      }\n      m_schemaFieldIsPath = schemaFieldIsPath;\n    }\n\n    if ( Const.isEmpty( readerSchemaFile ) ) {\n      if ( !schemaInField ) {\n        throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.NoSchemaSupplied\" ) );\n      } else {\n        if ( m_log.isBasic() ) {\n          m_log.logBasic( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Message.NoDefaultSchemaWarning\" ) );\n        }\n      }\n    }\n\n    if ( !Const.isEmpty( readerSchemaFile ) ) {\n      m_schemaToUse = loadSchema( bowl, readerSchemaFile );\n      m_defaultSchema = m_schemaToUse;\n      m_datumReader = new GenericDatumReader( m_schemaToUse );\n      m_defaultDatumReader = m_datumReader;\n    }\n\n    m_factory = new DecoderFactory();\n\n    init();\n  }\n\n  /**\n   * Performs initialization based on the Avro file and schema provided.\n   * <p>\n   *\n   * There are four possibilities:\n   * <p>\n   * <ol>\n   * <li>No schema file provided and no fields defined - can only process a container file, under the assumption that\n   * all leaf primitives are to be output</li>\n   * <li>No schema file provided but fields/paths defined - can only process a container file, and assume that supplied\n   * paths match schema</li>\n   * <li>Schema file provided, no fields defined - output all leaf primitives from schema and have to determine if input\n   * is a container file or just serialized data</li>\n   * <li>Schema file provided and fields defined - output leaf primitives 
associated with paths. Have to determine if\n   * file is container or not. If container, assume supplied schema overrides encapsulated schema</li>\n   * </ol>\n   *\n   * @param avroFile\n   *          the Avro file\n   * @param readerSchemaFile\n   *          the reader schema\n   * @param fields\n   *          the user-supplied paths to extract\n   * @param jsonEncoded\n   *          true if the data is JSON encoded\n   * @param newFieldOffset\n   *          offset in the outgoing row format for extracted fields from any incoming kettle fields\n   * @param ignoreMissing\n   *          if true output null for fields that don't appear in the schema\n   * @param log\n   *          the logger to use\n   * @throws KettleException\n   *           if a problem occurs\n   */\n  public void establishFileType( Bowl bowl, FileObject avroFile, String readerSchemaFile,\n      List<AvroInputMeta.AvroField> fields, boolean jsonEncoded, int newFieldOffset, boolean ignoreMissing,\n      LogChannelInterface log ) throws KettleException {\n\n    m_log = log;\n    m_newFieldOffset = newFieldOffset;\n    m_normalFields = new ArrayList<AvroInputMeta.AvroField>();\n    for ( AvroInputMeta.AvroField f : fields ) {\n      m_normalFields.add( f );\n    }\n    m_inStream = null;\n    m_jsonEncoded = jsonEncoded;\n    m_dontComplainAboutMissingFields = ignoreMissing;\n\n    try {\n      m_inStream = KettleVFS.getInputStream( avroFile );\n    } catch ( FileSystemException e1 ) {\n      throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.UnableToOpenAvroFile\" ),\n          e1 );\n    }\n\n    // load and handle reader schema....\n    if ( !Const.isEmpty( readerSchemaFile ) ) {\n      m_schemaToUse = loadSchema( bowl, readerSchemaFile );\n      m_defaultSchema = m_schemaToUse;\n    } else if ( jsonEncoded ) {\n      throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.NoSchemaProvided\" ) );\n    }\n\n    m_datumReader = 
new GenericDatumReader();\n    boolean nonContainer = false;\n\n    if ( !jsonEncoded ) {\n      try {\n        m_containerReader = new DataFileStream( m_inStream, m_datumReader );\n        m_writerSchema = m_containerReader.getSchema();\n\n        // resolve reader/writer schemas\n        if ( !Const.isEmpty( readerSchemaFile ) ) {\n          // map any aliases for schema migration\n          m_schemaToUse = Schema.applyAliases( m_writerSchema, m_schemaToUse );\n        } else {\n          m_schemaToUse = m_writerSchema;\n        }\n      } catch ( IOException e ) {\n        // doesn't look like a container file....\n        nonContainer = true;\n        try {\n          try {\n            m_inStream.close();\n          } catch ( IOException e1 ) {\n            if ( log.isDebug() ) {\n              log.logError( Const.getStackTracker( e1 ) );\n            }\n          }\n          m_inStream = KettleVFS.getInputStream( avroFile );\n        } catch ( FileSystemException e1 ) {\n          throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n              \"AvroInputDialog.Error.KettleFileException\" ), e1 );\n        }\n\n        m_containerReader = null;\n      }\n    }\n\n    if ( nonContainer || jsonEncoded ) {\n      if ( Const.isEmpty( readerSchemaFile ) ) {\n        throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.NoSchema\" ) );\n      }\n\n      m_factory = new DecoderFactory();\n      if ( jsonEncoded ) {\n        try {\n          m_decoder = m_factory.jsonDecoder( m_schemaToUse, m_inStream );\n        } catch ( IOException e ) {\n          throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.JsonDecoderError\" ), e );\n        }\n      } else {\n        m_decoder = m_factory.binaryDecoder( m_inStream, null );\n      }\n      m_datumReader = new GenericDatumReader( m_schemaToUse );\n      m_defaultDatumReader = m_datumReader;\n    }\n\n    init();\n  }\n\n  protected 
void initTopLevelStructure( Schema schema, boolean setDefault ) throws KettleException {\n    // what top-level structure are we using?\n    if ( schema.getType() == Schema.Type.RECORD ) {\n      m_topLevelRecord = new Record( schema );\n      if ( setDefault ) {\n        m_defaultTopLevelObject = m_topLevelRecord;\n      }\n    } else if ( schema.getType() == Schema.Type.UNION ) {\n      // ASSUMPTION: if the top level structure is a union then each\n      // object we will read will be a record. We'll assume that any\n      // non-record types in the top-level union are named types that\n      // are referenced in the record types. We'll scan the union for the\n      // first record type to construct\n      // our initial top-level object. When reading, the read method will give\n      // us a new object (with appropriate schema) if this top level object's\n      // schema does not match the schema of the record being currently read\n      Schema firstUnion = null;\n      for ( Schema uS : schema.getTypes() ) {\n        if ( uS.getType() == Schema.Type.RECORD ) {\n          firstUnion = uS;\n          break;\n        }\n      }\n\n      if ( firstUnion == null ) {\n        // no record type in the top-level union - we have no top-level\n        // object to read into\n        throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n            \"AvroInput.Error.UnsupportedTopLevelStructure\" ) );\n      }\n\n      m_topLevelRecord = new Record( firstUnion );\n      if ( setDefault ) {\n        m_defaultTopLevelObject = m_topLevelRecord;\n      }\n    } else if ( schema.getType() == Schema.Type.ARRAY ) {\n      m_topLevelArray = new GenericData.Array( 1, schema ); // capacity,\n                                                            // schema\n      if ( setDefault ) {\n        m_defaultTopLevelObject = m_topLevelArray;\n      }\n    } else if ( schema.getType() == Schema.Type.MAP ) {\n      m_topLevelMap = new HashMap<Utf8, Object>();\n      if ( setDefault ) {\n        m_defaultTopLevelObject = m_topLevelMap;\n      }\n    } else {\n      throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n          \"AvroInput.Error.UnsupportedTopLevelStructure\" ) );\n    }\n  }\n\n  protected void setTopLevelStructure( Object 
topLevel ) {\n    if ( topLevel instanceof Record ) {\n      m_topLevelRecord = (Record) topLevel;\n      m_topLevelArray = null;\n      m_topLevelMap = null;\n    } else if ( topLevel instanceof GenericData.Array ) {\n      m_topLevelArray = (GenericData.Array<?>) topLevel;\n      m_topLevelRecord = null;\n      m_topLevelMap = null;\n    } else {\n      m_topLevelMap = (HashMap<Utf8, Object>) topLevel;\n      m_topLevelRecord = null;\n      m_topLevelArray = null;\n    }\n  }\n\n  protected void setSchemaToUse( Bowl bowl, String schemaKey, boolean useCache, VariableSpace space )\n    throws KettleException {\n\n    if ( Const.isEmpty( schemaKey ) ) {\n      // switch to default\n      if ( m_defaultDatumReader == null ) {\n        // no key, no default schema - can't continue with this row\n        throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n            \"AvroInput.Error.IncommingSchemaIsMissingAndNoDefault\" ) );\n      }\n      if ( m_log.isDetailed() ) {\n        m_log.logDetailed( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Message.IncommingSchemaIsMissing\" ) );\n      }\n      m_datumReader = m_defaultDatumReader;\n      m_schemaToUse = m_datumReader.getSchema();\n      setTopLevelStructure( m_defaultTopLevelObject );\n      return;\n    } else {\n      schemaKey = schemaKey.trim();\n      schemaKey = space.environmentSubstitute( schemaKey );\n    }\n\n    Object[] cached = null;\n    if ( useCache ) {\n      cached = m_schemaCache.get( schemaKey );\n      if ( m_log.isDetailed() && cached != null ) {\n        m_log.logDetailed(\n            BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Message.UsingCachedSchema\", schemaKey ) );\n      }\n    }\n\n    if ( !useCache || cached == null ) {\n      Schema toUse = null;\n      if ( m_schemaFieldIsPath ) {\n        // load the schema from disk\n        if ( m_log.isDetailed() ) {\n          m_log.logDetailed(\n              BaseMessages.getString( AvroInputMeta.PKG, 
\"AvroInput.Message.LoadingSchema\", schemaKey ) );\n        }\n        try {\n          toUse = loadSchema( bowl, schemaKey );\n        } catch ( KettleException ex ) {\n          // fall back to default (if possible)\n          if ( m_defaultDatumReader != null ) {\n            if ( m_log.isBasic() ) {\n              m_log.logBasic( BaseMessages.getString( AvroInputMeta.PKG,\n                  \"AvroInput.Message.FailedToLoadSchmeaUsingDefault\", schemaKey ) );\n            }\n            m_datumReader = m_defaultDatumReader;\n            m_schemaToUse = m_datumReader.getSchema();\n            setTopLevelStructure( m_defaultTopLevelObject );\n            return;\n          } else {\n            throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n                \"AvroInput.Error.CantLoadIncommingSchemaAndNoDefault\", schemaKey ) );\n          }\n        }\n      } else {\n        // use the supplied schema\n        if ( m_log.isDetailed() ) {\n          m_log.logDetailed(\n              BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Message.ParsingSchema\", schemaKey ) );\n        }\n        Schema.Parser p = new Schema.Parser();\n        toUse = p.parse( schemaKey );\n      }\n      m_schemaToUse = toUse;\n      m_datumReader = new GenericDatumReader( toUse );\n      initTopLevelStructure( toUse, false );\n      if ( useCache ) {\n        Object[] schemaInfo = new Object[2];\n        schemaInfo[0] = m_datumReader;\n        schemaInfo[1] =\n            ( m_topLevelArray != null ) ? m_topLevelArray : ( ( m_topLevelRecord != null ) ? 
m_topLevelRecord\n                : m_topLevelMap );\n        if ( m_log.isDetailed() ) {\n          m_log.logDetailed( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Message.StoringSchemaInCache\" ) );\n        }\n        m_schemaCache.put( schemaKey, schemaInfo );\n      }\n    } else if ( useCache ) {\n      // got one from the cache\n      m_datumReader = (GenericDatumReader) cached[0];\n      m_schemaToUse = m_datumReader.getSchema();\n      setTopLevelStructure( cached[1] );\n    }\n  }\n\n  protected void init() throws KettleException {\n    if ( m_schemaToUse != null ) {\n      initTopLevelStructure( m_schemaToUse, true );\n      // any fields specified by the user, or do we need to read all leaves\n      // from the schema?\n      if ( m_normalFields == null || m_normalFields.size() == 0 ) {\n        m_normalFields = getLeafFields( m_schemaToUse );\n      }\n    }\n\n    if ( m_normalFields == null || m_normalFields.size() == 0 ) {\n      throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.NoFieldPathsDefined\" ) );\n    }\n\n    m_expansionHandler = checkFieldPaths( m_normalFields, m_outputRowMeta );\n\n    for ( AvroInputMeta.AvroField f : m_normalFields ) {\n      int outputIndex = m_outputRowMeta.indexOfValue( f.m_fieldName );\n      f.init( outputIndex );\n    }\n\n    if ( m_expansionHandler != null ) {\n      m_expansionHandler.init();\n    }\n  }\n\n  /**\n   * Examines the user-specified paths for the presence of a map/array expansion. If such an expansion is detected it\n   * checks that it is valid and, if so, creates an expansion handler for processing it.\n   *\n   * @param normalFields\n   *          the original user-specified paths. 
This is modified to contain only non-expansion paths.\n   * @param outputRowMeta\n   *          the output row format\n   * @return an AvroArrayExpansion object to handle expansions or null if no expansions are present in the user-supplied\n   *         path definitions.\n   * @throws KettleException\n   *           if a problem occurs\n   */\n  protected static AvroArrayExpansion checkFieldPaths( List<AvroInputMeta.AvroField> normalFields,\n      RowMetaInterface outputRowMeta ) throws KettleException {\n    // here we check whether there are any full map/array expansions\n    // specified in the paths (via [*]). If so, we want to make sure\n    // that only one is present across all paths. E.g. we can handle\n    // multiple fields like $.person[*].first, $.person[*].last etc.\n    // but not $.person[*].first, $.person[*].address[*].street.\n\n    String expansion = null;\n    List<AvroInputMeta.AvroField> normalList = new ArrayList<AvroInputMeta.AvroField>();\n    List<AvroInputMeta.AvroField> expansionList = new ArrayList<AvroInputMeta.AvroField>();\n    for ( AvroInputMeta.AvroField f : normalFields ) {\n      String path = f.m_fieldPath;\n\n      if ( path != null && path.lastIndexOf( \"[*]\" ) >= 0 ) {\n\n        if ( path.indexOf( \"[*]\" ) != path.lastIndexOf( \"[*]\" ) ) {\n          throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n              \"AvroInput.Error.PathContainsMultipleExpansions\", path ) );\n        }\n        String pathPart = path.substring( 0, path.lastIndexOf( \"[*]\" ) + 3 );\n\n        if ( expansion == null ) {\n          expansion = pathPart;\n        } else {\n          if ( !expansion.equals( pathPart ) ) {\n            throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n                \"AvroInput.Error.MutipleDifferentExpansions\" ) );\n          }\n        }\n\n        expansionList.add( f );\n      } else {\n        normalList.add( f );\n      }\n    }\n\n    normalFields.clear();\n   
 for ( AvroInputMeta.AvroField f : normalList ) {\n      normalFields.add( f );\n    }\n\n    if ( expansionList.size() > 0 ) {\n\n      List<AvroInputMeta.AvroField> subFields = new ArrayList<AvroInputMeta.AvroField>();\n\n      for ( AvroInputMeta.AvroField ef : expansionList ) {\n        AvroInputMeta.AvroField subField = new AvroInputMeta.AvroField();\n        subField.m_fieldName = ef.m_fieldName;\n        String path = ef.m_fieldPath;\n        if ( path.charAt( path.length() - 2 ) == '*' ) {\n          path = \"dummy\"; // pulling a primitive out of the map/array (path\n                          // doesn't matter)\n        } else {\n          path = path.substring( path.lastIndexOf( \"[*]\" ) + 3, path.length() );\n          path = \"$\" + path;\n        }\n\n        subField.m_fieldPath = path;\n        subField.m_indexedVals = ef.m_indexedVals;\n        subField.m_kettleType = ef.m_kettleType;\n\n        subFields.add( subField );\n      }\n\n      AvroArrayExpansion exp = new AvroArrayExpansion( subFields );\n      exp.m_expansionPath = expansion;\n      exp.m_outputRowMeta = outputRowMeta;\n\n      return exp;\n    }\n\n    return null;\n  }\n\n  private Object[][] setKettleFields( Object[] outputRowData, VariableSpace space ) throws KettleException {\n    Object[][] result = null;\n\n    // expand map/array in path structure to multiple rows (if necessary)\n    if ( m_expansionHandler != null ) {\n      m_expansionHandler.reset( space );\n\n      if ( m_schemaToUse.getType() == Schema.Type.RECORD || m_schemaToUse.getType() == Schema.Type.UNION ) {\n        // call getSchema() on the top level record here in case it has been\n        // read as one of the elements from a top-level union\n        result =\n            m_expansionHandler.convertToKettleValues( m_topLevelRecord, m_topLevelRecord.getSchema(), m_defaultSchema, space,\n                m_dontComplainAboutMissingFields );\n      } else if ( m_schemaToUse.getType() == Schema.Type.ARRAY ) {\n       
 result =\n            m_expansionHandler.convertToKettleValues( m_topLevelArray, m_schemaToUse, m_defaultSchema, space,\n                m_dontComplainAboutMissingFields );\n      } else {\n        result =\n            m_expansionHandler.convertToKettleValues( m_topLevelMap, m_schemaToUse, m_defaultSchema, space,\n                m_dontComplainAboutMissingFields );\n      }\n    } else {\n      result = new Object[1][];\n    }\n\n    // if there are no incoming rows (i.e. we're decoding from a file rather\n    // than a field)\n    if ( outputRowData == null ) {\n      outputRowData = RowDataUtil.allocateRowData( m_outputRowMeta.size() );\n    } else {\n      // make sure we allocate enough space for the new fields\n      outputRowData = RowDataUtil.resizeArray( outputRowData, m_outputRowMeta.size() );\n    }\n\n    // get the normal (non-expansion-related) fields\n    Object value = null;\n    for ( AvroInputMeta.AvroField f : m_normalFields ) {\n      f.reset( space );\n\n      if ( m_schemaToUse.getType() == Schema.Type.RECORD || m_schemaToUse.getType() == Schema.Type.UNION ) {\n        // call getSchema() on the top level record here in case it has been\n        // read as one of the elements from a top-level union\n        value =\n            f.convertToKettleValue( m_topLevelRecord, m_topLevelRecord.getSchema(), m_defaultSchema, m_dontComplainAboutMissingFields );\n      } else if ( m_schemaToUse.getType() == Schema.Type.ARRAY ) {\n        value = f.convertToKettleValue( m_topLevelArray, m_schemaToUse, m_defaultSchema, m_dontComplainAboutMissingFields );\n      } else {\n        value = f.convertToKettleValue( m_topLevelMap, m_schemaToUse, m_defaultSchema, m_dontComplainAboutMissingFields );\n      }\n\n      outputRowData[f.m_outputIndex] = value;\n    }\n\n    // copy normal fields and existing incoming over to each expansion row (if\n    // necessary)\n    if ( m_expansionHandler == null ) {\n      result[0] = outputRowData;\n    } else if ( 
 m_normalFields.size() > 0 || m_newFieldOffset > 0 ) {\n      for ( int i = 0; i < result.length; i++ ) {\n        Object[] row = result[i];\n\n        // existing incoming fields\n        for ( int j = 0; j < m_newFieldOffset; j++ ) {\n          row[j] = outputRowData[j];\n        }\n\n        for ( AvroInputMeta.AvroField f : m_normalFields ) {\n          row[f.m_outputIndex] = outputRowData[f.m_outputIndex];\n        }\n      }\n    }\n\n    return result;\n  }\n\n  /**\n   * Converts an incoming row to the outgoing format. Extracts fields from either an Avro object in the incoming row or\n   * from the next structure in the container or non-container Avro file. May return more than one row if a map/array is\n   * being expanded.\n   *\n   * @param bowl\n   *          the bowl to use for resolving files\n   * @param incoming\n   *          incoming kettle row - may be null if decoding from a file rather than a field\n   * @param space\n   *          the variables to use\n   * @return one or more rows in the outgoing format\n   * @throws KettleException\n   *           if a problem occurs\n   */\n  public Object[][] avroObjectToKettle( Bowl bowl, Object[] incoming, VariableSpace space ) throws KettleException {\n\n    if ( m_containerReader != null ) {\n      // container file\n      try {\n        if ( m_containerReader.hasNext() ) {\n          if ( m_topLevelRecord != null ) {\n            // special case for top-level record. 
In case we actually\n            // have a top level union, reassign the record so that\n            // we have the correctly populated object in the case\n            // where our last record instance can't be reused (i.e.\n            // the next record read is a different one from the union\n            // than the last one).\n            m_topLevelRecord = (Record) m_containerReader.next( m_topLevelRecord );\n          } else if ( m_topLevelArray != null ) {\n            m_containerReader.next( m_topLevelArray );\n          } else {\n            m_containerReader.next( m_topLevelMap );\n          }\n\n          return setKettleFields( incoming, space );\n        } else {\n          return null; // no more input\n        }\n      } catch ( IOException e ) {\n        throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.ObjectReadError\" ) );\n      }\n    } else {\n      // non-container file\n      try {\n        /*\n         * if (m_decoder.isEnd()) { return null; }\n         */\n\n        // reading from an incoming field\n        if ( m_decodingFromField ) {\n          if ( incoming == null || incoming.length == 0 ) {\n            // must be done - just return null\n            return null;\n          }\n          ValueMetaInterface fieldMeta = m_outputRowMeta.getValueMeta( m_fieldToDecodeIndex );\n\n          // incoming avro field null? 
- all decoded fields are null\n          if ( fieldMeta.isNull( incoming[m_fieldToDecodeIndex] ) ) {\n            Object[][] result = new Object[1][];\n            // just resize the existing incoming array (if necessary) and return\n            // the incoming values\n            result[0] = RowDataUtil.resizeArray( incoming, m_outputRowMeta.size() );\n            return result;\n          }\n\n          // if necessary, set the current datum reader and top level structure\n          // for the incoming schema\n          if ( m_schemaInField ) {\n            ValueMetaInterface schemaMeta = m_outputRowMeta.getValueMeta( m_schemaFieldIndex );\n            String schemaToUse = schemaMeta.getString( incoming[m_schemaFieldIndex] );\n            setSchemaToUse( bowl, schemaToUse, m_cacheSchemas, space );\n          }\n\n          if ( m_jsonEncoded ) {\n            try {\n              String fieldValue = fieldMeta.getString( incoming[m_fieldToDecodeIndex] );\n              m_decoder = m_factory.jsonDecoder( m_schemaToUse, fieldValue );\n            } catch ( IOException e ) {\n              throw new KettleException(\n                  BaseMessages.getString( AvroInputMeta.PKG, \"AvroInput.Error.JsonDecoderError\" ) );\n            }\n          } else {\n            byte[] fieldValue = fieldMeta.getBinary( incoming[m_fieldToDecodeIndex] );\n            m_decoder = m_factory.binaryDecoder( fieldValue, null );\n          }\n        }\n\n        if ( m_topLevelRecord != null ) {\n          // special case for top-level record. 
In case we actually\n          // have a top level union, reassign the record so that\n          // we have the correctly populated object in the case\n          // where our last record instance can't be reused (i.e.\n          // the next record read is a different one from the union\n          // than the last one).\n          m_topLevelRecord = (Record) m_datumReader.read( m_topLevelRecord, m_decoder );\n        } else if ( m_topLevelArray != null ) {\n          m_datumReader.read( m_topLevelArray, m_decoder );\n        } else {\n          m_datumReader.read( m_topLevelMap, m_decoder );\n        }\n\n        return setKettleFields( incoming, space );\n      } catch ( IOException ex ) {\n        // some IO problem or no more input\n        return null;\n      }\n    }\n  }\n\n  public void close() throws IOException {\n    if ( m_containerReader != null ) {\n      m_containerReader.close();\n    }\n    if ( m_inStream != null ) {\n      m_inStream.close();\n    }\n  }\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/trans/steps/avroinput/AvroInputDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.trans.steps.avroinput;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport org.apache.avro.Schema;\nimport org.apache.commons.vfs2.FileObject;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.custom.CCombo;\nimport org.eclipse.swt.custom.CTabFolder;\nimport org.eclipse.swt.custom.CTabItem;\nimport org.eclipse.swt.events.ModifyEvent;\nimport org.eclipse.swt.events.ModifyListener;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.events.ShellAdapter;\nimport org.eclipse.swt.events.ShellEvent;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Event;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.TableItem;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.Props;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.hadoop.HadoopSpoonPlugin;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.Trans;\nimport 
org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.TransPreviewFactory;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDialogInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.steps.textfileinput.TextFileInputMeta;\nimport org.pentaho.di.ui.core.dialog.EnterNumberDialog;\nimport org.pentaho.di.ui.core.dialog.EnterTextDialog;\nimport org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.di.ui.core.dialog.PreviewRowsDialog;\nimport org.pentaho.di.ui.core.widget.ColumnInfo;\nimport org.pentaho.di.ui.core.widget.TableView;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.trans.dialog.TransPreviewProgressDialog;\nimport org.pentaho.di.ui.trans.step.BaseStepDialog;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\n/**\n * Dialog for the Avro input step.\n * \n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n * @version $Revision$\n */\npublic class AvroInputDialog extends BaseStepDialog implements StepDialogInterface {\n\n  private static final Class<?> PKG = AvroInputMeta.class;\n\n  private final AvroInputMeta m_currentMeta;\n  private final AvroInputMeta m_originalMeta;\n\n  private CTabFolder m_wTabFolder;\n  private CTabItem m_wSourceTab;\n  private CTabItem m_wSchemaTab;\n  private CTabItem m_wFieldsTab;\n  private CTabItem m_wVarsTab;\n\n  /** various UI bits and pieces for the dialog */\n  private Label m_stepnameLabel;\n  private Text m_stepnameText;\n\n  private Button m_sourceInFileBut;\n  private Button m_sourceInFieldBut;\n\n  private Label m_defaultSchemaL;\n\n  private Button m_schemaInFieldBut;\n  private Label m_schemaInFieldIsPathL;\n  private Button m_schemaInFieldIsPathBut;\n  private Label m_cacheSchemasL;\n  private Button m_cacheSchemasBut;\n  private Label m_schemaFieldNameL;\n  private CCombo m_schemaFieldNameText;\n\n  private TextVar m_avroFilenameText;\n  private Button 
m_avroFileBrowse;\n  private TextVar m_schemaFilenameText;\n  private Button m_schemaFileBrowse;\n\n  private CCombo m_avroFieldNameText;\n\n  private Button m_jsonEncodedBut;\n\n  private Button m_missingFieldsBut;\n  private Button m_getFields;\n  private TableView m_fieldsView;\n\n  private Button m_getLookupFieldsBut;\n  private TableView m_lookupView;\n\n  public AvroInputDialog( Shell parent, Object in, TransMeta tr, String name ) {\n\n    super( parent, (BaseStepMeta) in, tr, name );\n    m_currentMeta = (AvroInputMeta) in;\n    m_originalMeta = (AvroInputMeta) m_currentMeta.clone();\n  }\n\n  public String open() {\n    Shell parent = getParent();\n    Display display = parent.getDisplay();\n\n    shell = new Shell( parent, SWT.DIALOG_TRIM | SWT.RESIZE | SWT.MIN | SWT.MAX );\n\n    props.setLook( shell );\n    setShellImage( shell, m_currentMeta );\n\n    // used to listen to a text field (m_wStepname)\n    ModifyListener lsMod = new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_currentMeta.setChanged();\n      }\n    };\n\n    changed = m_currentMeta.hasChanged();\n\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = Const.FORM_MARGIN;\n    formLayout.marginHeight = Const.FORM_MARGIN;\n\n    shell.setLayout( formLayout );\n    shell.setText( BaseMessages.getString( PKG, \"AvroInputDialog.Shell.Title\" ) );\n\n    int middle = props.getMiddlePct();\n    int margin = Const.MARGIN;\n\n    // Stepname line\n    m_stepnameLabel = new Label( shell, SWT.RIGHT );\n    m_stepnameLabel.setText( BaseMessages.getString( PKG, \"AvroInputDialog.StepName.Label\" ) );\n    props.setLook( m_stepnameLabel );\n\n    FormData fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( middle, -margin );\n    fd.top = new FormAttachment( 0, margin );\n    m_stepnameLabel.setLayoutData( fd );\n    m_stepnameText = new Text( shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    
m_stepnameText.setText( stepname );\n    props.setLook( m_stepnameText );\n    m_stepnameText.addModifyListener( lsMod );\n\n    // format the text field\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( 0, margin );\n    fd.right = new FormAttachment( 100, 0 );\n    m_stepnameText.setLayoutData( fd );\n\n    m_wTabFolder = new CTabFolder( shell, SWT.BORDER );\n    props.setLook( m_wTabFolder, Props.WIDGET_STYLE_TAB );\n    m_wTabFolder.setSimple( false );\n\n    // start of the source tab\n    m_wSourceTab = new CTabItem( m_wTabFolder, SWT.NONE );\n    m_wSourceTab.setText( BaseMessages.getString( PKG, \"AvroInputDialog.SourceTab.Title\" ) );\n\n    Composite wSourceComp = new Composite( m_wTabFolder, SWT.NONE );\n    props.setLook( wSourceComp );\n\n    FormLayout sourceLayout = new FormLayout();\n    sourceLayout.marginWidth = 3;\n    sourceLayout.marginHeight = 3;\n    wSourceComp.setLayout( sourceLayout );\n\n    // source in file line\n    Label fileSourceL = new Label( wSourceComp, SWT.RIGHT );\n    props.setLook( fileSourceL );\n    fileSourceL.setText( BaseMessages.getString( PKG, \"AvroInputDialog.FileSource.Label\" ) );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    fileSourceL.setLayoutData( fd );\n\n    m_sourceInFileBut = new Button( wSourceComp, SWT.CHECK );\n    props.setLook( m_sourceInFileBut );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( 0, margin );\n    m_sourceInFileBut.setLayoutData( fd );\n\n    m_sourceInFileBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        m_currentMeta.setChanged();\n        m_sourceInFieldBut.setSelection( !m_sourceInFileBut.getSelection() );\n     
   checkWidgets();\n      }\n    } );\n\n    // source in field line\n    Label fieldSourceL = new Label( wSourceComp, SWT.RIGHT );\n    props.setLook( fieldSourceL );\n    fieldSourceL.setText( BaseMessages.getString( PKG, \"AvroInputDialog.FieldSource.Label\" ) );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_sourceInFileBut, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    fieldSourceL.setLayoutData( fd );\n\n    m_sourceInFieldBut = new Button( wSourceComp, SWT.CHECK );\n    props.setLook( m_sourceInFieldBut );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_sourceInFileBut, margin );\n    m_sourceInFieldBut.setLayoutData( fd );\n\n    m_sourceInFieldBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        m_currentMeta.setChanged();\n        m_sourceInFileBut.setSelection( !m_sourceInFieldBut.getSelection() );\n        checkWidgets();\n      }\n    } );\n\n    // filename line\n    Label filenameL = new Label( wSourceComp, SWT.RIGHT );\n    props.setLook( filenameL );\n    filenameL.setText( BaseMessages.getString( PKG, \"AvroInputDialog.Filename.Label\" ) );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_sourceInFieldBut, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    filenameL.setLayoutData( fd );\n\n    m_avroFileBrowse = new Button( wSourceComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( m_avroFileBrowse );\n    m_avroFileBrowse.setText( BaseMessages.getString( PKG, \"AvroInputDialog.Button.FileBrowse\" ) );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( m_sourceInFieldBut, 0 );\n    m_avroFileBrowse.setLayoutData( fd );\n\n    // add listener to pop up VFS 
browse dialog\n    m_avroFileBrowse.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        try {\n          String[] fileFilters = new String[] { \"*\" };\n          String[] fileFilterNames =\n              new String[] { BaseMessages.getString( TextFileInputMeta.class, \"System.FileType.AllFiles\" ) };\n\n          // get current file\n          FileObject rootFile = null;\n          FileObject initialFile = null;\n          FileObject defaultInitialFile = null;\n\n          if ( m_avroFilenameText.getText() != null ) {\n            String fname = transMeta.environmentSubstitute( m_avroFilenameText.getText() );\n\n            if ( !Const.isEmpty( fname ) ) {\n              initialFile = KettleVFS.getInstance( transMeta.getBowl() ).getFileObject( fname );\n              rootFile = initialFile.getFileSystem().getRoot();\n            } else {\n              defaultInitialFile = KettleVFS.getInstance( transMeta.getBowl() )\n                .getFileObject( Spoon.getInstance().getLastFileOpened() );\n            }\n          } else {\n            defaultInitialFile = KettleVFS.getInstance( transMeta.getBowl() ).getFileObject( \"file:///c:/\" );\n          }\n\n          if ( rootFile == null ) {\n            rootFile = defaultInitialFile.getFileSystem().getRoot();\n          }\n\n          VfsFileChooserDialog fileChooserDialog = Spoon.getInstance().getVfsFileChooserDialog( rootFile, initialFile );\n          fileChooserDialog.defaultInitialFile = defaultInitialFile;\n          FileObject selectedFile =\n              fileChooserDialog.open( shell, null, HadoopSpoonPlugin.HDFS_SCHEME, true, null, fileFilters,\n                  fileFilterNames, VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE );\n\n          if ( selectedFile != null ) {\n            m_avroFilenameText.setText( selectedFile.getURL().toString() );\n          }\n        } catch ( Exception ex ) {\n          logError( 
BaseMessages.getString( PKG, \"AvroInputDialog.Error.KettleFileException\" ), ex );\n          new ErrorDialog( shell, stepname, BaseMessages.getString( PKG, \"AvroInputDialog.Error.KettleFileException\" ),\n              ex );\n        }\n      }\n    } );\n\n    m_avroFilenameText = new TextVar( transMeta, wSourceComp, SWT.SIMPLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( m_avroFilenameText );\n    m_avroFilenameText.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_currentMeta.setChanged();\n        m_avroFilenameText.setToolTipText( transMeta.environmentSubstitute( m_avroFilenameText.getText() ) );\n      }\n    } );\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_sourceInFieldBut, margin );\n    fd.right = new FormAttachment( m_avroFileBrowse, -margin );\n    m_avroFilenameText.setLayoutData( fd );\n\n    Label avroFieldNameL = new Label( wSourceComp, SWT.RIGHT );\n    props.setLook( avroFieldNameL );\n    avroFieldNameL.setText( BaseMessages.getString( PKG, \"AvroInputDialog.AvroField.Label\" ) );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_avroFilenameText, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    avroFieldNameL.setLayoutData( fd );\n\n    m_avroFieldNameText = new CCombo( wSourceComp, SWT.BORDER );\n    props.setLook( m_avroFieldNameText );\n    m_avroFieldNameText.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_currentMeta.setChanged();\n        m_avroFieldNameText.setToolTipText( transMeta.environmentSubstitute( m_avroFieldNameText.getText() ) );\n      }\n    } );\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_avroFilenameText, margin );\n    fd.right = new FormAttachment( 100, 0 );\n    m_avroFieldNameText.setLayoutData( fd );\n\n    
// json encoded check box\n    Label jsonL = new Label( wSourceComp, SWT.RIGHT );\n    props.setLook( jsonL );\n    jsonL.setText( BaseMessages.getString( PKG, \"AvroInputDialog.JsonEncoded.Label\" ) );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_avroFieldNameText, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    jsonL.setLayoutData( fd );\n    jsonL.setToolTipText( BaseMessages.getString( PKG, \"AvroInputDialog.JsonEncoded.TipText\" ) );\n\n    m_jsonEncodedBut = new Button( wSourceComp, SWT.CHECK );\n    props.setLook( m_jsonEncodedBut );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_avroFieldNameText, margin );\n    m_jsonEncodedBut.setLayoutData( fd );\n    m_jsonEncodedBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        m_currentMeta.setChanged();\n      }\n    } );\n\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    fd.bottom = new FormAttachment( 100, 0 );\n    wSourceComp.setLayoutData( fd );\n    wSourceComp.layout();\n    m_wSourceTab.setControl( wSourceComp );\n\n    // -- start of the schema tab\n    m_wSchemaTab = new CTabItem( m_wTabFolder, SWT.NONE );\n    m_wSchemaTab.setText( BaseMessages.getString( PKG, \"AvroInputDialog.SchemaTab.Title\" ) );\n    Composite wSchemaComp = new Composite( m_wTabFolder, SWT.NONE );\n    props.setLook( wSchemaComp );\n\n    FormLayout schemaLayout = new FormLayout();\n    schemaLayout.marginWidth = 3;\n    schemaLayout.marginHeight = 3;\n    wSchemaComp.setLayout( schemaLayout );\n\n    // schema filename line\n    m_defaultSchemaL = new Label( wSchemaComp, SWT.RIGHT );\n    props.setLook( m_defaultSchemaL );\n    m_defaultSchemaL.setText( 
BaseMessages.getString( PKG, \"AvroInputDialog.SchemaFilename.Label\" ) );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    m_defaultSchemaL.setLayoutData( fd );\n    m_defaultSchemaL.setToolTipText( BaseMessages.getString( PKG, \"AvroInputDialog.SchemaFilename.TipText\" ) );\n\n    m_schemaFileBrowse = new Button( wSchemaComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( m_schemaFileBrowse );\n    m_schemaFileBrowse.setText( BaseMessages.getString( PKG, \"AvroInputDialog.Button.FileBrowse\" ) );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    m_schemaFileBrowse.setLayoutData( fd );\n\n    // add listener to pop up VFS browse dialog\n    m_schemaFileBrowse.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        try {\n          String[] fileFilters = new String[] { \"*\" };\n          String[] fileFilterNames =\n              new String[] { BaseMessages.getString( TextFileInputMeta.class, \"System.FileType.AllFiles\" ) };\n\n          // get current file\n          FileObject rootFile = null;\n          FileObject initialFile = null;\n          FileObject defaultInitialFile = null;\n\n          if ( m_schemaFilenameText.getText() != null ) {\n            String fname = transMeta.environmentSubstitute( m_schemaFilenameText.getText() );\n\n            if ( !Const.isEmpty( fname ) ) {\n              initialFile = KettleVFS.getInstance( transMeta.getBowl() ).getFileObject( fname );\n              rootFile = initialFile.getFileSystem().getRoot();\n            } else {\n              defaultInitialFile = KettleVFS.getInstance( transMeta.getBowl() )\n                .getFileObject( Spoon.getInstance().getLastFileOpened() );\n            }\n          } else {\n            defaultInitialFile = 
KettleVFS.getInstance( transMeta.getBowl() ).getFileObject( \"file:///c:/\" );\n          }\n\n          if ( rootFile == null ) {\n            rootFile = defaultInitialFile.getFileSystem().getRoot();\n          }\n\n          VfsFileChooserDialog fileChooserDialog = Spoon.getInstance().getVfsFileChooserDialog( rootFile, initialFile );\n          fileChooserDialog.defaultInitialFile = defaultInitialFile;\n          FileObject selectedFile =\n              fileChooserDialog.open( shell, null, HadoopSpoonPlugin.HDFS_SCHEME, true, null, fileFilters,\n                  fileFilterNames, VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE );\n\n          if ( selectedFile != null ) {\n            m_schemaFilenameText.setText( selectedFile.getURL().toString() );\n          }\n        } catch ( Exception ex ) {\n          logError( BaseMessages.getString( PKG, \"AvroInputDialog.Error.KettleFileException\" ), ex );\n          new ErrorDialog( shell, stepname, BaseMessages.getString( PKG, \"AvroInputDialog.Error.KettleFileException\" ),\n              ex );\n        }\n      }\n    } );\n\n    m_schemaFilenameText = new TextVar( transMeta, wSchemaComp, SWT.SIMPLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( m_schemaFilenameText );\n    m_schemaFilenameText.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_currentMeta.setChanged();\n        m_schemaFilenameText.setToolTipText( transMeta.environmentSubstitute( m_schemaFilenameText.getText() ) );\n      }\n    } );\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( 0, margin );\n    fd.right = new FormAttachment( m_schemaFileBrowse, -margin );\n    m_schemaFilenameText.setLayoutData( fd );\n\n    // Schema in field line\n    Label schemaInFieldL = new Label( wSchemaComp, SWT.RIGHT );\n    props.setLook( schemaInFieldL );\n    schemaInFieldL.setText( BaseMessages.getString( PKG, \"AvroInputDialog.SchemaInField.Label\" ) );\n    fd 
= new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_schemaFilenameText, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    schemaInFieldL.setLayoutData( fd );\n\n    m_schemaInFieldBut = new Button( wSchemaComp, SWT.CHECK );\n    props.setLook( m_schemaInFieldBut );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_schemaFilenameText, margin );\n    m_schemaInFieldBut.setLayoutData( fd );\n\n    m_schemaInFieldBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        m_currentMeta.setChanged();\n        checkWidgets();\n      }\n    } );\n\n    // schema is path line\n    m_schemaInFieldIsPathL = new Label( wSchemaComp, SWT.RIGHT );\n    props.setLook( m_schemaInFieldIsPathL );\n    m_schemaInFieldIsPathL.setText( BaseMessages.getString( PKG, \"AvroInputDialog.SchemaInFieldIsPath.Label\" ) );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_schemaInFieldBut, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    m_schemaInFieldIsPathL.setLayoutData( fd );\n\n    m_schemaInFieldIsPathBut = new Button( wSchemaComp, SWT.CHECK );\n    props.setLook( m_schemaInFieldIsPathBut );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_schemaInFieldBut, margin );\n    m_schemaInFieldIsPathBut.setLayoutData( fd );\n\n    // cache schemas line\n    m_cacheSchemasL = new Label( wSchemaComp, SWT.RIGHT );\n    props.setLook( m_cacheSchemasL );\n    m_cacheSchemasL.setText( BaseMessages.getString( PKG, \"AvroInputDialog.CacheSchemas.Label\" ) );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 
m_schemaInFieldIsPathBut, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    m_cacheSchemasL.setLayoutData( fd );\n\n    m_cacheSchemasBut = new Button( wSchemaComp, SWT.CHECK );\n    props.setLook( m_cacheSchemasBut );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_schemaInFieldIsPathBut, margin );\n    m_cacheSchemasBut.setLayoutData( fd );\n\n    // schema field name line\n    m_schemaFieldNameL = new Label( wSchemaComp, SWT.RIGHT );\n    props.setLook( m_schemaFieldNameL );\n    m_schemaFieldNameL.setText( BaseMessages.getString( PKG, \"AvroInputDialog.SchemaFieldName.Label\" ) );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_cacheSchemasBut, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    m_schemaFieldNameL.setLayoutData( fd );\n\n    m_schemaFieldNameText = new CCombo( wSchemaComp, SWT.BORDER );\n    props.setLook( m_schemaFieldNameText );\n    m_schemaFieldNameText.addModifyListener( new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        m_currentMeta.setChanged();\n        m_schemaFieldNameText.setToolTipText( transMeta.environmentSubstitute( m_schemaFieldNameText.getText() ) );\n      }\n    } );\n    fd = new FormData();\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( m_cacheSchemasBut, margin );\n    fd.right = new FormAttachment( 100, 0 );\n    m_schemaFieldNameText.setLayoutData( fd );\n\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    fd.bottom = new FormAttachment( 100, 0 );\n    wSchemaComp.setLayoutData( fd );\n\n    wSchemaComp.layout();\n    m_wSchemaTab.setControl( wSchemaComp );\n\n    // -- start of the fields tab\n    m_wFieldsTab = new CTabItem( m_wTabFolder, 
SWT.NONE );\n    m_wFieldsTab.setText( BaseMessages.getString( PKG, \"AvroInputDialog.FieldsTab.Title\" ) );\n    Composite wFieldsComp = new Composite( m_wTabFolder, SWT.NONE );\n    props.setLook( wFieldsComp );\n\n    FormLayout fieldsLayout = new FormLayout();\n    fieldsLayout.marginWidth = 3;\n    fieldsLayout.marginHeight = 3;\n    wFieldsComp.setLayout( fieldsLayout );\n\n    // missing fields button\n    Label missingFieldsLab = new Label( wFieldsComp, SWT.RIGHT );\n    props.setLook( missingFieldsLab );\n    missingFieldsLab.setText( BaseMessages.getString( PKG, \"AvroInputDialog.MissingFields.Label\" ) );\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, margin );\n    fd.right = new FormAttachment( middle, -margin );\n    missingFieldsLab.setLayoutData( fd );\n\n    m_missingFieldsBut = new Button( wFieldsComp, SWT.CHECK );\n    props.setLook( m_missingFieldsBut );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.left = new FormAttachment( middle, 0 );\n    fd.top = new FormAttachment( 0, margin );\n    m_missingFieldsBut.setLayoutData( fd );\n\n    // get fields button\n    m_getFields = new Button( wFieldsComp, SWT.PUSH );\n    m_getFields.setText( BaseMessages.getString( PKG, \"AvroInputDialog.Button.GetFields\" ) );\n    props.setLook( m_getFields );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.bottom = new FormAttachment( 100, 0 );\n    m_getFields.setLayoutData( fd );\n    m_getFields.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        // populate table from schema\n        getFields();\n      }\n    } );\n\n    wPreview = new Button( wFieldsComp, SWT.PUSH | SWT.CENTER );\n    wPreview.setText( BaseMessages.getString( PKG, \"System.Button.Preview\" ) );\n    props.setLook( wPreview );\n    fd = new FormData();\n    fd.right = new FormAttachment( 
m_getFields, margin );\n    fd.bottom = new FormAttachment( 100, 0 );\n    wPreview.setLayoutData( fd );\n    wPreview.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        preview();\n      }\n    } );\n\n    // fields stuff\n    final ColumnInfo[] colinf =\n        new ColumnInfo[] {\n          new ColumnInfo( BaseMessages.getString( PKG, \"AvroInputDialog.Fields.FIELD_NAME\" ),\n              ColumnInfo.COLUMN_TYPE_TEXT, false ),\n          new ColumnInfo( BaseMessages.getString( PKG, \"AvroInputDialog.Fields.FIELD_PATH\" ),\n              ColumnInfo.COLUMN_TYPE_TEXT, false ),\n          new ColumnInfo( BaseMessages.getString( PKG, \"AvroInputDialog.Fields.FIELD_TYPE\" ),\n              ColumnInfo.COLUMN_TYPE_CCOMBO, false ),\n          new ColumnInfo( BaseMessages.getString( PKG, \"AvroInputDialog.Fields.FIELD_INDEXED\" ),\n              ColumnInfo.COLUMN_TYPE_TEXT, false ), };\n\n    colinf[2].setComboValues( ValueMeta.getTypes() );\n\n    m_fieldsView = new TableView( transMeta, wFieldsComp, SWT.FULL_SELECTION | SWT.MULTI, colinf, 1, lsMod, props );\n\n    fd = new FormData();\n    fd.top = new FormAttachment( m_missingFieldsBut, margin * 2 );\n    fd.bottom = new FormAttachment( m_getFields, -margin * 2 );\n    fd.left = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    m_fieldsView.setLayoutData( fd );\n\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    fd.bottom = new FormAttachment( 100, 0 );\n    wFieldsComp.setLayoutData( fd );\n\n    wFieldsComp.layout();\n    m_wFieldsTab.setControl( wFieldsComp );\n\n    // -- start of the variables tab\n    m_wVarsTab = new CTabItem( m_wTabFolder, SWT.NONE );\n    m_wVarsTab.setText( BaseMessages.getString( PKG, \"AvroInputDialog.VarsTab.Title\" ) );\n    Composite wVarsComp = new Composite( 
m_wTabFolder, SWT.NONE );\n    props.setLook( wVarsComp );\n\n    FormLayout varsLayout = new FormLayout();\n    varsLayout.marginWidth = 3;\n    varsLayout.marginHeight = 3;\n    wVarsComp.setLayout( varsLayout );\n\n    // lookup fields (variables) tab\n    final ColumnInfo[] colinf2 =\n        new ColumnInfo[] {\n          new ColumnInfo( BaseMessages.getString( PKG, \"AvroInputDialog.Fields.LOOKUP_NAME\" ),\n              ColumnInfo.COLUMN_TYPE_TEXT, false ),\n          new ColumnInfo( BaseMessages.getString( PKG, \"AvroInputDialog.Fields.LOOKUP_VARIABLE\" ),\n              ColumnInfo.COLUMN_TYPE_TEXT, false ),\n          new ColumnInfo( BaseMessages.getString( PKG, \"AvroInputDialog.Fields.LOOKUP_DEFAULT_VALUE\" ),\n              ColumnInfo.COLUMN_TYPE_TEXT, false ), };\n\n    // get lookup fields but\n    m_getLookupFieldsBut = new Button( wVarsComp, SWT.PUSH | SWT.CENTER );\n    props.setLook( m_getLookupFieldsBut );\n    m_getLookupFieldsBut.setText( BaseMessages.getString( PKG, \"AvroInputDialog.Button.GetLookupFields\" ) );\n    fd = new FormData();\n    fd.right = new FormAttachment( 100, 0 );\n    fd.bottom = new FormAttachment( 100, -margin * 2 );\n    m_getLookupFieldsBut.setLayoutData( fd );\n\n    m_getLookupFieldsBut.addSelectionListener( new SelectionAdapter() {\n      @Override\n      public void widgetSelected( SelectionEvent e ) {\n        // get incoming field names\n        getIncomingFields();\n      }\n    } );\n\n    m_lookupView = new TableView( transMeta, wVarsComp, SWT.FULL_SELECTION | SWT.MULTI, colinf2, 1, lsMod, props );\n    fd = new FormData();\n    fd.top = new FormAttachment( 0, margin * 2 );\n    fd.bottom = new FormAttachment( m_getLookupFieldsBut, -margin * 2 );\n    fd.left = new FormAttachment( 0, 0 );\n    fd.right = new FormAttachment( 100, 0 );\n    m_lookupView.setLayoutData( fd );\n\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( 0, 0 );\n    fd.right = new 
FormAttachment( 100, 0 );\n    fd.bottom = new FormAttachment( 100, 0 );\n    wVarsComp.setLayoutData( fd );\n\n    wVarsComp.layout();\n    m_wVarsTab.setControl( wVarsComp );\n\n    fd = new FormData();\n    fd.left = new FormAttachment( 0, 0 );\n    fd.top = new FormAttachment( m_stepnameText, margin );\n    fd.right = new FormAttachment( 100, 0 );\n    fd.bottom = new FormAttachment( 100, -50 );\n    m_wTabFolder.setLayoutData( fd );\n\n    populateFieldsCombo();\n\n    // Buttons inherited from BaseStepDialog\n    wOK = new Button( shell, SWT.PUSH );\n    wOK.setText( BaseMessages.getString( PKG, \"System.Button.OK\" ) );\n\n    wCancel = new Button( shell, SWT.PUSH );\n    wCancel.setText( BaseMessages.getString( PKG, \"System.Button.Cancel\" ) );\n\n    setButtonPositions( new Button[] { wOK, wCancel }, margin, m_wTabFolder );\n\n    // Add listeners\n    lsCancel = new Listener() {\n      public void handleEvent( Event e ) {\n        cancel();\n      }\n    };\n\n    lsOK = new Listener() {\n      public void handleEvent( Event e ) {\n        ok();\n      }\n    };\n\n    wCancel.addListener( SWT.Selection, lsCancel );\n    wOK.addListener( SWT.Selection, lsOK );\n\n    lsDef = new SelectionAdapter() {\n      @Override\n      public void widgetDefaultSelected( SelectionEvent e ) {\n        ok();\n      }\n    };\n\n    m_stepnameText.addSelectionListener( lsDef );\n\n    // Detect X or ALT-F4 or something that kills this window...\n    shell.addShellListener( new ShellAdapter() {\n      @Override\n      public void shellClosed( ShellEvent e ) {\n        cancel();\n      }\n    } );\n\n    m_wTabFolder.setSelection( 0 );\n\n    setSize();\n\n    getData();\n\n    shell.open();\n    while ( !shell.isDisposed() ) {\n      if ( !display.readAndDispatch() ) {\n        display.sleep();\n      }\n    }\n\n    return stepname;\n  }\n\n  protected void cancel() {\n    stepname = null;\n    m_currentMeta.setChanged( changed );\n\n    dispose();\n  }\n\n  protected 
void ok() {\n    if ( Const.isEmpty( m_stepnameText.getText() ) ) {\n      return;\n    }\n\n    stepname = m_stepnameText.getText();\n\n    setMeta( m_currentMeta );\n\n    if ( !m_originalMeta.equals( m_currentMeta ) ) {\n      m_currentMeta.setChanged();\n      changed = m_currentMeta.hasChanged();\n    }\n\n    dispose();\n  }\n\n  protected void setMeta( AvroInputMeta avroMeta ) {\n    avroMeta.setFilename( m_avroFilenameText.getText() );\n    avroMeta.setSchemaFilename( m_schemaFilenameText.getText() );\n    avroMeta.setAvroIsJsonEncoded( m_jsonEncodedBut.getSelection() );\n    avroMeta.setAvroInField( m_sourceInFieldBut.getSelection() );\n    avroMeta.setAvroFieldName( m_avroFieldNameText.getText() );\n\n    avroMeta.setSchemaInField( m_schemaInFieldBut.getSelection() );\n    avroMeta.setSchemaInFieldIsPath( m_schemaInFieldIsPathBut.getSelection() );\n    avroMeta.setCacheSchemasInMemory( m_cacheSchemasBut.getSelection() );\n    avroMeta.setSchemaFieldName( m_schemaFieldNameText.getText() );\n    avroMeta.setDontComplainAboutMissingFields( m_missingFieldsBut.getSelection() );\n\n    int numNonEmpty = m_fieldsView.nrNonEmpty();\n    if ( numNonEmpty > 0 ) {\n      List<AvroInputMeta.AvroField> outputFields = new ArrayList<AvroInputMeta.AvroField>();\n\n      for ( int i = 0; i < numNonEmpty; i++ ) {\n        TableItem item = m_fieldsView.getNonEmpty( i );\n        AvroInputMeta.AvroField newField = new AvroInputMeta.AvroField();\n        newField.m_fieldName = item.getText( 1 ).trim();\n        newField.m_fieldPath = item.getText( 2 ).trim();\n        newField.m_kettleType = item.getText( 3 ).trim();\n\n        if ( !Const.isEmpty( item.getText( 4 ) ) ) {\n          newField.m_indexedVals = AvroInputMeta.indexedValsList( item.getText( 4 ).trim() );\n        }\n\n        outputFields.add( newField );\n      }\n      avroMeta.setAvroFields( outputFields );\n    }\n\n    numNonEmpty = m_lookupView.nrNonEmpty();\n    if ( numNonEmpty > 0 ) {\n      
List<AvroInputMeta.LookupField> varFields = new ArrayList<AvroInputMeta.LookupField>();\n\n      for ( int i = 0; i < numNonEmpty; i++ ) {\n        TableItem item = m_lookupView.getNonEmpty( i );\n        AvroInputMeta.LookupField newField = new AvroInputMeta.LookupField();\n        boolean add = false;\n\n        newField.m_fieldName = item.getText( 1 ).trim();\n        if ( !Const.isEmpty( item.getText( 2 ) ) ) {\n          newField.m_variableName = item.getText( 2 ).trim();\n          add = true;\n          if ( !Const.isEmpty( item.getText( 3 ) ) ) {\n            newField.m_defaultValue = item.getText( 3 ).trim();\n          }\n        }\n\n        if ( add ) {\n          varFields.add( newField );\n        }\n      }\n      avroMeta.setLookupFields( varFields );\n    }\n  }\n\n  protected void getFields() {\n    if ( !Const.isEmpty( m_schemaFilenameText.getText() ) ) {\n      // this schema overrides any that might be in a container file\n      String sName = m_schemaFilenameText.getText();\n      sName = transMeta.environmentSubstitute( sName );\n      try {\n        Schema s = AvroInputData.loadSchema( transMeta.getBowl(), sName );\n        List<AvroInputMeta.AvroField> schemaFields = AvroInputData.getLeafFields( s );\n\n        setTableFields( schemaFields );\n\n      } catch ( Exception ex ) {\n        logError( BaseMessages.getString( PKG, \"AvroInputDialog.Error.KettleFileException\" ) + \" \" + sName, ex );\n        new ErrorDialog( shell, stepname, BaseMessages.getString( PKG, \"AvroInputDialog.Error.KettleFileException\" )\n            + \" \" + sName, ex );\n      }\n    } else {\n      String avroFileName = m_avroFilenameText.getText();\n      avroFileName = transMeta.environmentSubstitute( avroFileName );\n      try {\n        Schema s = AvroInputData.loadSchemaFromContainer( transMeta.getBowl(), avroFileName );\n        List<AvroInputMeta.AvroField> schemaFields = AvroInputData.getLeafFields( s );\n\n        setTableFields( schemaFields );\n      
} catch ( Exception ex ) {\n        logError( BaseMessages.getString( PKG, \"AvroInput.Error.UnableToLoadSchemaFromContainerFile\" ), ex );\n        new ErrorDialog( shell, stepname, BaseMessages.getString( PKG,\n            \"AvroInput.Error.UnableToLoadSchemaFromContainerFile\", avroFileName ), ex );\n      }\n    }\n  }\n\n  protected void setTableFields( List<AvroInputMeta.AvroField> fields ) {\n    m_fieldsView.clearAll();\n    for ( AvroInputMeta.AvroField f : fields ) {\n      TableItem item = new TableItem( m_fieldsView.table, SWT.NONE );\n\n      if ( !Const.isEmpty( f.m_fieldName ) ) {\n        item.setText( 1, f.m_fieldName );\n      }\n\n      if ( !Const.isEmpty( f.m_fieldPath ) ) {\n        item.setText( 2, f.m_fieldPath );\n      }\n\n      if ( !Const.isEmpty( f.m_kettleType ) ) {\n        item.setText( 3, f.m_kettleType );\n      }\n\n      if ( f.m_indexedVals != null && f.m_indexedVals.size() > 0 ) {\n        item.setText( 4, AvroInputMeta.indexedValsList( f.m_indexedVals ) );\n      }\n    }\n\n    m_fieldsView.removeEmptyRows();\n    m_fieldsView.setRowNums();\n    m_fieldsView.optWidth( true );\n  }\n\n  protected void setVariableTableFields( List<AvroInputMeta.LookupField> fields ) {\n    m_lookupView.clearAll();\n\n    for ( AvroInputMeta.LookupField f : fields ) {\n      TableItem item = new TableItem( m_lookupView.table, SWT.NONE );\n\n      if ( !Const.isEmpty( f.m_fieldName ) ) {\n        item.setText( 1, f.m_fieldName );\n      }\n\n      if ( !Const.isEmpty( f.m_variableName ) ) {\n        item.setText( 2, f.m_variableName );\n      }\n\n      if ( !Const.isEmpty( f.m_defaultValue ) ) {\n        item.setText( 3, f.m_defaultValue );\n      }\n    }\n\n    m_lookupView.removeEmptyRows();\n    m_lookupView.setRowNums();\n    m_lookupView.optWidth( true );\n  }\n\n  protected void getData() {\n    if ( !Const.isEmpty( m_currentMeta.getFilename() ) ) {\n      m_avroFilenameText.setText( m_currentMeta.getFilename() );\n    }\n\n    if ( 
!Const.isEmpty( m_currentMeta.getSchemaFilename() ) ) {\n      m_schemaFilenameText.setText( m_currentMeta.getSchemaFilename() );\n    }\n\n    if ( !Const.isEmpty( m_currentMeta.getAvroFieldName() ) ) {\n      m_avroFieldNameText.setText( m_currentMeta.getAvroFieldName() );\n    }\n\n    m_jsonEncodedBut.setSelection( m_currentMeta.getAvroIsJsonEncoded() );\n    m_sourceInFieldBut.setSelection( m_currentMeta.getAvroInField() );\n    if ( !m_currentMeta.getAvroInField() ) {\n      m_sourceInFileBut.setSelection( true );\n    }\n\n    m_schemaInFieldBut.setSelection( m_currentMeta.getSchemaInField() );\n    m_schemaInFieldIsPathBut.setSelection( m_currentMeta.getSchemaInFieldIsPath() );\n    m_cacheSchemasBut.setSelection( m_currentMeta.getCacheSchemasInMemory() );\n    m_missingFieldsBut.setSelection( m_currentMeta.getDontComplainAboutMissingFields() );\n    if ( !Const.isEmpty( m_currentMeta.getSchemaFieldName() ) ) {\n      m_schemaFieldNameText.setText( m_currentMeta.getSchemaFieldName() );\n    }\n\n    // fields\n    if ( m_currentMeta.getAvroFields() != null && m_currentMeta.getAvroFields().size() > 0 ) {\n      setTableFields( m_currentMeta.getAvroFields() );\n    }\n\n    if ( m_currentMeta.getLookupFields() != null && m_currentMeta.getLookupFields().size() > 0 ) {\n      setVariableTableFields( m_currentMeta.getLookupFields() );\n    }\n\n    checkWidgets();\n  }\n\n  private void checkWidgets() {\n    boolean sifile = m_sourceInFileBut.getSelection();\n\n    m_avroFilenameText.setEnabled( sifile );\n    m_avroFileBrowse.setEnabled( sifile );\n\n    boolean sifield = m_sourceInFieldBut.getSelection();\n    if ( sifield ) {\n      m_sourceInFileBut.setSelection( !sifield );\n    }\n    m_avroFilenameText.setEnabled( !sifield );\n    m_avroFileBrowse.setEnabled( !sifield );\n\n    m_avroFieldNameText.setEnabled( sifield );\n    // }\n\n    wPreview.setEnabled( m_sourceInFileBut.getSelection() );\n\n    if ( sifile ) {\n      m_schemaInFieldBut.setSelection( 
false );\n    }\n    m_schemaInFieldBut.setEnabled( !sifile );\n\n    boolean sField = m_schemaInFieldBut.getSelection();\n    m_schemaInFieldIsPathL.setEnabled( sField );\n    m_schemaInFieldIsPathBut.setEnabled( sField );\n    m_cacheSchemasL.setEnabled( sField );\n    m_cacheSchemasBut.setEnabled( sField );\n    m_schemaFieldNameL.setEnabled( sField );\n    m_schemaFieldNameText.setEnabled( sField );\n    if ( sField ) {\n      m_defaultSchemaL.setText( BaseMessages.getString( PKG, \"AvroInputDialog.DefaultSchemaFilename.Label\" ) );\n    } else {\n      m_defaultSchemaL.setText( BaseMessages.getString( PKG, \"AvroInputDialog.SchemaFilename.Label\" ) );\n    }\n  }\n\n  private void preview() {\n    AvroInputMeta tempMeta = new AvroInputMeta();\n    setMeta( tempMeta );\n\n    TransMeta previewMeta =\n        TransPreviewFactory.generatePreviewTransformation( transMeta, tempMeta, m_stepnameText.getText() );\n    transMeta.getVariable( \"Internal.Transformation.Filename.Directory\" );\n    previewMeta.getVariable( \"Internal.Transformation.Filename.Directory\" );\n\n    EnterNumberDialog numberDialog =\n        new EnterNumberDialog( shell, props.getDefaultPreviewSize(), BaseMessages.getString( PKG,\n            \"CsvInputDialog.PreviewSize.DialogTitle\" ), BaseMessages.getString( PKG,\n              \"AvroInputDialog.PreviewSize.DialogMessage\" ) );\n    int previewSize = numberDialog.open();\n\n    if ( previewSize > 0 ) {\n      TransPreviewProgressDialog progressDialog =\n          new TransPreviewProgressDialog( shell, previewMeta, new String[] { m_stepnameText.getText() },\n              new int[] { previewSize } );\n      progressDialog.open();\n\n      Trans trans = progressDialog.getTrans();\n      String loggingText = progressDialog.getLoggingText();\n\n      if ( !progressDialog.isCancelled() ) {\n        if ( trans.getResult() != null && trans.getResult().getNrErrors() > 0 ) {\n          EnterTextDialog etd =\n              new EnterTextDialog( shell, 
BaseMessages.getString( PKG, \"System.Dialog.PreviewError.Title\" ),\n                  BaseMessages.getString( PKG, \"System.Dialog.PreviewError.Message\" ), loggingText, true );\n          etd.setReadOnly();\n          etd.open();\n        }\n      }\n\n      PreviewRowsDialog prd =\n          new PreviewRowsDialog( shell, transMeta, SWT.NONE, m_stepnameText.getText(), progressDialog\n              .getPreviewRowsMeta( m_stepnameText.getText() ),\n              progressDialog.getPreviewRows( m_stepnameText.getText() ), loggingText );\n      prd.open();\n    }\n  }\n\n  private void getIncomingFields() {\n    try {\n      RowMetaInterface r = transMeta.getPrevStepFields( stepname );\n      if ( r != null ) {\n        BaseStepDialog.getFieldsFromPrevious( r, m_lookupView, 1, new int[] { 1 }, null, -1, -1, null );\n      }\n    } catch ( KettleException e ) {\n      new ErrorDialog( shell, BaseMessages.getString( PKG, \"System.Dialog.GetFieldsFailed.Title\" ), BaseMessages\n          .getString( PKG, \"System.Dialog.GetFieldsFailed.Message\" ), e );\n    }\n  }\n\n  private void populateFieldsCombo() {\n    StepMeta stepMeta = transMeta.findStep( stepname );\n\n    if ( stepMeta != null ) {\n      try {\n        RowMetaInterface rowMeta = transMeta.getPrevStepFields( stepMeta );\n        if ( rowMeta != null && rowMeta.size() > 0 ) {\n          m_avroFieldNameText.removeAll();\n          m_schemaFieldNameText.removeAll();\n          for ( int i = 0; i < rowMeta.size(); i++ ) {\n            ValueMetaInterface vm = rowMeta.getValueMeta( i );\n            String fieldName = vm.getName();\n            m_avroFieldNameText.add( fieldName );\n            m_schemaFieldNameText.add( fieldName );\n          }\n        }\n      } catch ( KettleException ex ) {\n        new ErrorDialog( shell, BaseMessages.getString( PKG, \"System.Dialog.GetFieldsFailed.Title\" ), BaseMessages\n            .getString( PKG, \"System.Dialog.GetFieldsFailed.Message\" ), ex );\n      }\n    }\n  
}\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/trans/steps/avroinput/AvroInputMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.trans.steps.avroinput;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Map;\n\nimport com.fasterxml.jackson.databind.node.BooleanNode;\nimport com.fasterxml.jackson.databind.node.DoubleNode;\nimport com.fasterxml.jackson.databind.node.IntNode;\nimport com.fasterxml.jackson.databind.node.LongNode;\nimport com.fasterxml.jackson.databind.node.NumericNode;\nimport com.fasterxml.jackson.databind.node.TextNode;\nimport org.apache.avro.Schema;\nimport org.apache.avro.generic.GenericContainer;\nimport org.apache.avro.generic.GenericData;\nimport org.apache.avro.generic.GenericFixed;\nimport org.apache.avro.util.Utf8;\nimport org.pentaho.di.core.CheckResultInterface;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.annotations.Step;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleStepException;\nimport org.pentaho.di.core.exception.KettleValueException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.injection.InjectionDeep;\nimport org.pentaho.di.core.injection.InjectionSupported;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.i18n.BaseMessages;\nimport 
org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.w3c.dom.Node;\n\n/**\n * Class providing an input step for reading data from an Avro serialized file or an incoming field. Handles both\n * container files (where the schema is serialized into the file) and schemaless files. In the case of the latter (and\n * of an incoming field), the user must supply a schema in order to read objects from the file/field. In the case of the\n * former, a schema can optionally be supplied.\n *\n * Currently supports Avro records, arrays, maps, unions and primitive types. Union types are limited to two base types,\n * where one of the base types must be \"null\". 
Paths use the \"dot\" notation and \"$\" indicates the root of the object.\n * Arrays and maps are accessed via \"[]\" and differ only in that array elements are accessed via zero-based integer\n * indexes and map values are accessed by string keys.\n *\n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n * @version $Revision$\n */\n@Step( id = \"AvroInput\", image = \"ui/images/deprecated.svg\", name = \"AvroInput.Name\",\n    description = \"AvroInput.Description\",\n    categoryDescription = \"i18n:org.pentaho.di.trans.step:BaseStep.Category.Deprecated\",\n    documentationUrl = \"pdi-transformation-steps-reference-overview/avro-input\",\n    i18nPackageName = \"org.pentaho.di.trans.steps.avroinput\",\n    suggestion = \"AvroInput.SuggestedStep\" )\n@InjectionSupported( localizationPrefix = \"AvroInput.Injection.\", groups = { \"AVRO_FIELDS\", \"LOOKUP_FIELDS\" } )\npublic class AvroInputMeta extends BaseStepMeta implements StepMetaInterface {\n\n  protected static Class<?> PKG = AvroInputMeta.class;\n\n  /**\n   * Inner class encapsulating a field to provide lookup values. 
Field values (non-avro) from the incoming row stream\n   * can be substituted into the avro paths used to extract avro fields from an incoming binary/json avro field.\n   *\n   * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n   *\n   */\n  public static class LookupField {\n\n    /** The name of the field in the incoming rows to use for a lookup */\n    @Injection( name = \"FIELDNAME\", group = \"LOOKUP_FIELDS\" )\n    public String m_fieldName = \"\";\n\n    /** The name of the variable to hold this field's values */\n    @Injection( name = \"VARIABLE_NAME\", group = \"LOOKUP_FIELDS\" )\n    public String m_variableName = \"\";\n\n    /** A default value to use if the incoming field is null */\n    @Injection( name = \"DEFAULT_VALUE\", group = \"LOOKUP_FIELDS\" )\n    public String m_defaultValue = \"\";\n\n    protected String m_cleansedVariableName;\n    protected String m_resolvedFieldName;\n    protected String m_resolvedDefaultValue;\n\n    /** False if this field does not exist in the incoming row stream */\n    protected boolean m_isValid = true;\n\n    /** Index of this field in the incoming row stream */\n    protected int m_inputIndex = -1;\n\n    protected ValueMetaInterface m_fieldVM;\n\n    public boolean init( RowMetaInterface inRowMeta, VariableSpace space ) {\n\n      if ( inRowMeta == null ) {\n        m_isValid = false;\n        return false;\n      }\n\n      m_resolvedFieldName = ( space != null ) ? 
space.environmentSubstitute( m_fieldName ) : m_fieldName;\n\n      m_inputIndex = inRowMeta.indexOfValue( m_resolvedFieldName );\n      if ( m_inputIndex < 0 ) {\n        m_isValid = false;\n\n        return m_isValid;\n      }\n\n      m_fieldVM = inRowMeta.getValueMeta( m_inputIndex );\n\n      if ( !Const.isEmpty( m_variableName ) ) {\n        m_cleansedVariableName = m_variableName.replaceAll( \"\\\\.\", \"_\" );\n      } else {\n        m_isValid = false;\n        return m_isValid;\n      }\n\n      // guard against a null VariableSpace, consistent with the field name resolution above\n      m_resolvedDefaultValue = ( space != null ) ? space.environmentSubstitute( m_defaultValue ) : m_defaultValue;\n\n      return m_isValid;\n    }\n\n    public void setVariable( VariableSpace space, Object[] inRow ) {\n      if ( !m_isValid ) {\n        return;\n      }\n\n      String valueToSet = \"\";\n      try {\n        if ( m_fieldVM.isNull( inRow[m_inputIndex] ) ) {\n          if ( !Const.isEmpty( m_resolvedDefaultValue ) ) {\n            valueToSet = m_resolvedDefaultValue;\n          } else {\n            valueToSet = \"null\";\n          }\n        } else {\n          valueToSet = m_fieldVM.getString( inRow[m_inputIndex] );\n        }\n      } catch ( KettleValueException e ) {\n        valueToSet = \"null\";\n      }\n\n      space.setVariable( m_cleansedVariableName, valueToSet );\n    }\n  }\n\n  /**\n   * Inner class for encapsulating name, path and type information for a field to be extracted from an Avro file.\n   *\n   * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n   * @version $Revision$\n   */\n  public static class AvroField {\n\n    /** the name that the field will take in the outputted kettle stream */\n    @Injection( name = \"FIELD_NAME\", group = \"AVRO_FIELDS\" )\n    public String m_fieldName = \"\";\n\n    /** the path to the field in the avro file */\n    @Injection( name = \"FIELD_PATH\", group = \"AVRO_FIELDS\" )\n    public String m_fieldPath = \"\";\n\n    /** the kettle type for this field */\n    @Injection( name = \"KETTLE_TYPE\", group = \"AVRO_FIELDS\" )\n    
public String m_kettleType = \"\";\n\n    /** any indexed values (i.e. enum types in avro) */\n    @Injection( name = \"INDEXED_VALS\", group = \"AVRO_FIELDS\" )\n    public List<String> m_indexedVals;\n\n    protected int m_outputIndex; // the index that this field is in the output\n                                 // row structure\n    private ValueMeta m_tempValueMeta;\n    private List<String> m_pathParts;\n    private List<String> m_tempParts;\n\n    /**\n     * Initialize this field by parsing the path etc.\n     *\n     * @param outputIndex\n     *          the index in the output row structure for this field\n     * @throws KettleException\n     *           if a problem occurs\n     */\n    public void init( int outputIndex ) throws KettleException {\n      if ( Const.isEmpty( m_fieldPath ) ) {\n        throw new KettleException( BaseMessages.getString( PKG, \"AvroInput.Error.NoPathSet\" ) );\n      }\n      if ( m_pathParts != null ) {\n        return;\n      }\n\n      String fieldPath = AvroInputData.cleansePath( m_fieldPath );\n\n      String[] temp = fieldPath.split( \"\\\\.\" );\n      m_pathParts = new ArrayList<String>();\n      for ( String part : temp ) {\n        m_pathParts.add( part );\n      }\n\n      if ( m_pathParts.get( 0 ).equals( \"$\" ) ) {\n        m_pathParts.remove( 0 ); // root record indicator\n      } else if ( m_pathParts.get( 0 ).startsWith( \"$[\" ) ) {\n\n        // strip leading $ off of array\n        String r = m_pathParts.get( 0 ).substring( 1, m_pathParts.get( 0 ).length() );\n        m_pathParts.set( 0, r );\n      }\n\n      m_tempParts = new ArrayList<String>();\n\n      m_tempValueMeta = new ValueMeta();\n      m_tempValueMeta.setType( ValueMeta.getType( m_kettleType ) );\n      m_outputIndex = outputIndex;\n    }\n\n    /**\n     * Reset this field. 
Should be called prior to processing a new field value from the avro file\n     *\n     * @param space\n     *          environment variables (values that environment variables resolve to cannot contain \".\"s)\n     */\n    public void reset( VariableSpace space ) {\n      // first clear because there may be stuff left over from processing\n      // the previous avro object (especially if a path exited early due to\n      // non-existent map key or array index out of bounds)\n      m_tempParts.clear();\n\n      for ( String part : m_pathParts ) {\n        m_tempParts.add( space.environmentSubstitute( part ) );\n      }\n    }\n\n    /**\n     * Perform Kettle type conversions for the Avro leaf field value.\n     *\n     * @param fieldValue\n     *          the leaf value from the Avro structure\n     * @return an Object of the appropriate Kettle type\n     * @throws KettleException\n     *           if a problem occurs\n     */\n    protected Object getKettleValue( Object fieldValue ) throws KettleException {\n\n      switch ( m_tempValueMeta.getType() ) {\n        case ValueMetaInterface.TYPE_BIGNUMBER:\n          return m_tempValueMeta.getBigNumber( fieldValue );\n        case ValueMetaInterface.TYPE_BINARY:\n          return m_tempValueMeta.getBinary( fieldValue );\n        case ValueMetaInterface.TYPE_BOOLEAN:\n          return m_tempValueMeta.getBoolean( fieldValue );\n        case ValueMetaInterface.TYPE_DATE:\n          return m_tempValueMeta.getDate( fieldValue );\n        case ValueMetaInterface.TYPE_INTEGER:\n          return m_tempValueMeta.getInteger( fieldValue );\n        case ValueMetaInterface.TYPE_NUMBER:\n          return m_tempValueMeta.getNumber( fieldValue );\n        case ValueMetaInterface.TYPE_STRING:\n          return m_tempValueMeta.getString( fieldValue );\n        default:\n          return null;\n      }\n    }\n\n    /**\n     * Get the value of the Avro leaf primitive with respect to the Kettle type for this path.\n     *\n     * 
@param fieldValue\n     *          the Avro leaf value\n     * @param s\n     *          the schema for the leaf value\n     * @return the appropriate Kettle typed value\n     * @throws KettleException\n     *           if a problem occurs\n     */\n    protected Object getPrimitive( Object fieldValue, Schema s ) throws KettleException {\n\n      if ( fieldValue == null ) {\n        return null;\n      }\n\n      try {\n\n        switch ( s.getType() ) {\n          case BOOLEAN:\n            if ( fieldValue.getClass() == BooleanNode.class ) {\n              return getKettleValue( ( (BooleanNode) fieldValue ).booleanValue() );\n            }\n            return getKettleValue( fieldValue );\n          case LONG:\n            if ( fieldValue.getClass() == LongNode.class ) {\n              return getKettleValue( ( (LongNode) fieldValue ).longValue() );\n            }\n            return getKettleValue( fieldValue );\n          case DOUBLE:\n            if ( fieldValue.getClass() == DoubleNode.class ) {\n              return getKettleValue( ( (DoubleNode) fieldValue ).doubleValue() );\n            }\n            return getKettleValue( fieldValue );\n          case STRING:\n            if ( fieldValue.getClass() == TextNode.class ) {\n              // use textValue() rather than toString(), which would include the JSON quotes\n              return getKettleValue( ( (TextNode) fieldValue ).textValue() );\n            }\n            return getKettleValue( fieldValue );\n          case BYTES:\n          case ENUM:\n            return getKettleValue( fieldValue );\n          case INT:\n            if ( fieldValue.getClass() == IntNode.class ) {\n              return getKettleValue( (long) ( ( (IntNode) fieldValue ).intValue() ) );\n            }\n            return getKettleValue( (long) ( (Integer) fieldValue ) );\n          case FLOAT:\n            if ( fieldValue instanceof NumericNode ) {\n              // cast to NumericNode (not DoubleNode) so IntNode/LongNode values are not lost to a ClassCastException\n              return getKettleValue( ( (NumericNode) fieldValue ).doubleValue() );\n            }\n            return getKettleValue( (double) ( (Float) fieldValue ) );\n          case FIXED:\n            return ( (GenericFixed) fieldValue ).bytes();\n          default:\n            return null;\n        }\n\n      } catch ( ClassCastException e ) {\n        return null;\n      }\n    }\n\n    /**\n     * Processes a map at this point in the path.\n     *\n     * @param map\n     *          the map to process\n     * @param s\n     *          the current schema at this point in the path\n     * @param ignoreMissing\n     *          true if null is to be returned for user fields that don't appear in the schema\n     * @return the field value or null for out-of-bounds array indexes, non-existent map keys or unsupported avro types.\n     * @throws KettleException\n     *           if a problem occurs\n     */\n    public Object convertToKettleValue(\n        Map<Utf8, Object> map, Schema s, Schema defaultSchema, boolean ignoreMissing ) throws KettleException {\n\n      if ( map == null ) {\n        return null;\n      }\n\n      if ( m_tempParts.size() == 0 ) {\n        throw new KettleException( BaseMessages.getString( PKG, \"AvroInput.Error.MalformedPathMap\" ) );\n      }\n\n      String part = m_tempParts.remove( 0 );\n      if ( !( part.charAt( 0 ) == '[' ) ) {\n        throw new KettleException( BaseMessages.getString( PKG, \"AvroInput.Error.MalformedPathMap2\", part ) );\n      }\n\n      String key = part.substring( 1, part.indexOf( ']' ) );\n\n      if ( part.indexOf( ']' ) < part.length() - 1 ) {\n        // more dimensions to the array/map\n        part = part.substring( part.indexOf( ']' ) + 1, part.length() );\n        m_tempParts.add( 0, part );\n      }\n\n      Object value = map.get( new Utf8( key ) );\n      if ( value == null ) {\n        return null;\n      }\n\n      Schema valueType = s.getValueType();\n\n      if ( valueType.getType() == Schema.Type.UNION ) {\n        if ( value instanceof GenericContainer ) {\n          // we can ask these things for their schema (covers\n          // records, arrays, enums and fixed)\n       
   valueType = ( (GenericContainer) value ).getSchema();\n        } else {\n          // either have a map or primitive here\n          if ( value instanceof Map ) {\n            // now have to look for the schema of the map\n            Schema mapSchema = null;\n            for ( Schema ts : valueType.getTypes() ) {\n              if ( ts.getType() == Schema.Type.MAP ) {\n                mapSchema = ts;\n                break;\n              }\n            }\n            if ( mapSchema == null ) {\n              throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n                  \"AvroInput.Error.UnableToFindSchemaForUnionMap\" ) );\n            }\n            valueType = mapSchema;\n          } else {\n            if ( m_tempValueMeta.getType() != ValueMetaInterface.TYPE_STRING ) {\n              // we have a two element union, where one element is the type\n              // \"null\". So in this case we actually have just one type and can\n              // output specific values of it (instead of using String as a\n              // catch all for varying primitive types in the union)\n              valueType = AvroInputData.checkUnion( valueType );\n            } else {\n              // use the string representation of the value\n              valueType = Schema.create( Schema.Type.STRING );\n            }\n          }\n        }\n      }\n\n      // what have we got?\n      if ( valueType.getType() == Schema.Type.RECORD ) {\n        return convertToKettleValue( (GenericData.Record) value, valueType, defaultSchema, ignoreMissing );\n      } else if ( valueType.getType() == Schema.Type.ARRAY ) {\n        return convertToKettleValue( (GenericData.Array) value, valueType, defaultSchema, ignoreMissing );\n      } else if ( valueType.getType() == Schema.Type.MAP ) {\n        return convertToKettleValue( (Map<Utf8, Object>) value, valueType, defaultSchema, ignoreMissing );\n      } else {\n        // assume a primitive\n        return getPrimitive( 
value, valueType );\n      }\n    }\n\n    /**\n     * Processes an array at this point in the path.\n     *\n     * @param array\n     *          the array to process\n     * @param s\n     *          the current schema at this point in the path\n     * @param ignoreMissing\n     *          true if null is to be returned for user fields that don't appear in the schema\n     * @return the field value or null for out-of-bounds array indexes, non-existent map keys or unsupported avro types.\n     * @throws KettleException\n     *           if a problem occurs\n     */\n    public Object convertToKettleValue( GenericData.Array array, Schema s, Schema defaultSchema, boolean ignoreMissing )\n      throws KettleException {\n\n      if ( array == null ) {\n        return null;\n      }\n\n      if ( m_tempParts.size() == 0 ) {\n        throw new KettleException( BaseMessages.getString( PKG, \"AvroInput.Error.MalformedPathArray\" ) );\n      }\n\n      String part = m_tempParts.remove( 0 );\n      if ( !( part.charAt( 0 ) == '[' ) ) {\n        throw new KettleException( BaseMessages.getString( PKG, \"AvroInput.Error.MalformedPathArray2\", part ) );\n      }\n\n      String index = part.substring( 1, part.indexOf( ']' ) );\n      int arrayI = 0;\n      try {\n        arrayI = Integer.parseInt( index.trim() );\n      } catch ( NumberFormatException e ) {\n        throw new KettleException( BaseMessages.getString( PKG, \"AvroInput.Error.UnableToParseArrayIndex\", index ) );\n      }\n\n      if ( part.indexOf( ']' ) < part.length() - 1 ) {\n        // more dimensions to the array\n        part = part.substring( part.indexOf( ']' ) + 1, part.length() );\n        m_tempParts.add( 0, part );\n      }\n\n      if ( arrayI >= array.size() || arrayI < 0 ) {\n        return null;\n      }\n\n      Object element = array.get( arrayI );\n      Schema elementType = s.getElementType();\n\n      if ( element == null ) {\n        return null;\n      }\n\n      if ( elementType.getType() 
== Schema.Type.UNION ) {\n        if ( element instanceof GenericContainer ) {\n          // we can ask these things for their schema (covers\n          // records, arrays, enums and fixed)\n          elementType = ( (GenericContainer) element ).getSchema();\n        } else {\n          // either have a map or primitive here\n          if ( element instanceof Map ) {\n            // now have to look for the schema of the map\n            Schema mapSchema = null;\n            for ( Schema ts : elementType.getTypes() ) {\n              if ( ts.getType() == Schema.Type.MAP ) {\n                mapSchema = ts;\n                break;\n              }\n            }\n            if ( mapSchema == null ) {\n              throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n                  \"AvroInput.Error.UnableToFindSchemaForUnionMap\" ) );\n            }\n            elementType = mapSchema;\n          } else {\n            if ( m_tempValueMeta.getType() != ValueMetaInterface.TYPE_STRING ) {\n              // we have a two element union, where one element is the type\n              // \"null\". 
So in this case we actually have just one type and can\n              // output specific values of it (instead of using String as a\n              // catch all for varying primitive types in the union)\n              elementType = AvroInputData.checkUnion( elementType );\n            } else {\n              // use the string representation of the value\n              elementType = Schema.create( Schema.Type.STRING );\n            }\n          }\n        }\n      }\n\n      // what have we got?\n      if ( elementType.getType() == Schema.Type.RECORD ) {\n        return convertToKettleValue( (GenericData.Record) element, elementType, defaultSchema, ignoreMissing );\n      } else if ( elementType.getType() == Schema.Type.ARRAY ) {\n        return convertToKettleValue( (GenericData.Array) element, elementType, defaultSchema, ignoreMissing );\n      } else if ( elementType.getType() == Schema.Type.MAP ) {\n        return convertToKettleValue( (Map<Utf8, Object>) element, elementType, defaultSchema, ignoreMissing );\n      } else {\n        // assume a primitive (covers bytes encapsulated in FIXED type)\n        return getPrimitive( element, elementType );\n      }\n    }\n\n    /**\n     * Processes a record at this point in the path.\n     *\n     * @param record\n     *          the record to process\n     * @param s\n     *          the current schema at this point in the path\n     * @param ignoreMissing\n     *          true if null is to be returned for user fields that don't appear in the schema\n     * @return the field value or null for out-of-bounds array indexes, non-existent map keys or unsupported avro types.\n     * @throws KettleException\n     *           if a problem occurs\n     */\n    public Object convertToKettleValue( GenericData.Record record, Schema s, Schema defaultSchema, boolean ignoreMissing )\n      throws KettleException {\n\n      if ( record == null ) {\n        return null;\n      }\n\n      if ( m_tempParts.size() == 0 ) {\n        
throw new KettleException( BaseMessages.getString( PKG, \"AvroInput.Error.MalformedPathRecord\" ) );\n      }\n\n      String part = m_tempParts.remove( 0 );\n      if ( part.charAt( 0 ) == '[' ) {\n        throw new KettleException( BaseMessages.getString( PKG, \"AvroInput.Error.InvalidPath\" ) + m_tempParts );\n      }\n\n      if ( part.indexOf( '[' ) > 0 ) {\n        String arrayPart = part.substring( part.indexOf( '[' ) );\n        part = part.substring( 0, part.indexOf( '[' ) );\n\n        // put the array section back into location zero\n        m_tempParts.add( 0, arrayPart );\n      }\n\n      // part is a named field of the record\n      Schema.Field fieldS = s.getField( part );\n      if ( fieldS == null && !ignoreMissing ) {\n        throw new KettleException( BaseMessages.getString( PKG, \"AvroInput.Error.NonExistentField\", part ) );\n      }\n      Object field = record.get( part );\n\n      if ( field == null ) {\n        fieldS = defaultSchema.getField( part );\n        if ( fieldS == null || fieldS.defaultVal() == null ) {\n          return null;\n        }\n        field = fieldS.defaultVal();\n      }\n\n      Schema.Type fieldT = fieldS.schema().getType();\n      Schema fieldSchema = fieldS.schema();\n\n      if ( fieldT == Schema.Type.UNION ) {\n        if ( field instanceof GenericContainer ) {\n          // we can ask these things for their schema (covers\n          // records, arrays, enums and fixed)\n          fieldSchema = ( (GenericContainer) field ).getSchema();\n          fieldT = fieldSchema.getType();\n        } else {\n          // either have a map or primitive here\n          if ( field instanceof Map ) {\n            // now have to look for the schema of the map\n            Schema mapSchema = null;\n            for ( Schema ts : fieldSchema.getTypes() ) {\n              if ( ts.getType() == Schema.Type.MAP ) {\n                mapSchema = ts;\n                break;\n              }\n            }\n            if ( mapSchema == 
null ) {\n              throw new KettleException( BaseMessages.getString( AvroInputMeta.PKG,\n                  \"AvroInput.Error.UnableToFindSchemaForUnionMap\" ) );\n\n            }\n            fieldSchema = mapSchema;\n            fieldT = Schema.Type.MAP;\n          } else {\n            if ( m_tempValueMeta.getType() != ValueMetaInterface.TYPE_STRING ) {\n              // we have a two element union, where one element is the type\n              // \"null\". So in this case we actually have just one type and can\n              // output specific values of it (instead of using String as a\n              // catch all for varying primitive types in the union)\n              fieldSchema = AvroInputData.checkUnion( fieldSchema );\n              fieldT = fieldSchema.getType();\n            } else {\n\n              // use the string representation of the value\n              fieldSchema = Schema.create( Schema.Type.STRING );\n              fieldT = fieldSchema.getType();\n            }\n          }\n        }\n      }\n\n      // what have we got?\n      if ( fieldT == Schema.Type.RECORD ) {\n        return convertToKettleValue( (GenericData.Record) field, fieldSchema, defaultSchema, ignoreMissing );\n      } else if ( fieldT == Schema.Type.ARRAY ) {\n        return convertToKettleValue( (GenericData.Array) field, fieldSchema, defaultSchema, ignoreMissing );\n      } else if ( fieldT == Schema.Type.MAP ) {\n        return convertToKettleValue( (Map<Utf8, Object>) field, fieldSchema, defaultSchema, ignoreMissing );\n      } else {\n        // assume primitive (covers bytes encapsulated in FIXED type)\n        return getPrimitive( field, fieldSchema );\n      }\n    }\n  }\n\n  /** The avro file to read */\n  @Injection( name = \"FILENAME\" )\n  protected String m_filename = \"\";\n\n  /** The schema to use if not reading from a container file */\n  @Injection( name = \"SCHEMA_FILENAME\" )\n  protected String m_schemaFilename = \"\";\n\n  /** True if the user's avro 
file is json encoded rather than binary */\n  @Injection( name = \"IS_JSON_ENCODED\" )\n  protected boolean m_isJsonEncoded = false;\n\n  /** True if the avro to be decoded is contained in an incoming field */\n  @Injection( name = \"AVRO_INFIELD\" )\n  protected boolean m_avroInField = false;\n\n  /** Holds the source field name (if decoding from an incoming field) */\n  @Injection( name = \"AVRO_FIELDNAME\" )\n  protected String m_avroFieldName = \"\";\n\n  /**\n   * True if the schema to be used to decode an incoming Avro object is contained in an incoming field (only applies\n   * when m_avroInField == true)\n   */\n  @Injection( name = \"SCHEMA_INFIELD\" )\n  protected boolean m_schemaInField;\n\n  /**\n   * The name of the source field holding the avro schema (either the JSON schema itself or a path to the schema on\n   * disk).\n   */\n  @Injection( name = \"SCHEMA_FIELDNAME\" )\n  protected String m_schemaFieldName;\n\n  /**\n   * True if the value in the incoming schema field is actually a path to the schema on disk (rather than the actual\n   * schema itself)\n   */\n  @Injection( name = \"SCHEMA_INFIELD_IS_PATH\" )\n  protected boolean m_schemaInFieldIsPath;\n\n  /**\n   * True if schemas read from incoming fields are to be cached in memory for speed\n   */\n  @Injection( name = \"CACHE_SCHEMAS_IN_MEMORY\" )\n  protected boolean m_cacheSchemasInMemory;\n\n  /**\n   * True if null should be output if a specified field is not present in the Avro schema (otherwise an exception is\n   * raised)\n   */\n  @Injection( name = \"DONT_COMPLAIN_ABOUT_MISSING_FIELDS\" )\n  protected boolean m_dontComplainAboutMissingFields;\n\n  /** The fields to emit */\n  @InjectionDeep\n  protected List<AvroField> m_fields;\n\n  /** Incoming field values to use for lookup/substitution in avro paths */\n  @InjectionDeep\n  protected List<LookupField> m_lookups;\n\n  /**\n   * Set whether the avro to be decoded is contained in an incoming field\n   *\n   * @param a\n   *          
true if the avro to be decoded is contained in an incoming field\n   */\n  public void setAvroInField( boolean a ) {\n    m_avroInField = a;\n  }\n\n  /**\n   * Get whether the avro to be decoded is contained in an incoming field\n   *\n   * @return true if the avro to be decoded is contained in an incoming field\n   */\n  public boolean getAvroInField() {\n    return m_avroInField;\n  }\n\n  /**\n   * Set the name of the incoming field to decode avro from (if decoding from a field rather than a file)\n   *\n   * @param f\n   *          the name of the incoming field to decode from\n   */\n  public void setAvroFieldName( String f ) {\n    m_avroFieldName = f;\n  }\n\n  /**\n   * Get the name of the incoming field to decode avro from (if decoding from a field rather than a file)\n   *\n   * @return the name of the incoming field to decode from\n   */\n  public String getAvroFieldName() {\n    return m_avroFieldName;\n  }\n\n  /**\n   * Set whether the schema to use for decoding incoming Avro objects is contained in an incoming field\n   *\n   * @param s\n   *          true if the schema to use for decoding incoming objects is itself in an incoming field\n   */\n  public void setSchemaInField( boolean s ) {\n    m_schemaInField = s;\n  }\n\n  /**\n   * Get whether the schema to use for decoding incoming Avro objects is contained in an incoming field\n   *\n   * @return true if the schema to use for decoding incoming objects is itself in an incoming field\n   */\n  public boolean getSchemaInField() {\n    return m_schemaInField;\n  }\n\n  /**\n   * Set the name of the incoming field that contains the schema to use.\n   *\n   * @param fn\n   *          the name of the incoming field that holds the schema to use\n   */\n  public void setSchemaFieldName( String fn ) {\n    m_schemaFieldName = fn;\n  }\n\n  /**\n   * Get the name of the incoming field that contains the schema to use.\n   *\n   * @return the name of the incoming field that holds the schema to use\n   */\n  
public String getSchemaFieldName() {\n    return m_schemaFieldName;\n  }\n\n  /**\n   * Set whether the incoming schema field value contains a path to a schema on disk.\n   *\n   * @param p\n   *          true if the incoming schema field value is actually a path to an on disk schema file.\n   */\n  public void setSchemaInFieldIsPath( boolean p ) {\n    m_schemaInFieldIsPath = p;\n  }\n\n  /**\n   * Get whether the incoming schema field value contains a path to a schema on disk.\n   *\n   * @return true if the incoming schema field value is actually a path to an on disk schema file.\n   */\n  public boolean getSchemaInFieldIsPath() {\n    return m_schemaInFieldIsPath;\n  }\n\n  /**\n   * Set whether to cache schemas in memory when they are being supplied via an incoming field.\n   *\n   * @param c\n   *          true if schemas are to be cached in memory.\n   */\n  public void setCacheSchemasInMemory( boolean c ) {\n    m_cacheSchemasInMemory = c;\n  }\n\n  /**\n   * Get whether to cache schemas in memory when they are being supplied via an incoming field.\n   *\n   * @return true if schemas are to be cached in memory.\n   */\n  public boolean getCacheSchemasInMemory() {\n    return m_cacheSchemasInMemory;\n  }\n\n  /**\n   * Set the avro filename\n   *\n   * @param filename\n   *          the avro filename\n   */\n  public void setFilename( String filename ) {\n    m_filename = filename;\n  }\n\n  /**\n   * Get the avro filename\n   *\n   * @return the avro filename\n   */\n  public String getFilename() {\n    return m_filename;\n  }\n\n  /**\n   * Set the schema filename to use\n   *\n   * @param schemaFile\n   *          the name of the schema file to use\n   */\n  public void setSchemaFilename( String schemaFile ) {\n    m_schemaFilename = schemaFile;\n  }\n\n  /**\n   * Get the schema filename to use\n   *\n   * @return the name of the schema file to use\n   */\n  public String getSchemaFilename() {\n    return m_schemaFilename;\n  }\n\n  /**\n   * Get whether 
the avro file to read is json encoded rather than binary\n   *\n   * @return true if the file to read is json encoded\n   */\n  public boolean getAvroIsJsonEncoded() {\n    return m_isJsonEncoded;\n  }\n\n  /**\n   * Set whether the avro file to read is json encoded rather than binary\n   *\n   * @param j\n   *          true if the file to read is json encoded\n   */\n  public void setAvroIsJsonEncoded( boolean j ) {\n    m_isJsonEncoded = j;\n  }\n\n  /**\n   * Set the Avro fields that will be extracted\n   *\n   * @param fields\n   *          the Avro fields that will be extracted\n   */\n  public void setAvroFields( List<AvroField> fields ) {\n    m_fields = fields;\n  }\n\n  /**\n   * Get the Avro fields that will be extracted\n   *\n   * @return the Avro fields that will be extracted\n   */\n  public List<AvroField> getAvroFields() {\n    return m_fields;\n  }\n\n  /**\n   * Get the incoming field values that will be used for lookup/substitution in the avro paths\n   *\n   * @return the lookup fields\n   */\n  public List<LookupField> getLookupFields() {\n    return m_lookups;\n  }\n\n  /**\n   * Set the incoming field values that will be used for lookup/substitution in the avro paths\n   *\n   * @param lookups\n   *          the lookup fields\n   */\n  public void setLookupFields( List<LookupField> lookups ) {\n    m_lookups = lookups;\n  }\n\n  /**\n   * Set whether null is to be output if a user-supplied field path does not exist in the Avro schema being used.\n   * Otherwise, an exception is raised\n   *\n   * @param c\n   *          true to ignore missing fields\n   */\n  public void setDontComplainAboutMissingFields( boolean c ) {\n    m_dontComplainAboutMissingFields = c;\n  }\n\n  /**\n   * Get whether null is to be output if a user-supplied field path does not exist in the Avro schema being used.\n   * Otherwise, an exception is raised\n   *\n   * @return true to ignore missing fields\n   */\n  public boolean getDontComplainAboutMissingFields() {\n    
return m_dontComplainAboutMissingFields;\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.trans.step.BaseStepMeta#getFields(org.pentaho.di.core.row .RowMetaInterface, java.lang.String,\n   * org.pentaho.di.core.row.RowMetaInterface[], org.pentaho.di.trans.step.StepMeta,\n   * org.pentaho.di.core.variables.VariableSpace)\n   */\n  @Override\n  public void getFields( Bowl bowl, RowMetaInterface rowMeta, String origin, RowMetaInterface[] info, StepMeta nextStep,\n      VariableSpace space ) throws KettleStepException {\n\n    List<AvroField> fieldsToOutput = null;\n\n    if ( m_fields != null && m_fields.size() > 0 ) {\n      // we have some stored field info - use this\n      fieldsToOutput = m_fields;\n    } else {\n      // outputting all fields from either supplied schema or schema embedded\n      // in a container file\n\n      if ( !Const.isEmpty( getSchemaFilename() ) ) {\n        String fn = space.environmentSubstitute( m_schemaFilename );\n\n        try {\n          Schema s = AvroInputData.loadSchema( bowl, fn );\n          fieldsToOutput = AvroInputData.getLeafFields( s );\n        } catch ( KettleException e ) {\n          throw new KettleStepException( BaseMessages.getString( PKG, \"AvroInput.Error.UnableToLoadSchema\", fn ), e );\n        }\n      } else {\n\n        if ( m_avroInField ) {\n          throw new KettleStepException( BaseMessages.getString( PKG, \"AvroInput.Error.NoSchemaSupplied\" ) );\n        }\n        // assume a container file and grab from there...\n        String avroFilename = m_filename;\n        avroFilename = space.environmentSubstitute( avroFilename );\n        try {\n          Schema s = AvroInputData.loadSchemaFromContainer( bowl, avroFilename );\n          fieldsToOutput = AvroInputData.getLeafFields( s );\n        } catch ( KettleException e ) {\n          throw new KettleStepException( BaseMessages.getString( PKG,\n              \"AvroInput.Error.UnableToLoadSchemaFromContainerFile\", avroFilename ), e );\n    
    }\n      }\n    }\n\n    for ( AvroField f : fieldsToOutput ) {\n      ValueMetaInterface vm = new ValueMeta();\n      vm.setName( f.m_fieldName );\n      vm.setOrigin( origin );\n      vm.setType( ValueMeta.getType( f.m_kettleType ) );\n      if ( f.m_indexedVals != null ) {\n        vm.setIndex( f.m_indexedVals.toArray() ); // indexed values\n      }\n      rowMeta.addValueMeta( vm );\n    }\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.trans.step.StepMetaInterface#check(java.util.List, org.pentaho.di.trans.TransMeta,\n   * org.pentaho.di.trans.step.StepMeta, org.pentaho.di.core.row.RowMetaInterface, java.lang.String[],\n   * java.lang.String[], org.pentaho.di.core.row.RowMetaInterface)\n   */\n  public void check( List<CheckResultInterface> remarks, TransMeta transMeta, StepMeta stepMeta, RowMetaInterface prev,\n      String[] input, String[] output, RowMetaInterface info ) {\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.trans.step.StepMetaInterface#getStep(org.pentaho.di.trans .step.StepMeta,\n   * org.pentaho.di.trans.step.StepDataInterface, int, org.pentaho.di.trans.TransMeta, org.pentaho.di.trans.Trans)\n   */\n  public StepInterface getStep( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr,\n      TransMeta transMeta, Trans trans ) {\n\n    return new AvroInput( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.trans.step.StepMetaInterface#getStepData()\n   */\n  public StepDataInterface getStepData() {\n    return new AvroInputData();\n  }\n\n  /**\n   * Helper function that takes a list of indexed values and returns them as a String in comma-separated form.\n   *\n   * @param indexedVals\n   *          a list of indexed values\n   * @return the list as a String in comma-separated form\n   */\n  protected static String indexedValsList( List<String> indexedVals ) {\n    StringBuffer temp = new StringBuffer();\n\n    for ( int i = 
0; i < indexedVals.size(); i++ ) {\n      temp.append( indexedVals.get( i ) );\n      if ( i < indexedVals.size() - 1 ) {\n        temp.append( \",\" );\n      }\n    }\n\n    return temp.toString();\n  }\n\n  /**\n   * Helper function that takes a comma-separated list in a String and returns a list.\n   *\n   * @param indexedVals\n   *          the String containing the list\n   * @return a List containing the values\n   */\n  protected static List<String> indexedValsList( String indexedVals ) {\n\n    String[] parts = indexedVals.split( \",\" );\n    List<String> list = new ArrayList<String>();\n    for ( String s : parts ) {\n      list.add( s.trim() );\n    }\n\n    return list;\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.trans.step.BaseStepMeta#getXML()\n   */\n  @Override\n  public String getXML() {\n    StringBuffer retval = new StringBuffer();\n\n    if ( !Const.isEmpty( m_filename ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"avro_filename\", m_filename ) );\n    }\n\n    if ( !Const.isEmpty( m_schemaFilename ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"schema_filename\", m_schemaFilename ) );\n    }\n\n    retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"json_encoded\", m_isJsonEncoded ) );\n\n    retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"avro_in_field\", m_avroInField ) );\n\n    if ( !Const.isEmpty( m_avroFieldName ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"avro_field_name\", m_avroFieldName ) );\n    }\n\n    retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"schema_in_field\", m_schemaInField ) );\n\n    if ( !Const.isEmpty( m_schemaFieldName ) ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"schema_field_name\", m_schemaFieldName ) );\n    }\n\n    retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"schema_in_field_is_path\", m_schemaInFieldIsPath ) );\n\n 
   retval.append( \"\\n    \" ).append( XMLHandler.addTagValue( \"cache_schemas\", m_cacheSchemasInMemory ) );\n\n    retval.append( \"\\n    \" ).append(\n        XMLHandler.addTagValue( \"ignore_missing_fields\", m_dontComplainAboutMissingFields ) );\n\n    if ( m_fields != null && m_fields.size() > 0 ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.openTag( \"avro_fields\" ) );\n\n      for ( AvroField f : m_fields ) {\n        retval.append( \"\\n      \" ).append( XMLHandler.openTag( \"avro_field\" ) );\n\n        retval.append( \"\\n        \" ).append( XMLHandler.addTagValue( \"field_name\", f.m_fieldName ) );\n        retval.append( \"\\n        \" ).append( XMLHandler.addTagValue( \"field_path\", f.m_fieldPath ) );\n        retval.append( \"\\n        \" ).append( XMLHandler.addTagValue( \"field_type\", f.m_kettleType ) );\n        if ( f.m_indexedVals != null && f.m_indexedVals.size() > 0 ) {\n          retval.append( \"\\n        \" ).append(\n              XMLHandler.addTagValue( \"indexed_vals\", indexedValsList( f.m_indexedVals ) ) );\n        }\n        retval.append( \"\\n      \" ).append( XMLHandler.closeTag( \"avro_field\" ) );\n      }\n\n      retval.append( \"\\n    \" ).append( XMLHandler.closeTag( \"avro_fields\" ) );\n    }\n\n    if ( m_lookups != null && m_lookups.size() > 0 ) {\n      retval.append( \"\\n    \" ).append( XMLHandler.openTag( \"lookup_fields\" ) );\n\n      for ( LookupField f : m_lookups ) {\n        retval.append( \"\\n      \" ).append( XMLHandler.openTag( \"lookup_field\" ) );\n\n        retval.append( \"\\n        \" ).append( XMLHandler.addTagValue( \"lookup_field_name\", f.m_fieldName ) );\n        retval.append( \"\\n        \" ).append( XMLHandler.addTagValue( \"variable_name\", f.m_variableName ) );\n        retval.append( \"\\n        \" ).append( XMLHandler.addTagValue( \"default_value\", f.m_defaultValue ) );\n\n        retval.append( \"\\n      \" ).append( XMLHandler.closeTag( \"lookup_field\" ) 
);\n      }\n\n      retval.append( \"\\n    \" ).append( XMLHandler.closeTag( \"lookup_fields\" ) );\n    }\n\n    return retval.toString();\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.trans.step.StepMetaInterface#loadXML(org.w3c.dom.Node, java.util.List, java.util.Map)\n   */\n  @Override\n  public void loadXML( Node stepnode, List<DatabaseMeta> databases, IMetaStore metaStore )\n    throws KettleXMLException {\n    m_filename = XMLHandler.getTagValue( stepnode, \"avro_filename\" );\n    m_schemaFilename = XMLHandler.getTagValue( stepnode, \"schema_filename\" );\n\n    String jsonEnc = XMLHandler.getTagValue( stepnode, \"json_encoded\" );\n    if ( !Const.isEmpty( jsonEnc ) ) {\n      m_isJsonEncoded = jsonEnc.equalsIgnoreCase( \"Y\" );\n    }\n\n    String avroInField = XMLHandler.getTagValue( stepnode, \"avro_in_field\" );\n    if ( !Const.isEmpty( avroInField ) ) {\n      m_avroInField = avroInField.equalsIgnoreCase( \"Y\" );\n    }\n    m_avroFieldName = XMLHandler.getTagValue( stepnode, \"avro_field_name\" );\n\n    String schemaInField = XMLHandler.getTagValue( stepnode, \"schema_in_field\" );\n    if ( !Const.isEmpty( schemaInField ) ) {\n      m_schemaInField = schemaInField.equalsIgnoreCase( \"Y\" );\n    }\n    m_schemaFieldName = XMLHandler.getTagValue( stepnode, \"schema_field_name\" );\n\n    String schemaInFieldIsPath = XMLHandler.getTagValue( stepnode, \"schema_in_field_is_path\" );\n    if ( !Const.isEmpty( schemaInFieldIsPath ) ) {\n      m_schemaInFieldIsPath = schemaInFieldIsPath.equalsIgnoreCase( \"Y\" );\n    }\n\n    String cacheSchemas = XMLHandler.getTagValue( stepnode, \"cache_schemas\" );\n    if ( !Const.isEmpty( cacheSchemas ) ) {\n      m_cacheSchemasInMemory = cacheSchemas.equalsIgnoreCase( \"Y\" );\n    }\n\n    String ignoreMissing = XMLHandler.getTagValue( stepnode, \"ignore_missing_fields\" );\n    if ( !Const.isEmpty( ignoreMissing ) ) {\n      m_dontComplainAboutMissingFields = 
ignoreMissing.equalsIgnoreCase( \"Y\" );\n    }\n\n    Node fields = XMLHandler.getSubNode( stepnode, \"avro_fields\" );\n    if ( fields != null && XMLHandler.countNodes( fields, \"avro_field\" ) > 0 ) {\n      int nrfields = XMLHandler.countNodes( fields, \"avro_field\" );\n\n      m_fields = new ArrayList<AvroField>();\n      for ( int i = 0; i < nrfields; i++ ) {\n        Node fieldNode = XMLHandler.getSubNodeByNr( fields, \"avro_field\", i );\n\n        AvroField newField = new AvroField();\n        newField.m_fieldName = XMLHandler.getTagValue( fieldNode, \"field_name\" );\n        newField.m_fieldPath = XMLHandler.getTagValue( fieldNode, \"field_path\" );\n        newField.m_kettleType = XMLHandler.getTagValue( fieldNode, \"field_type\" );\n        String indexedVals = XMLHandler.getTagValue( fieldNode, \"indexed_vals\" );\n        if ( indexedVals != null && indexedVals.length() > 0 ) {\n          newField.m_indexedVals = indexedValsList( indexedVals );\n        }\n\n        m_fields.add( newField );\n      }\n    }\n\n    Node lFields = XMLHandler.getSubNode( stepnode, \"lookup_fields\" );\n    if ( lFields != null && XMLHandler.countNodes( lFields, \"lookup_field\" ) > 0 ) {\n      int nrfields = XMLHandler.countNodes( lFields, \"lookup_field\" );\n\n      m_lookups = new ArrayList<LookupField>();\n\n      for ( int i = 0; i < nrfields; i++ ) {\n        Node fieldNode = XMLHandler.getSubNodeByNr( lFields, \"lookup_field\", i );\n\n        LookupField newField = new LookupField();\n        newField.m_fieldName = XMLHandler.getTagValue( fieldNode, \"lookup_field_name\" );\n        newField.m_variableName = XMLHandler.getTagValue( fieldNode, \"variable_name\" );\n        newField.m_defaultValue = XMLHandler.getTagValue( fieldNode, \"default_value\" );\n\n        m_lookups.add( newField );\n      }\n    }\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.trans.step.StepMetaInterface#readRep(org.pentaho.di.repository .Repository,\n   * 
org.pentaho.di.repository.ObjectId, java.util.List, java.util.Map)\n   */\n  @Override\n  public void readRep( Repository rep, IMetaStore metaStore, ObjectId id_step, List<DatabaseMeta> databases )\n    throws KettleException {\n\n    m_filename = rep.getStepAttributeString( id_step, 0, \"avro_filename\" );\n    m_schemaFilename = rep.getStepAttributeString( id_step, 0, \"schema_filename\" );\n\n    m_isJsonEncoded = rep.getStepAttributeBoolean( id_step, 0, \"json_encoded\" );\n\n    m_avroInField = rep.getStepAttributeBoolean( id_step, 0, \"avro_in_field\" );\n    m_avroFieldName = rep.getStepAttributeString( id_step, 0, \"avro_field_name\" );\n\n    m_schemaInField = rep.getStepAttributeBoolean( id_step, 0, \"schema_in_field\" );\n    m_schemaFieldName = rep.getStepAttributeString( id_step, 0, \"schema_field_name\" );\n    m_schemaInFieldIsPath = rep.getStepAttributeBoolean( id_step, 0, \"schema_in_field_is_path\" );\n    m_cacheSchemasInMemory = rep.getStepAttributeBoolean( id_step, 0, \"cache_schemas\" );\n    m_dontComplainAboutMissingFields = rep.getStepAttributeBoolean( id_step, 0, \"ignore_missing_fields\" );\n\n    int nrfields = rep.countNrStepAttributes( id_step, \"field_name\" );\n    if ( nrfields > 0 ) {\n      m_fields = new ArrayList<AvroField>();\n\n      for ( int i = 0; i < nrfields; i++ ) {\n        AvroField newField = new AvroField();\n\n        newField.m_fieldName = rep.getStepAttributeString( id_step, i, \"field_name\" );\n        newField.m_fieldPath = rep.getStepAttributeString( id_step, i, \"field_path\" );\n        newField.m_kettleType = rep.getStepAttributeString( id_step, i, \"field_type\" );\n        String indexedVals = rep.getStepAttributeString( id_step, i, \"indexed_vals\" );\n        if ( indexedVals != null && indexedVals.length() > 0 ) {\n          newField.m_indexedVals = indexedValsList( indexedVals );\n        }\n\n        m_fields.add( newField );\n      }\n    }\n\n    nrfields = rep.countNrStepAttributes( id_step, 
\"lookup_field_name\" );\n    if ( nrfields > 0 ) {\n      m_lookups = new ArrayList<LookupField>();\n\n      for ( int i = 0; i < nrfields; i++ ) {\n        LookupField newField = new LookupField();\n\n        newField.m_fieldName = rep.getStepAttributeString( id_step, i, \"lookup_field_name\" );\n        newField.m_variableName = rep.getStepAttributeString( id_step, i, \"variable_name\" );\n        newField.m_defaultValue = rep.getStepAttributeString( id_step, i, \"default_value\" );\n\n        m_lookups.add( newField );\n      }\n    }\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.trans.step.StepMetaInterface#saveRep(org.pentaho.di.repository .Repository,\n   * org.pentaho.di.repository.ObjectId, org.pentaho.di.repository.ObjectId)\n   */\n  @Override\n  public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_transformation, ObjectId id_step ) throws KettleException {\n\n    if ( !Const.isEmpty( m_filename ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"avro_filename\", m_filename );\n    }\n    if ( !Const.isEmpty( m_schemaFilename ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"schema_filename\", m_schemaFilename );\n    }\n\n    rep.saveStepAttribute( id_transformation, id_step, 0, \"json_encoded\", m_isJsonEncoded );\n\n    rep.saveStepAttribute( id_transformation, id_step, 0, \"avro_in_field\", m_avroInField );\n    if ( !Const.isEmpty( m_avroFieldName ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"avro_field_name\", m_avroFieldName );\n    }\n\n    rep.saveStepAttribute( id_transformation, id_step, 0, \"schema_in_field\", m_schemaInField );\n    if ( !Const.isEmpty( m_schemaFieldName ) ) {\n      rep.saveStepAttribute( id_transformation, id_step, 0, \"schema_field_name\", m_schemaFieldName );\n    }\n    rep.saveStepAttribute( id_transformation, id_step, 0, \"schema_in_field_is_path\", m_schemaInFieldIsPath );\n    rep.saveStepAttribute( id_transformation, 
id_step, 0, \"cache_schemas\", m_cacheSchemasInMemory );\n    rep.saveStepAttribute( id_transformation, id_step, 0, \"ignore_missing_fields\", m_dontComplainAboutMissingFields );\n\n    if ( m_fields != null && m_fields.size() > 0 ) {\n      for ( int i = 0; i < m_fields.size(); i++ ) {\n        AvroField f = m_fields.get( i );\n\n        rep.saveStepAttribute( id_transformation, id_step, i, \"field_name\", f.m_fieldName );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"field_path\", f.m_fieldPath );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"field_type\", f.m_kettleType );\n        if ( f.m_indexedVals != null && f.m_indexedVals.size() > 0 ) {\n          String indexedVals = indexedValsList( f.m_indexedVals );\n\n          rep.saveStepAttribute( id_transformation, id_step, i, \"indexed_vals\", indexedVals );\n        }\n      }\n    }\n\n    if ( m_lookups != null && m_lookups.size() > 0 ) {\n      for ( int i = 0; i < m_lookups.size(); i++ ) {\n        LookupField f = m_lookups.get( i );\n\n        rep.saveStepAttribute( id_transformation, id_step, i, \"lookup_field_name\", f.m_fieldName );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"variable_name\", f.m_variableName );\n        rep.saveStepAttribute( id_transformation, id_step, i, \"default_value\", f.m_defaultValue );\n      }\n    }\n  }\n\n  public void setDefault() {\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.trans.step.BaseStepMeta#getDialogClassName()\n   */\n  @Override\n  public String getDialogClassName() {\n    return AvroInputDialog.class.getCanonicalName();\n  }\n\n  /*\n   * (non-Javadoc)\n   *\n   * @see org.pentaho.di.trans.step.BaseStepMeta#supportsErrorHandling()\n   */\n  @Override\n  public boolean supportsErrorHandling() {\n    return true;\n  }\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/trans/steps/couchdbinput/CouchDbInput.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.trans.steps.couchdbinput;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.apache.commons.lang.StringUtils;\nimport org.apache.http.HttpHost;\nimport org.apache.http.HttpResponse;\nimport org.apache.http.client.AuthCache;\nimport org.apache.http.client.HttpClient;\nimport org.apache.http.client.methods.HttpGet;\nimport org.apache.http.client.protocol.HttpClientContext;\nimport org.apache.http.impl.auth.BasicScheme;\nimport org.apache.http.impl.client.BasicAuthCache;\nimport org.pentaho.di.cluster.SlaveConnectionManager;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleStepException;\nimport org.pentaho.di.core.row.RowDataUtil;\nimport org.pentaho.di.core.row.RowMeta;\nimport org.pentaho.di.core.util.HttpClientManager;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStep;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\n\nimport java.io.BufferedInputStream;\nimport java.io.IOException;\n\npublic class CouchDbInput extends BaseStep implements StepInterface {\n  private static Class<?> PKG = CouchDbInputMeta.class; // for i18n purposes, needed by Translator2!! 
$NON-NLS-1$\n\n  private final HttpClientFactory httpClientFactory = new HttpClientFactory();\n  private final HttpClientManager httpClientManager = createHttpClientManager();\n\n  private final GetMethodFactory getMethodFactory;\n\n  private CouchDbInputMeta meta;\n  private CouchDbInputData data;\n\n  public CouchDbInput( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta,\n                       Trans trans ) {\n    super( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n    this.getMethodFactory = new GetMethodFactory();\n  }\n\n  @Deprecated\n  public CouchDbInput( StepMeta stepMeta, StepDataInterface stepDataInterface, int copyNr, TransMeta transMeta,\n                       Trans trans, HttpClientFactory httpClientFactory,\n                       GetMethodFactory getMethodFactory ) {\n    super( stepMeta, stepDataInterface, copyNr, transMeta, trans );\n    this.getMethodFactory = getMethodFactory;\n  }\n\n  public static String buildUrl( String hostname, int port, String db, String design, String view ) {\n    String url = \"http://\" + hostname;\n    if ( port >= 0 ) {\n      url += \":\" + port;\n    }\n    url += \"/\" + db;\n    url += \"/_design/\" + design;\n    url += \"/_view/\" + view;\n    return url;\n  }\n\n  public boolean processRow( StepMetaInterface smi, StepDataInterface sdi ) throws KettleException {\n    try {\n      if ( first ) {\n        first = false;\n\n        data.outputRowMeta = new RowMeta();\n        meta.getFields( getTransMeta().getBowl(), data.outputRowMeta, getStepname(), null, null, this, repository,\n                        metaStore );\n\n        // Skip over first introduction row containing the number of results...\n        //\n        // Example: {\"total_rows\":3,\"offset\":0,\"rows\":[\n        //\n        data.buffer = new StringBuilder( 1000 );\n        data.open = 0;\n        boolean cont = true;\n        int c = data.bufferedInputStream.read();\n        while ( c >= 0 
&& cont && !isStopped() ) {\n          data.buffer.append( (char) c );\n\n          switch ( (char) c ) {\n            case '{':\n              data.open++;\n\n              // Second JSON nested block means: another row of data...\n              if ( data.open == 2 ) {\n                logBasic( \"Read header: >>\" + data.buffer.substring( 0, data.buffer.length() - 1 ) + \"<<\" );\n                data.buffer.delete( 0, data.buffer.length() - 1 );\n                cont = false; // Stop the while loop.\n              }\n\n              break;\n            case '}':\n              data.open--;\n              break;\n\n            case '\"':\n              // skip until the next \"\n              //\n              int prev = c;\n              c = data.bufferedInputStream.read();\n              while ( c != '\"' && prev != '\\\\' && c >= 0 ) {\n                data.buffer.append( (char) c );\n                prev = c;\n                c = data.bufferedInputStream.read();\n              }\n          }\n\n          if ( cont ) {\n            c = data.bufferedInputStream.read();\n          }\n        }\n\n        if ( c < 0 ) {\n          setOutputDone();\n          return false;\n        }\n      }\n\n      // read one JSON block from the data until no data is left on the input stream\n      //\n      boolean cont = true;\n      int c = data.bufferedInputStream.read();\n      while ( c >= 0 && cont && !isStopped() ) {\n        data.buffer.append( (char) c );\n\n        switch ( (char) c ) {\n          case '{':\n            data.open++;\n\n            // Second JSON nested block means: another row of data...\n            if ( data.open == 2 ) {\n\n              sendBufferRow( false );\n\n              cont = false; // Stop the while loop.\n            }\n\n            break;\n          case '}':\n            data.open--;\n            break;\n        }\n\n        if ( cont ) {\n          c = data.bufferedInputStream.read();\n        }\n      }\n\n      if ( c < 0 ) {\n   
     if ( data.buffer.length() > 0 ) {\n          sendBufferRow( true );\n        }\n        setOutputDone();\n        return false;\n      }\n\n      return true;\n    } catch ( IOException e ) {\n      throw new KettleException( \"Unable to read from the CouchDB REST web service\", e );\n    }\n  }\n\n  private void sendBufferRow( boolean lastRow ) throws KettleStepException {\n\n    int pos = data.buffer.length() - 2;\n\n    if ( lastRow ) {\n      // Get rid of any ]} at the end of the last row.\n      //\n      pos = removeTrailingSpaces( data.buffer, pos );\n      pos = removeTrailingCharacter( data.buffer, pos, '}' );\n      pos = removeTrailingSpaces( data.buffer, pos );\n      pos = removeTrailingCharacter( data.buffer, pos, ']' );\n    }\n\n    pos = removeTrailingSpaces( data.buffer, pos );\n    pos = removeTrailingCharacter( data.buffer, pos, ',' );\n\n    String json = data.buffer.substring( 0, pos + 1 );\n    data.buffer.delete( 0, data.buffer.length() - 1 );\n\n    if ( log.isDebug() ) {\n      logDebug( \"Read row: \" + json );\n    }\n    Object[] row = RowDataUtil.allocateRowData( data.outputRowMeta.size() );\n    int index = 0;\n    row[ index++ ] = json;\n\n    // putRow will send the row on to the default output hop.\n    //\n    putRow( data.outputRowMeta, row );\n  }\n\n  private int removeTrailingCharacter( StringBuilder buffer, int pos, char c ) {\n    if ( data.buffer.charAt( pos ) == c ) {\n      pos--;\n    }\n    return pos;\n  }\n\n  private int removeTrailingSpaces( StringBuilder buffer, int pos ) {\n    while ( pos >= 0 && Const.isSpace( data.buffer.charAt( pos ) ) ) {\n      pos--;\n    }\n    return pos;\n  }\n\n  public boolean init( StepMetaInterface stepMetaInterface, StepDataInterface stepDataInterface ) {\n    if ( super.init( stepMetaInterface, stepDataInterface ) ) {\n      meta = (CouchDbInputMeta) stepMetaInterface;\n      data = (CouchDbInputData) stepDataInterface;\n\n      String hostname = environmentSubstitute( 
meta.getHostname() );\n      int port = Const.toInt( environmentSubstitute( meta.getPort() ), 5984 );\n      String db = environmentSubstitute( meta.getDbName() );\n      String design = environmentSubstitute( meta.getDesignDocument() );\n      String view = environmentSubstitute( meta.getViewName() );\n\n      if ( StringUtils.isEmpty( design ) ) {\n        log.logError( \"Please provide a design document to use\" );\n        return false;\n      }\n\n      if ( StringUtils.isEmpty( view ) ) {\n        log.logError( \"Please provide a view name to look at\" );\n        return false;\n      }\n\n      String realUser = environmentSubstitute( meta.getAuthenticationUser() );\n      String realPass =\n        Encr.decryptPasswordOptionallyEncrypted( environmentSubstitute( meta.getAuthenticationPassword() ) );\n\n      String url = buildUrl( hostname, port, db, design, view );\n\n      logBasic( \"Querying CouchDB view on URL: \" + url );\n\n      try {\n        HttpClient client = createHttpClient( realUser, realPass );\n\n        HttpGet method = getMethodFactory.create( url );\n\n        // Execute request\n        data.inputStream = null;\n        data.bufferedInputStream = null;\n\n        //Client Preemptive Basic Authentication\n        HttpClientContext context = null;\n        if ( StringUtils.isNotBlank( hostname ) ) {\n          context = getHttpClientContext( hostname, port );\n        }\n\n        HttpResponse httpResponse =\n          context != null ? 
client.execute( method, context ) : client.execute( method );\n        int result = httpResponse.getStatusLine().getStatusCode();\n\n        // the response\n        data.inputStream = httpResponse.getEntity().getContent();\n        data.bufferedInputStream = new BufferedInputStream( data.inputStream, 1000 );\n\n        if ( result < 200 || result >= 300 ) {\n          StringBuilder err = new StringBuilder();\n          int c;\n          while ( ( c = data.bufferedInputStream.read() ) >= 0 ) {\n            err.append( (char) c );\n          }\n          logError( \"Web request returned code \" + result + \" : \" + err.toString() );\n          return false;\n        }\n\n        data.counter = 0;\n\n        return true;\n      } catch ( Exception e ) {\n        logError( BaseMessages.getString( PKG, \"CouchDbInput.ErrorConnectingToCouchDb.Exception\", hostname, \"\" + port,\n          db, view ), e );\n        return false;\n      }\n    }\n    return false;\n  }\n\n  @Override\n  public void dispose( StepMetaInterface smi, StepDataInterface sdi ) {\n\n    if ( data.bufferedInputStream != null ) {\n      try {\n        data.bufferedInputStream.close();\n      } catch ( Exception e ) {\n        setErrors( 1 );\n        logError( \"Error closing data stream\", e );\n      }\n    }\n\n    super.dispose( smi, sdi );\n  }\n\n  @Deprecated\n  static class HttpClientFactory {\n    public HttpClient createHttpClient() {\n      return SlaveConnectionManager.getInstance().createHttpClient();\n    }\n  }\n\n  @VisibleForTesting\n  HttpClient createHttpClient( String user, String password ) {\n    HttpClientManager.HttpClientBuilderFacade httpClientBuilder = httpClientManager.createBuilder();\n    // client.setTimeout(10000);\n    // client.setConnectionTimeout(10000);\n\n    if ( StringUtils.isNotBlank( user ) ) {\n      httpClientBuilder.setCredentials( user, password );\n    }\n    return httpClientBuilder.build();\n  }\n\n  static class GetMethodFactory {\n    public 
HttpGet create( String url ) {\n      return new HttpGet( url );\n    }\n  }\n\n  @VisibleForTesting\n  HttpClientManager createHttpClientManager() {\n    return HttpClientManager.getInstance();\n  }\n\n  @VisibleForTesting\n  HttpClientContext getHttpClientContext( String hostname, int port ) {\n    HttpClientContext context;\n    HttpHost target = new HttpHost( hostname, port, \"http\" );\n    // Create AuthCache instance\n    AuthCache authCache = new BasicAuthCache();\n    // Generate BASIC scheme object and add it to the local\n    // auth cache\n    BasicScheme basicAuth = new BasicScheme();\n    authCache.put( target, basicAuth );\n\n    // Add AuthCache to the execution context\n    context = HttpClientContext.create();\n    context.setAuthCache( authCache );\n    return context;\n  }\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/trans/steps/couchdbinput/CouchDbInputData.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.trans.steps.couchdbinput;\n\nimport java.io.BufferedInputStream;\nimport java.io.InputStream;\n\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.trans.step.BaseStepData;\nimport org.pentaho.di.trans.step.StepDataInterface;\n\n/**\n * @author Matt\n * @since 24-jan-2005\n */\npublic class CouchDbInputData extends BaseStepData implements StepDataInterface {\n  public RowMetaInterface outputRowMeta;\n\n  public int counter;\n\n  public InputStream inputStream;\n  public BufferedInputStream bufferedInputStream;\n\n  public StringBuilder buffer;\n\n  public int open;\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/trans/steps/couchdbinput/CouchDbInputMeta.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.trans.steps.couchdbinput;\n\nimport org.pentaho.di.core.annotations.Step;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleStepException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.injection.Injection;\nimport org.pentaho.di.core.injection.InjectionSupported;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDataInterface;\nimport org.pentaho.di.trans.step.StepInterface;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.step.StepMetaInterface;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.w3c.dom.Node;\n\nimport java.util.List;\n\n@Step( id = \"CouchDbInput\", image = \"couchdb-input.svg\", name = \"CouchDbInput.Name\",\n  description = \"CouchDbInput.Description\",\n  documentationUrl = \"pdi-transformation-steps-reference-overview/couchdb-input\",\n  categoryDescription = 
\"i18n:org.pentaho.di.trans.step:BaseStep.Category.BigData\",\n  i18nPackageName = \"org.pentaho.di.trans.steps.couchdbinput\" )\n\n@InjectionSupported( localizationPrefix = \"CouchDbInput.Injection.\" )\npublic class CouchDbInputMeta extends BaseStepMeta implements StepMetaInterface {\n  public static final String DEFAULT_HOSTNAME = \"localhost\";\n  public static final String DEFAULT_PORT = \"5984\";\n  public static final String DEFAULT_DB_NAME = \"db\";\n  public static final String DEFAULT_VIEW_NAME = \"design-document/view-name\";\n  public static final String VALUE_META_NAME = \"json\";\n  private static Class<?> PKG = CouchDbInputMeta.class; // for i18n purposes, needed by Translator2!! $NON-NLS-1$\n\n  public CouchDbInputMeta() {\n    super(); // allocate BaseStepMeta\n  }\n\n  @Injection( name = \"HOSTNAME\" )\n  private String hostname;\n\n  @Injection( name = \"PORT\" )\n  private String port;\n\n  @Injection( name = \"DBNAME\" )\n  private String dbName;\n\n  @Injection( name = \"DESIGN_DOCUMENT\" )\n  private String designDocument;\n\n  @Injection( name = \"VIEW_NAME\" )\n  private String viewName;\n\n  @Injection( name = \"AUTHENTICATION_USER\" )\n  private String authenticationUser;\n\n  @Injection( name = \"AUTHENTICATION_PASSWORD\" )\n  private String authenticationPassword;\n\n  @Override\n  public void loadXML( Node stepnode, List<DatabaseMeta> databases, IMetaStore metaStore )\n    throws KettleXMLException {\n    try {\n      hostname = XMLHandler.getTagValue( stepnode, \"hostname\" ); //$NON-NLS-1$ //$NON-NLS-2$\n      port = XMLHandler.getTagValue( stepnode, \"port\" ); //$NON-NLS-1$ //$NON-NLS-2$\n      dbName = XMLHandler.getTagValue( stepnode, \"db_name\" ); //$NON-NLS-1$\n      designDocument = XMLHandler.getTagValue( stepnode, \"design_document\" ); //$NON-NLS-1$\n      viewName = XMLHandler.getTagValue( stepnode, \"view_name\" ); //$NON-NLS-1$\n      authenticationUser = XMLHandler.getTagValue( stepnode, \"auth_user\" ); 
//$NON-NLS-1$\n      authenticationPassword =\n        Encr.decryptPasswordOptionallyEncrypted( XMLHandler.getTagValue( stepnode, \"auth_password\" ) ); //$NON-NLS-1$\n    } catch ( Exception e ) {\n      throw new KettleXMLException( BaseMessages.getString( PKG, \"CouchDbInputMeta.Exception.UnableToLoadStepInfo\" ),\n        e ); //$NON-NLS-1$\n    }\n  }\n\n  @Override\n  public Object clone() {\n    return super.clone();\n  }\n\n  @Override\n  public void setDefault() {\n    hostname = DEFAULT_HOSTNAME; //$NON-NLS-1$\n    port = DEFAULT_PORT; //$NON-NLS-1$\n    dbName = DEFAULT_DB_NAME; //$NON-NLS-1$\n    viewName = DEFAULT_VIEW_NAME; //$NON-NLS-1$\n  }\n\n  @Override\n  public void getFields( Bowl bowl, RowMetaInterface rowMeta, String origin, RowMetaInterface[] info, StepMeta nextStep,\n                         VariableSpace space, Repository repository, IMetaStore metaStore ) throws KettleStepException {\n    ValueMetaInterface idValueMeta = new ValueMeta( VALUE_META_NAME, ValueMetaInterface.TYPE_STRING );\n    idValueMeta.setOrigin( origin );\n    rowMeta.addValueMeta( idValueMeta );\n  }\n\n  @Override\n  public String getXML() {\n    StringBuilder retval = new StringBuilder( 300 );\n\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"hostname\", hostname ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"port\", port ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"db_name\", dbName ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"    \" )\n      .append( XMLHandler.addTagValue( \"design_document\", designDocument ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"view_name\", viewName ) ); //$NON-NLS-1$ //$NON-NLS-2$\n    retval.append( \"    \" ).append( XMLHandler.addTagValue( \"auth_user\", authenticationUser ) );\n    retval.append( \"    \" ).append(\n      XMLHandler.addTagValue( 
\"auth_password\", Encr.encryptPasswordIfNotUsingVariables( authenticationPassword ) ) );\n\n    return retval.toString();\n  }\n\n  @Override\n  public void readRep( Repository rep, IMetaStore metaStore, ObjectId id_step, List<DatabaseMeta> databases )\n    throws KettleException {\n    try {\n      hostname = rep.getStepAttributeString( id_step, \"hostname\" ); //$NON-NLS-1$\n      port = rep.getStepAttributeString( id_step, \"port\" ); //$NON-NLS-1$\n      dbName = rep.getStepAttributeString( id_step, \"db_name\" ); //$NON-NLS-1$\n      designDocument = rep.getStepAttributeString( id_step, \"design_document\" ); //$NON-NLS-1$\n      viewName = rep.getStepAttributeString( id_step, \"view_name\" ); //$NON-NLS-1$\n      authenticationUser = rep.getStepAttributeString( id_step, \"auth_user\" );\n      authenticationPassword =\n        Encr.decryptPasswordOptionallyEncrypted( rep.getStepAttributeString( id_step, \"auth_password\" ) );\n    } catch ( Exception e ) {\n      throw new KettleException( BaseMessages.getString( PKG,\n        \"CouchDbInputMeta.Exception.UnexpectedErrorWhileReadingStepInfo\" ), e ); //$NON-NLS-1$\n    }\n  }\n\n  @Override\n  public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_transformation, ObjectId id_step )\n    throws KettleException {\n    try {\n      rep.saveStepAttribute( id_transformation, id_step, \"hostname\", hostname ); //$NON-NLS-1$\n      rep.saveStepAttribute( id_transformation, id_step, \"port\", port ); //$NON-NLS-1$\n      rep.saveStepAttribute( id_transformation, id_step, \"db_name\", dbName ); //$NON-NLS-1$\n      rep.saveStepAttribute( id_transformation, id_step, \"design_document\", designDocument ); //$NON-NLS-1$\n      rep.saveStepAttribute( id_transformation, id_step, \"view_name\", viewName ); //$NON-NLS-1$\n      rep.saveStepAttribute( id_transformation, id_step, \"auth_user\", authenticationUser );\n      rep.saveStepAttribute( id_transformation, id_step, \"auth_password\", Encr\n        
.encryptPasswordIfNotUsingVariables( authenticationPassword ) );\n    } catch ( Exception e ) {\n      throw new KettleException(\n        BaseMessages.getString( PKG, \"CouchDbInputMeta.Exception.UnableToSaveStepInfo\" ) + id_step, e ); //$NON-NLS-1$\n    }\n  }\n\n  @Override\n  public StepInterface getStep( StepMeta stepMeta, StepDataInterface stepDataInterface, int cnr, TransMeta tr,\n                                Trans trans ) {\n    return new CouchDbInput( stepMeta, stepDataInterface, cnr, tr, trans );\n  }\n\n  @Override\n  public StepDataInterface getStepData() {\n    return new CouchDbInputData();\n  }\n\n  /**\n   * @return the hostname\n   */\n  public String getHostname() {\n    return hostname;\n  }\n\n  /**\n   * @param hostname the hostname to set\n   */\n  public void setHostname( String hostname ) {\n    this.hostname = hostname;\n  }\n\n  /**\n   * @return the port\n   */\n  public String getPort() {\n    return port;\n  }\n\n  /**\n   * @param port the port to set\n   */\n  public void setPort( String port ) {\n    this.port = port;\n  }\n\n  /**\n   * @return the dbName\n   */\n  public String getDbName() {\n    return dbName;\n  }\n\n  /**\n   * @param dbName the dbName to set\n   */\n  public void setDbName( String dbName ) {\n    this.dbName = dbName;\n  }\n\n  /**\n   * @return the authenticationUser\n   */\n  public String getAuthenticationUser() {\n    return authenticationUser;\n  }\n\n  /**\n   * @param authenticationUser the authenticationUser to set\n   */\n  public void setAuthenticationUser( String authenticationUser ) {\n    this.authenticationUser = authenticationUser;\n  }\n\n  /**\n   * @return the authenticationPassword\n   */\n  public String getAuthenticationPassword() {\n    return authenticationPassword;\n  }\n\n  /**\n   * @param authenticationPassword the authenticationPassword to set\n   */\n  public void setAuthenticationPassword( String authenticationPassword ) {\n    this.authenticationPassword = 
authenticationPassword;\n  }\n\n  /**\n   * @return the viewName\n   */\n  public String getViewName() {\n    return viewName;\n  }\n\n  /**\n   * @param viewName the viewName to set\n   */\n  public void setViewName( String viewName ) {\n    this.viewName = viewName;\n  }\n\n  /**\n   * @return the designDocument\n   */\n  public String getDesignDocument() {\n    return designDocument;\n  }\n\n  /**\n   * @param designDocument the designDocument to set\n   */\n  public void setDesignDocument( String designDocument ) {\n    this.designDocument = designDocument;\n  }\n\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/ui/core/namedcluster/HadoopClusterDelegate.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.core.namedcluster;\n\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.metastore.api.IMetaStore;\n\n/**\n * Created by bryan on 8/17/15.\n */\npublic interface HadoopClusterDelegate {\n  String editNamedCluster( IMetaStore metaStore, NamedCluster namedCluster, Shell shell );\n\n  String newNamedCluster( VariableSpace variableSpace, IMetaStore metaStore, Shell shell );\n\n  void dupeNamedCluster( IMetaStore metaStore, NamedCluster nc, Shell shell );\n\n  void delNamedCluster( IMetaStore metaStore, NamedCluster namedCluster );\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/ui/core/namedcluster/NamedClusterDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.core.namedcluster;\n\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\n/**\n * Created by bryan on 8/17/15.\n */\npublic interface NamedClusterDialog {\n  void setNamedCluster( NamedCluster namedCluster );\n\n  NamedCluster getNamedCluster();\n\n  void setNewClusterCheck( boolean newClusterCheck );\n\n  String open();\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/ui/core/namedcluster/NamedClusterUIFactory.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.core.namedcluster;\n\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\n\n/**\n * Created by bryan on 8/17/15.\n */\npublic interface NamedClusterUIFactory {\n  NamedClusterWidget createNamedClusterWidget( Composite parent, boolean showLabel );\n\n  HadoopClusterDelegate createHadoopClusterDelegate( Spoon spoon );\n\n  NamedClusterDialog createNamedClusterDialog( Shell shell );\n\n  NamedCluster getNamedClusterFromVfsFileChooser( Spoon spoon );\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/ui/core/namedcluster/NamedClusterUIHelper.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.core.namedcluster;\n\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.metastore.api.exceptions.MetaStoreException;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\npublic class NamedClusterUIHelper {\n  private static final NamedClusterUIFactoryHolder NAMED_CLUSTER_UI_FACTORY_HOLDER = new NamedClusterUIFactoryHolder();\n\n  public static synchronized NamedClusterUIFactory getNamedClusterUIFactory() {\n    return NAMED_CLUSTER_UI_FACTORY_HOLDER.getNamedClusterUIFactory();\n  }\n\n  /**\n   * Being used to inject the widgets from OSGi (where all the test functionality is located) this should be removed\n   * once we OSGiify the rest of the big data stuff\n   *\n   * @param namedClusterUIFactory\n   */\n  public static void setNamedClusterUIFactory(\n    NamedClusterUIFactory namedClusterUIFactory ) {\n    NAMED_CLUSTER_UI_FACTORY_HOLDER.setNamedClusterUIFactory( namedClusterUIFactory );\n  }\n\n  /**\n   * Being used to inject the widgets from OSGi (where all the test functionality is located) this should be removed\n   * once we OSGiify the rest of the big data stuff\n   *\n   * WARNING: THIS WILL BLOCK UNTIL THE FACTORY IS AVAILABLE, DO NOT CALL FROM ANYTHING THAT COULD BLOCK STARTUP\n   */\n  public static List<NamedCluster> getNamedClusters() {\n    try {\n      NamedClusterService namedClusterService = 
BigDataServicesHelper.getNamedClusterService();\n      if ( namedClusterService != null ) {\n        return namedClusterService.list( Spoon.getInstance().getMetaStore() );\n      }\n      return new ArrayList<>();\n    } catch ( MetaStoreException e ) {\n      return new ArrayList<>();\n    }\n  }\n\n  public static NamedCluster getNamedCluster( String namedCluster ) throws MetaStoreException {\n    NamedClusterService namedClusterService = BigDataServicesHelper.getNamedClusterService();\n    if ( namedClusterService != null ) {\n      return namedClusterService.read( namedCluster, Spoon.getInstance().getMetaStore() );\n    }\n    return null;\n  }\n\n  static class NamedClusterUIFactoryHolder {\n    private NamedClusterUIFactory namedClusterUIFactory;\n\n    public synchronized NamedClusterUIFactory getNamedClusterUIFactory() {\n      while ( namedClusterUIFactory == null ) {\n        try {\n          wait();\n        } catch ( InterruptedException e ) {\n          Thread.currentThread().interrupt();\n        }\n      }\n      return namedClusterUIFactory;\n    }\n\n    public synchronized void setNamedClusterUIFactory( NamedClusterUIFactory namedClusterUIFactory ) {\n      this.namedClusterUIFactory = namedClusterUIFactory;\n      notifyAll();\n    }\n  }\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/ui/core/namedcluster/NamedClusterWidget.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.core.namedcluster;\n\nimport org.eclipse.swt.events.SelectionListener;\nimport org.eclipse.swt.widgets.Composite;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\n\n/**\n * Created by bryan on 8/17/15.\n */\npublic interface NamedClusterWidget {\n  void initiate();\n\n  Composite getComposite();\n\n  NamedCluster getSelectedNamedCluster();\n\n  void addSelectionListener( SelectionListener selectionListener );\n\n  void setSelectedNamedCluster( String name );\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/ui/job/entries/hadoopjobexecutor/UserDefinedItem.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.job.entries.hadoopjobexecutor;\n\nimport java.beans.PropertyChangeListener;\n\nimport org.pentaho.ui.xul.XulEventSource;\n\npublic class UserDefinedItem implements XulEventSource {\n  private String name;\n  private String value;\n\n  public UserDefinedItem() {\n  }\n\n  public String getName() {\n    return name;\n  }\n\n  public void setName( String name ) {\n    this.name = name;\n  }\n\n  public String getValue() {\n    return value;\n  }\n\n  public void setValue( String value ) {\n    this.value = value;\n  }\n\n  public void addPropertyChangeListener( PropertyChangeListener listener ) {\n  }\n\n  public void removePropertyChangeListener( PropertyChangeListener listener ) {\n  }\n\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/ui/repository/repositoryexplorer/controllers/NamedClustersController.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.repository.repositoryexplorer.controllers;\n\nimport java.lang.reflect.InvocationTargetException;\nimport java.lang.reflect.Method;\nimport java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.List;\n\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.big.data.api.services.BigDataServicesHelper;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.ui.core.dialog.ErrorDialog;\nimport org.pentaho.di.ui.core.namedcluster.NamedClusterDialog;\nimport org.pentaho.di.ui.core.namedcluster.NamedClusterUIHelper;\nimport org.pentaho.di.ui.repository.repositoryexplorer.ContextChangeVetoer;\nimport org.pentaho.di.ui.repository.repositoryexplorer.ContextChangeVetoer.TYPE;\nimport org.pentaho.di.ui.repository.repositoryexplorer.ContextChangeVetoerCollection;\nimport org.pentaho.di.ui.repository.repositoryexplorer.ControllerInitializationException;\nimport org.pentaho.di.ui.repository.repositoryexplorer.IUISupportController;\nimport org.pentaho.di.ui.repository.repositoryexplorer.RepositoryExplorer;\nimport org.pentaho.di.ui.repository.repositoryexplorer.model.UINamedCluster;\nimport org.pentaho.di.ui.repository.repositoryexplorer.model.UINamedClusters;\nimport org.pentaho.di.ui.repository.repositoryexplorer.model.UIObjectCreationException;\nimport org.pentaho.di.ui.repository.repositoryexplorer.model.UINamedClusterObjectRegistry;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport 
org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.binding.Binding;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.ui.xul.binding.DefaultBindingFactory;\nimport org.pentaho.ui.xul.components.XulButton;\nimport org.pentaho.ui.xul.containers.XulTree;\nimport org.pentaho.ui.xul.swt.tags.SwtDialog;\n\npublic class NamedClustersController extends LazilyInitializedController implements IUISupportController {\n\n  private static Class<?> PKG = RepositoryExplorer.class; // for i18n purposes, needed by Translator2!!\n\n  private XulTree namedClustersTable = null;\n\n  protected BindingFactory bf = null;\n\n  private boolean isRepReadOnly = true;\n\n  private Binding bindButtonNew = null;\n\n  private Binding bindButtonEdit = null;\n\n  private Binding bindButtonRemove = null;\n\n  private Shell shell = null;\n\n  private UINamedClusters namedClustersList = new UINamedClusters();\n\n  private NamedClusterDialog namedClusterDialog;\n\n  //The MainController is instantiated by a different classloader.  
We will have to use reflection to access it\n  private Object mainController;\n\n  protected ContextChangeVetoerCollection contextChangeVetoers;\n\n  protected List<UINamedCluster> selectedNamedClusters;\n  protected List<UINamedCluster> repositoryNamedClusters;\n\n  private Repository diRepository;\n\n  private NamedClusterService namedClusterService;\n\n  public NamedClustersController() {\n  }\n\n  @Override\n  public String getName() {\n    return \"namedClustersController\";\n  }\n\n  public void init( Repository repository ) throws ControllerInitializationException {\n    this.diRepository = repository;\n  }\n\n  private NamedClusterService getNamedClusterService() {\n    if ( namedClusterService == null ) {\n      namedClusterService = BigDataServicesHelper.getNamedClusterService();\n    }\n    return namedClusterService;\n  }\n\n  private NamedClusterDialog getNamedClusterDialog() {\n    if ( namedClusterDialog == null ) {\n      namedClusterDialog = NamedClusterUIHelper.getNamedClusterUIFactory().createNamedClusterDialog( shell );\n    }\n    return namedClusterDialog;\n  }\n\n  private void createBindings() {\n    refreshNamedClustersList();\n    namedClustersTable = (XulTree) document.getElementById( \"named-clusters-table\" );\n\n    // Bind the named clusters table to a list of named clusters\n    bf.setBindingType( Binding.Type.ONE_WAY );\n\n    //CHECKSTYLE:LineLength:OFF\n    try {\n      bf.createBinding( namedClustersList, \"children\", namedClustersTable, \"elements\" ).fireSourceChanged();\n      ( bindButtonRemove = bf.createBinding( this, \"repReadOnly\", \"named-clusters-remove\", \"disabled\" ) ).fireSourceChanged();\n\n      if ( diRepository != null ) {\n        bf.createBinding( namedClustersTable, \"selectedItems\", this, \"selectedNamedClusters\" );\n      }\n    } catch ( Exception ex ) {\n      if ( testForNoController( ex ) ) {\n        // convert to runtime exception so it bubbles up through the UI\n        throw new 
RuntimeException( ex );\n      }\n    }\n  }\n\n  @Override\n  protected boolean doLazyInit() {\n    try {\n      mainController = this.getXulDomContainer().getEventHandler( \"mainController\" );\n    } catch ( XulException e ) {\n      return false;\n    }\n\n    try {\n      setRepReadOnly( this.diRepository.getRepositoryMeta().getRepositoryCapabilities().isReadOnly() );\n\n      // Load the SWT Shell from the explorer dialog\n      shell = ( (SwtDialog) document.getElementById( \"repository-explorer-dialog\" ) ).getShell();\n      bf = new DefaultBindingFactory();\n      bf.setDocument( this.getXulDomContainer().getDocumentRoot() );\n\n      if ( bf != null ) {\n        createBindings();\n      }\n      enableButtons( false );\n\n      return true;\n    } catch ( Exception e ) {\n      if ( testForNoController( e ) ) {\n        return false;\n      }\n    }\n\n    return false;\n  }\n\n  public Repository getRepository() {\n    return diRepository;\n  }\n\n  public void setRepReadOnly( boolean isRepReadOnly ) {\n    try {\n      if ( this.isRepReadOnly != isRepReadOnly ) {\n        this.isRepReadOnly = isRepReadOnly;\n\n        if ( initialized ) {\n          // Only bindButtonRemove is assigned in createBindings(); guard the others against NPE\n          if ( bindButtonNew != null ) {\n            bindButtonNew.fireSourceChanged();\n          }\n          if ( bindButtonEdit != null ) {\n            bindButtonEdit.fireSourceChanged();\n          }\n          if ( bindButtonRemove != null ) {\n            bindButtonRemove.fireSourceChanged();\n          }\n        }\n      }\n    } catch ( Exception e ) {\n      if ( testForNoController( e ) ) {\n        // convert to runtime exception so it bubbles up through the UI\n        throw new RuntimeException( e );\n      }\n    }\n  }\n\n  public boolean isRepReadOnly() {\n    return isRepReadOnly;\n  }\n\n  private void refreshNamedClustersList() {\n    final List<UINamedCluster> tmpList = new ArrayList<UINamedCluster>();\n    Runnable r = new Runnable() {\n      public void run() {\n        try {\n          for ( NamedCluster namedCluster : getNamedClusterService().list( diRepository.getRepositoryMetaStore() ) ) {\n            try {\n              tmpList.add( 
UINamedClusterObjectRegistry.getInstance().constructUINamedCluster( namedCluster, diRepository ) );\n            } catch ( UIObjectCreationException uoe ) {\n              tmpList.add( new UINamedCluster( namedCluster, diRepository ) );\n            }\n          }\n        } catch ( Exception e ) {\n          if ( testForNoController( e ) ) {\n            // convert to runtime exception so it bubbles up through the UI\n            throw new RuntimeException( e );\n          }\n        }\n      }\n    };\n    doWithBusyIndicator( r );\n    namedClustersList.setChildren( tmpList );\n  }\n\n  public void createNamedCluster() {\n    try {\n      // user will have to select from list of templates\n      // for now hard code to hadoop-cluster\n      NamedCluster namedClusterTemplate = getNamedClusterService().getClusterTemplate();\n      namedClusterTemplate.initializeVariablesFrom( null );\n      getNamedClusterDialog().setNamedCluster( namedClusterTemplate );\n      getNamedClusterDialog().setNewClusterCheck( true );\n\n      String namedClusterName = getNamedClusterDialog().open();\n      if ( namedClusterName != null && !namedClusterName.equals( \"\" ) ) {\n        // See if this named cluster exists...\n        NamedCluster namedCluster = getNamedClusterService().read( namedClusterName, Spoon.getInstance().getMetaStore() );\n        if ( namedCluster == null ) {\n          getNamedClusterService().create( getNamedClusterDialog().getNamedCluster(), Spoon.getInstance().getMetaStore() );\n        } else {\n          MessageBox mb = new MessageBox( shell, SWT.ICON_ERROR | SWT.OK );\n          mb.setMessage( BaseMessages.getString(\n            PKG, \"RepositoryExplorerDialog.NamedCluster.Create.AlreadyExists.Message\" ) );\n          mb.setText( BaseMessages.getString(\n            PKG, \"RepositoryExplorerDialog.NamedCluster.Create.AlreadyExists.Title\" ) );\n          mb.open();\n        }\n      }\n    } catch ( Exception e ) {\n      if ( testForNoController( e ) ) {\n
       new ErrorDialog( shell,\n          BaseMessages.getString( PKG, \"RepositoryExplorerDialog.NamedCluster.Create.UnexpectedError.Title\" ),\n          BaseMessages.getString( PKG, \"RepositoryExplorerDialog.NamedCluster.Create.UnexpectedError.Message\" ), e );\n      }\n    } finally {\n      refreshNamedClustersList();\n    }\n  }\n\n  /**\n   * Fire all current {@link ContextChangeVetoer}. Everyone who has added themselves as a vetoer has a chance to vote\n   * on what should happen.\n   */\n  List<TYPE> pollContextChangeVetoResults() {\n    if ( contextChangeVetoers != null ) {\n      return contextChangeVetoers.fireContextChange();\n    } else {\n      List<TYPE> returnValue = new ArrayList<TYPE>();\n      returnValue.add( TYPE.NO_OP );\n      return returnValue;\n    }\n  }\n\n  public void addContextChangeVetoer( ContextChangeVetoer listener ) {\n    if ( contextChangeVetoers == null ) {\n      contextChangeVetoers = new ContextChangeVetoerCollection();\n    }\n    contextChangeVetoers.add( listener );\n  }\n\n  public void removeContextChangeVetoer( ContextChangeVetoer listener ) {\n    if ( contextChangeVetoers != null ) {\n      contextChangeVetoers.remove( listener );\n    }\n  }\n\n  private boolean contains( TYPE type, List<TYPE> typeList ) {\n    for ( TYPE t : typeList ) {\n      if ( t.equals( type ) ) {\n        return true;\n      }\n    }\n    return false;\n  }\n\n  boolean compareNamedClusters( List<UINamedCluster> ro1, List<UINamedCluster> ro2 ) {\n    if ( ro1 != null && ro2 != null ) {\n      if ( ro1.size() != ro2.size() ) {\n        return false;\n      }\n      for ( int i = 0; i < ro1.size(); i++ ) {\n        if ( ro1.get( i ) != null && ro2.get( i ) != null ) {\n          if ( !ro1.get( i ).getName().equals( ro2.get( i ).getName() ) ) {\n            return false;\n          }\n        }\n      }\n    } else {\n      return false;\n    }\n    return true;\n  }\n\n  public void editNamedCluster() {\n    try {\n      
Collection<UINamedCluster> namedClusters = namedClustersTable.getSelectedItems();\n\n      if ( namedClusters != null && !namedClusters.isEmpty() ) {\n        // Grab the first item in the list & send it to the named cluster dialog\n        NamedCluster original = ( (UINamedCluster) namedClusters.toArray()[0] ).getNamedCluster();\n        NamedCluster namedCluster = original.clone();\n\n        // Make sure this NamedCluster already exists and store its id for updating\n        if ( getNamedClusterService().read( namedCluster.getName(), Spoon.getInstance().getMetaStore() ) == null ) {\n          MessageBox mb = new MessageBox( shell, SWT.ICON_ERROR | SWT.OK );\n          mb.setMessage( BaseMessages.getString(\n            PKG, \"RepositoryExplorerDialog.NamedCluster.Edit.DoesNotExists.Message\" ) );\n          mb\n            .setText( BaseMessages.getString(\n              PKG, \"RepositoryExplorerDialog.NamedCluster.Edit.DoesNotExists.Title\" ) );\n          mb.open();\n        } else {\n          getNamedClusterDialog().setNamedCluster( namedCluster );\n          getNamedClusterDialog().setNewClusterCheck( false );\n          String namedClusterName = getNamedClusterDialog().open();\n          if ( namedClusterName != null && !namedClusterName.equals( \"\" ) ) {\n            // delete original\n            getNamedClusterService().delete( original.getName(), Spoon.getInstance().getMetaStore() );\n            getNamedClusterService().create( namedCluster, Spoon.getInstance().getMetaStore() );\n          }\n        }\n      } else {\n        MessageBox mb = new MessageBox( shell, SWT.ICON_ERROR | SWT.OK );\n        mb.setMessage( BaseMessages.getString(\n          PKG, \"RepositoryExplorerDialog.NamedCluster.Edit.NoItemSelected.Message\" ) );\n        mb\n          .setText( BaseMessages\n            .getString( PKG, \"RepositoryExplorerDialog.NamedCluster.Edit.NoItemSelected.Title\" ) );\n        mb.open();\n      }\n    } catch ( Exception e ) {\n      if ( 
testForNoController( e ) ) {\n        new ErrorDialog( shell,\n          BaseMessages.getString( PKG, \"RepositoryExplorerDialog.NamedCluster.Edit.UnexpectedError.Title\" ),\n          BaseMessages.getString( PKG, \"RepositoryExplorerDialog.NamedCluster.Edit.UnexpectedError.Message\" ), e );\n      }\n    } finally {\n      refreshNamedClustersList();\n    }\n  }\n\n  public void removeNamedCluster() {\n    try {\n      Collection<UINamedCluster> namedClusters = namedClustersTable.getSelectedItems();\n\n      if ( namedClusters != null && !namedClusters.isEmpty() ) {\n        for ( Object obj : namedClusters ) {\n          if ( obj != null && obj instanceof UINamedCluster ) {\n            UINamedCluster uiNamedCluster = (UINamedCluster) obj;\n\n            NamedCluster namedCluster = uiNamedCluster.getNamedCluster();\n\n            // Make sure this named cluster already exists and store its id for updating\n            if ( getNamedClusterService().read( namedCluster.getName(), diRepository.getRepositoryMetaStore() ) == null ) {\n              MessageBox mb = new MessageBox( shell, SWT.ICON_ERROR | SWT.OK );\n              mb\n                .setMessage( BaseMessages.getString(\n                  PKG, \"RepositoryExplorerDialog.NamedCluster.Delete.DoesNotExists.Message\", namedCluster.getName() ) );\n              mb.setText( BaseMessages.getString( PKG, \"RepositoryExplorerDialog.NamedCluster.Delete.DoesNotExists.Title\" ) );\n              mb.open();\n            } else {\n              getNamedClusterService().delete( namedCluster.getName(), diRepository.getRepositoryMetaStore() );\n            }\n          }\n        }\n      } else {\n        MessageBox mb = new MessageBox( shell, SWT.ICON_ERROR | SWT.OK );\n        mb.setMessage( BaseMessages.getString(\n          PKG, \"RepositoryExplorerDialog.NamedCluster.Delete.NoItemSelected.Message\" ) );\n        mb.setText( BaseMessages.getString( PKG, 
\"RepositoryExplorerDialog.NamedCluster.Delete.NoItemSelected.Title\" ) );\n        mb.open();\n      }\n    } catch ( Exception e ) {\n      if ( testForNoController( e ) ) {\n        new ErrorDialog( shell,\n          BaseMessages.getString( PKG, \"RepositoryExplorerDialog.NamedCluster.Delete.UnexpectedError.Title\" ),\n          BaseMessages.getString( PKG, \"RepositoryExplorerDialog.NamedCluster.Delete.UnexpectedError.Message\" ), e );\n      }\n    } finally {\n      refreshNamedClustersList();\n    }\n  }\n\n  public void setSelectedNamedClusters( List<UINamedCluster> namedClusters ) {\n    // SELECTION LOGIC\n    if ( !compareNamedClusters( namedClusters, this.selectedNamedClusters ) ) {\n      List<TYPE> pollResults = pollContextChangeVetoResults();\n      if ( !contains( TYPE.CANCEL, pollResults ) ) {\n        this.selectedNamedClusters = namedClusters;\n        setRepositoryNamedClusters( namedClusters );\n      } else {\n        namedClustersTable.setSelectedItems( this.selectedNamedClusters );\n        return;\n      }\n    }\n\n    // ENABLE BUTTONS LOGIC\n    boolean enableRemove = false;\n    if ( namedClusters != null && namedClusters.size() > 0 ) {\n      enableRemove = true;\n    }\n    // Convenience - Leave 'new' enabled, modify 'edit' and 'remove'\n    enableButtons( enableRemove );\n  }\n\n  public List<UINamedCluster> getRepositoryNamedClusters() {\n    return repositoryNamedClusters;\n  }\n\n  public void setRepositoryNamedClusters( List<UINamedCluster> repositoryNamedClusters ) {\n    this.repositoryNamedClusters = repositoryNamedClusters;\n    firePropertyChange( \"repositoryNamedClusters\", null, repositoryNamedClusters );\n  }\n\n  public void enableButtons( boolean enableRemove ) {\n    XulButton bRemove = (XulButton) document.getElementById( \"named-clusters-remove\" );\n    bRemove.setDisabled( !enableRemove );\n  }\n\n  public void tabClicked() {\n    lazyInit();\n  }\n\n  private boolean testForNoController( Throwable e ) {\n    if 
( mainController == null ) {\n      return true;\n    }\n    try {\n      Method method = mainController.getClass().getMethod( \"handleLostRepository\", Throwable.class );\n      method.invoke( mainController, e );\n    } catch ( NoSuchMethodException | IllegalAccessException | InvocationTargetException ex ) {\n      // Any of these exceptions means the mainController could not handle the lost repository\n      return true;\n    }\n    return false;\n  }\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/ui/repository/repositoryexplorer/model/UINamedCluster.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.repository.repositoryexplorer.model;\n\nimport java.text.SimpleDateFormat;\nimport java.util.Date;\n\nimport org.pentaho.di.core.hadoop.HadoopSpoonPlugin;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.ui.xul.XulEventSourceAdapter;\n\npublic class UINamedCluster extends XulEventSourceAdapter {\n\n  private static final Class<?> CLZ = HadoopSpoonPlugin.class;\n\n  protected NamedCluster namedCluster;\n  // inheriting classes may need access to the repository\n  protected Repository rep;\n\n  public UINamedCluster() {\n    super();\n  }\n\n  public UINamedCluster( NamedCluster namedCluster, Repository rep ) {\n    super();\n    this.namedCluster = namedCluster;\n    this.rep = rep;\n  }\n\n  public String getName() {\n    if ( namedCluster != null ) {\n      return namedCluster.getName();\n    }\n    return null;\n  }\n\n  public String getDisplayName() {\n    return getName();\n  }\n\n  public String getType() {\n    return BaseMessages.getString( CLZ, \"NamedClustersController.Type\" );\n  }\n\n  public String getDateModified() {\n    return SimpleDateFormat.getDateTimeInstance().format( new Date( namedCluster.getLastModifiedDate() ) );\n  }\n\n  public NamedCluster getNamedCluster() {\n    return namedCluster;\n  }\n\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/ui/repository/repositoryexplorer/model/UINamedClusterObjectRegistry.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.repository.repositoryexplorer.model;\n\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\n\nimport java.lang.reflect.Constructor;\n\npublic class UINamedClusterObjectRegistry {\n\n  public static final Class<?> DEFAULT_NAMED_CLUSTER_CLASS = UINamedCluster.class;\n  private static UINamedClusterObjectRegistry instance;\n\n  private Class<?> namedClusterClass = DEFAULT_NAMED_CLUSTER_CLASS;\n\n  private UINamedClusterObjectRegistry() {\n  }\n\n  public static UINamedClusterObjectRegistry getInstance() {\n    if ( instance == null ) {\n      instance = new UINamedClusterObjectRegistry();\n    }\n    return instance;\n  }\n\n  public UINamedCluster constructUINamedCluster( NamedCluster namedCluster, Repository rep ) throws UIObjectCreationException {\n    try {\n      Constructor<?> constructor = namedClusterClass.getConstructor( NamedCluster.class, Repository.class );\n      if ( constructor != null ) {\n        return (UINamedCluster) constructor.newInstance( namedCluster, rep );\n      } else {\n        throw new UIObjectCreationException( \"Unable to get the constructor for \" + namedClusterClass );\n      }\n    } catch ( Exception e ) {\n      throw new UIObjectCreationException( \"Unable to instantiate object for \" + namedClusterClass );\n    }\n  }\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/ui/repository/repositoryexplorer/model/UINamedClusters.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.repository.repositoryexplorer.model;\n\nimport java.util.List;\n\nimport org.pentaho.ui.xul.util.AbstractModelList;\n\npublic class UINamedClusters extends AbstractModelList<UINamedCluster> {\n\n  private static final long serialVersionUID = -5835985536358581894L;\n\n  public UINamedClusters() {\n  }\n\n  public UINamedClusters( List<UINamedCluster> namedClusters ) {\n    super( namedClusters );\n  }\n\n  @Override\n  protected void fireCollectionChanged() {\n    this.changeSupport.firePropertyChange( \"children\", null, this.getChildren() );\n  }\n\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/ui/trans/steps/couchdbinput/CouchDbInputDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.trans.steps.couchdbinput;\n\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.events.ModifyEvent;\nimport org.eclipse.swt.events.ModifyListener;\nimport org.eclipse.swt.events.SelectionAdapter;\nimport org.eclipse.swt.events.SelectionEvent;\nimport org.eclipse.swt.events.ShellAdapter;\nimport org.eclipse.swt.events.ShellEvent;\nimport org.eclipse.swt.layout.FormAttachment;\nimport org.eclipse.swt.layout.FormData;\nimport org.eclipse.swt.layout.FormLayout;\nimport org.eclipse.swt.widgets.Button;\nimport org.eclipse.swt.widgets.Control;\nimport org.eclipse.swt.widgets.Display;\nimport org.eclipse.swt.widgets.Event;\nimport org.eclipse.swt.widgets.Label;\nimport org.eclipse.swt.widgets.Listener;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.trans.Trans;\nimport org.pentaho.di.trans.TransMeta;\nimport org.pentaho.di.trans.TransPreviewFactory;\nimport org.pentaho.di.trans.step.BaseStepMeta;\nimport org.pentaho.di.trans.step.StepDialogInterface;\nimport org.pentaho.di.trans.steps.couchdbinput.CouchDbInputMeta;\nimport org.pentaho.di.ui.core.dialog.EnterNumberDialog;\nimport org.pentaho.di.ui.core.dialog.EnterTextDialog;\nimport org.pentaho.di.ui.core.dialog.PreviewRowsDialog;\nimport org.pentaho.di.ui.core.widget.TextVar;\nimport org.pentaho.di.ui.trans.dialog.TransPreviewProgressDialog;\nimport org.pentaho.di.ui.trans.step.BaseStepDialog;\n\npublic class CouchDbInputDialog extends BaseStepDialog 
implements StepDialogInterface {\n  private static Class<?> PKG = CouchDbInputMeta.class; // for Translator.sh\n\n  private TextVar wHostname;\n  private TextVar wPort;\n  private TextVar wDbName;\n  private TextVar wDesignDocument;\n  private TextVar wViewName;\n\n  private TextVar wAuthUser;\n  private TextVar wAuthPass;\n\n  private CouchDbInputMeta input;\n\n  public CouchDbInputDialog( Shell parent, Object in, TransMeta tr, String sname ) {\n    super( parent, (BaseStepMeta) in, tr, sname );\n    input = (CouchDbInputMeta) in;\n  }\n\n  public String open() {\n    Shell parent = getParent();\n    Display display = parent.getDisplay();\n\n    shell = new Shell( parent, SWT.DIALOG_TRIM | SWT.RESIZE | SWT.MAX | SWT.MIN );\n    props.setLook( shell );\n    setShellImage( shell, input );\n\n    ModifyListener lsMod = new ModifyListener() {\n      public void modifyText( ModifyEvent e ) {\n        input.setChanged();\n      }\n    };\n    changed = input.hasChanged();\n\n    FormLayout formLayout = new FormLayout();\n    formLayout.marginWidth = Const.FORM_MARGIN;\n    formLayout.marginHeight = Const.FORM_MARGIN;\n\n    shell.setLayout( formLayout );\n    shell.setText( BaseMessages.getString( PKG, \"CouchDbInputDialog.Shell.Title\" ) ); //$NON-NLS-1$\n\n    int middle = props.getMiddlePct();\n    int margin = Const.MARGIN;\n\n    // Stepname line\n    wlStepname = new Label( shell, SWT.RIGHT );\n    wlStepname.setText( BaseMessages.getString( PKG, \"CouchDbInputDialog.Stepname.Label\" ) ); //$NON-NLS-1$\n    props.setLook( wlStepname );\n    fdlStepname = new FormData();\n    fdlStepname.left = new FormAttachment( 0, 0 );\n    fdlStepname.right = new FormAttachment( middle, -margin );\n    fdlStepname.top = new FormAttachment( 0, margin );\n    wlStepname.setLayoutData( fdlStepname );\n    wStepname = new Text( shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    wStepname.setText( stepname );\n    props.setLook( wStepname );\n    wStepname.addModifyListener( lsMod 
);\n    fdStepname = new FormData();\n    fdStepname.left = new FormAttachment( middle, 0 );\n    fdStepname.top = new FormAttachment( 0, margin );\n    fdStepname.right = new FormAttachment( 100, 0 );\n    wStepname.setLayoutData( fdStepname );\n    Control lastControl = wStepname;\n\n    // Hostname input ...\n    //\n    Label wlHostname = new Label( shell, SWT.RIGHT );\n    wlHostname.setText( BaseMessages.getString( PKG, \"CouchDbInputDialog.Hostname.Label\" ) ); //$NON-NLS-1$\n    props.setLook( wlHostname );\n    FormData fdlHostname = new FormData();\n    fdlHostname.left = new FormAttachment( 0, 0 );\n    fdlHostname.right = new FormAttachment( middle, -margin );\n    fdlHostname.top = new FormAttachment( lastControl, margin );\n    wlHostname.setLayoutData( fdlHostname );\n    wHostname = new TextVar( transMeta, shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wHostname );\n    wHostname.addModifyListener( lsMod );\n    FormData fdHostname = new FormData();\n    fdHostname.left = new FormAttachment( middle, 0 );\n    fdHostname.top = new FormAttachment( lastControl, margin );\n    fdHostname.right = new FormAttachment( 100, 0 );\n    wHostname.setLayoutData( fdHostname );\n    lastControl = wHostname;\n\n    // Port input ...\n    //\n    Label wlPort = new Label( shell, SWT.RIGHT );\n    wlPort.setText( BaseMessages.getString( PKG, \"CouchDbInputDialog.Port.Label\" ) ); //$NON-NLS-1$\n    props.setLook( wlPort );\n    FormData fdlPort = new FormData();\n    fdlPort.left = new FormAttachment( 0, 0 );\n    fdlPort.right = new FormAttachment( middle, -margin );\n    fdlPort.top = new FormAttachment( lastControl, margin );\n    wlPort.setLayoutData( fdlPort );\n    wPort = new TextVar( transMeta, shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wPort );\n    wPort.addModifyListener( lsMod );\n    FormData fdPort = new FormData();\n    fdPort.left = new FormAttachment( middle, 0 );\n    fdPort.top = new FormAttachment( 
lastControl, margin );\n    fdPort.right = new FormAttachment( 100, 0 );\n    wPort.setLayoutData( fdPort );\n    lastControl = wPort;\n\n    // DbName input ...\n    //\n    Label wlDbName = new Label( shell, SWT.RIGHT );\n    wlDbName.setText( BaseMessages.getString( PKG, \"CouchDbInputDialog.DbName.Label\" ) ); //$NON-NLS-1$\n    props.setLook( wlDbName );\n    FormData fdlDbName = new FormData();\n    fdlDbName.left = new FormAttachment( 0, 0 );\n    fdlDbName.right = new FormAttachment( middle, -margin );\n    fdlDbName.top = new FormAttachment( lastControl, margin );\n    wlDbName.setLayoutData( fdlDbName );\n    wDbName = new TextVar( transMeta, shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wDbName );\n    wDbName.addModifyListener( lsMod );\n    FormData fdDbName = new FormData();\n    fdDbName.left = new FormAttachment( middle, 0 );\n    fdDbName.top = new FormAttachment( lastControl, margin );\n    fdDbName.right = new FormAttachment( 100, 0 );\n    wDbName.setLayoutData( fdDbName );\n    lastControl = wDbName;\n\n    // Design document ...\n    //\n    Label wlDesignDocument = new Label( shell, SWT.RIGHT );\n    wlDesignDocument.setText( BaseMessages.getString( PKG, \"CouchDbInputDialog.DesignDocument.Label\" ) ); //$NON-NLS-1$\n    props.setLook( wlDesignDocument );\n    FormData fdlDesignDocument = new FormData();\n    fdlDesignDocument.left = new FormAttachment( 0, 0 );\n    fdlDesignDocument.right = new FormAttachment( middle, -margin );\n    fdlDesignDocument.top = new FormAttachment( lastControl, margin );\n    wlDesignDocument.setLayoutData( fdlDesignDocument );\n    wDesignDocument = new TextVar( transMeta, shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wDesignDocument );\n    wDesignDocument.addModifyListener( lsMod );\n    FormData fdDesignDocument = new FormData();\n    fdDesignDocument.left = new FormAttachment( middle, 0 );\n    fdDesignDocument.top = new FormAttachment( lastControl, margin );\n    
fdDesignDocument.right = new FormAttachment( 100, 0 );\n    wDesignDocument.setLayoutData( fdDesignDocument );\n    lastControl = wDesignDocument;\n\n    // view input ...\n    //\n    Label wlViewName = new Label( shell, SWT.RIGHT );\n    wlViewName.setText( BaseMessages.getString( PKG, \"CouchDbInputDialog.ViewName.Label\" ) ); //$NON-NLS-1$\n    props.setLook( wlViewName );\n    FormData fdlViewName = new FormData();\n    fdlViewName.left = new FormAttachment( 0, 0 );\n    fdlViewName.right = new FormAttachment( middle, -margin );\n    fdlViewName.top = new FormAttachment( lastControl, margin );\n    wlViewName.setLayoutData( fdlViewName );\n    wViewName = new TextVar( transMeta, shell, SWT.SINGLE | SWT.LEFT | SWT.BORDER );\n    props.setLook( wViewName );\n    wViewName.addModifyListener( lsMod );\n    FormData fdViewName = new FormData();\n    fdViewName.left = new FormAttachment( middle, 0 );\n    fdViewName.top = new FormAttachment( lastControl, margin );\n    fdViewName.right = new FormAttachment( 100, 0 );\n    wViewName.setLayoutData( fdViewName );\n    lastControl = wViewName;\n\n    // Authentication...\n    //\n    // AuthUser line\n    Label wlAuthUser = new Label( shell, SWT.RIGHT );\n    wlAuthUser.setText( BaseMessages.getString( PKG, \"CouchDbInputDialog.AuthenticationUser.Label\" ) );\n    props.setLook( wlAuthUser );\n    FormData fdlAuthUser = new FormData();\n    fdlAuthUser.left = new FormAttachment( 0, -margin );\n    fdlAuthUser.top = new FormAttachment( lastControl, margin );\n    fdlAuthUser.right = new FormAttachment( middle, -margin );\n    wlAuthUser.setLayoutData( fdlAuthUser );\n\n    wAuthUser = new TextVar( transMeta, shell, SWT.BORDER | SWT.READ_ONLY );\n    wAuthUser.setEditable( true );\n    props.setLook( wAuthUser );\n    wAuthUser.addModifyListener( lsMod );\n    FormData fdAuthUser = new FormData();\n    fdAuthUser.left = new FormAttachment( middle, 0 );\n    fdAuthUser.top = new FormAttachment( lastControl, margin );\n    
fdAuthUser.right = new FormAttachment( 100, 0 );\n    wAuthUser.setLayoutData( fdAuthUser );\n    lastControl = wAuthUser;\n\n    // AuthPass line\n    Label wlAuthPass = new Label( shell, SWT.RIGHT );\n    wlAuthPass.setText( BaseMessages.getString( PKG, \"CouchDbInputDialog.AuthenticationPassword.Label\" ) );\n    props.setLook( wlAuthPass );\n    FormData fdlAuthPass = new FormData();\n    fdlAuthPass.left = new FormAttachment( 0, -margin );\n    fdlAuthPass.top = new FormAttachment( lastControl, margin );\n    fdlAuthPass.right = new FormAttachment( middle, -margin );\n    wlAuthPass.setLayoutData( fdlAuthPass );\n\n    wAuthPass = new TextVar( transMeta, shell, SWT.BORDER | SWT.READ_ONLY );\n    wAuthPass.setEditable( true );\n    wAuthPass.setEchoChar( '*' );\n    props.setLook( wAuthPass );\n    wAuthPass.addModifyListener( lsMod );\n    FormData fdAuthPass = new FormData();\n    fdAuthPass.left = new FormAttachment( middle, 0 );\n    fdAuthPass.top = new FormAttachment( wAuthUser, margin );\n    fdAuthPass.right = new FormAttachment( 100, 0 );\n    wAuthPass.setLayoutData( fdAuthPass );\n    lastControl = wAuthPass;\n\n    // Some buttons\n    wOK = new Button( shell, SWT.PUSH );\n    wOK.setText( BaseMessages.getString( PKG, \"System.Button.OK\" ) ); //$NON-NLS-1$\n    wPreview = new Button( shell, SWT.PUSH );\n    wPreview.setText( BaseMessages.getString( PKG, \"System.Button.Preview\" ) ); //$NON-NLS-1$\n    wCancel = new Button( shell, SWT.PUSH );\n    wCancel.setText( BaseMessages.getString( PKG, \"System.Button.Cancel\" ) ); //$NON-NLS-1$\n\n    setButtonPositions( new Button[] { wOK, wPreview, wCancel }, margin, lastControl );\n\n    // Add listeners\n    lsCancel = new Listener() {\n      public void handleEvent( Event e ) {\n        cancel();\n      }\n    };\n    lsPreview = new Listener() {\n      public void handleEvent( Event e ) {\n        preview();\n      }\n    };\n    lsOK = new Listener() {\n      public void handleEvent( Event e ) {\n    
    ok();\n      }\n    };\n\n    wCancel.addListener( SWT.Selection, lsCancel );\n    wPreview.addListener( SWT.Selection, lsPreview );\n    wOK.addListener( SWT.Selection, lsOK );\n\n    lsDef = new SelectionAdapter() {\n      public void widgetDefaultSelected( SelectionEvent e ) {\n        ok();\n      }\n    };\n\n    wStepname.addSelectionListener( lsDef );\n    wHostname.addSelectionListener( lsDef );\n    wDbName.addSelectionListener( lsDef );\n    wViewName.addSelectionListener( lsDef );\n    wAuthUser.addSelectionListener( lsDef );\n    wAuthPass.addSelectionListener( lsDef );\n\n    // Detect X or ALT-F4 or something that kills this window...\n    shell.addShellListener( new ShellAdapter() {\n      public void shellClosed( ShellEvent e ) {\n        cancel();\n      }\n    } );\n\n    getData();\n    input.setChanged( changed );\n\n    // Set the shell size, based upon previous time...\n    setSize();\n\n    shell.open();\n    while ( !shell.isDisposed() ) {\n      if ( !display.readAndDispatch() ) {\n        display.sleep();\n      }\n    }\n    return stepname;\n  }\n\n  /**\n   * Copy information from the meta-data input to the dialog fields.\n   */\n  public void getData() {\n    wHostname.setText( Const.NVL( input.getHostname(), \"\" ) ); //$NON-NLS-1$\n    wPort.setText( Const.NVL( input.getPort(), \"\" ) ); //$NON-NLS-1$\n    wDbName.setText( Const.NVL( input.getDbName(), \"\" ) ); //$NON-NLS-1$\n    wDesignDocument.setText( Const.NVL( input.getDesignDocument(), \"\" ) ); //$NON-NLS-1$\n    wViewName.setText( Const.NVL( input.getViewName(), \"\" ) ); //$NON-NLS-1$\n\n    wAuthUser.setText( Const.NVL( input.getAuthenticationUser(), \"\" ) ); // $NON-NLS-1$\n    wAuthPass.setText( Const.NVL( input.getAuthenticationPassword(), \"\" ) ); // $NON-NLS-1$\n\n    wStepname.selectAll();\n  }\n\n  private void cancel() {\n    stepname = null;\n    input.setChanged( changed );\n    dispose();\n  }\n\n  private void getInfo( CouchDbInputMeta meta ) {\n\n    
meta.setHostname( wHostname.getText() );\n    meta.setPort( wPort.getText() );\n    meta.setDbName( wDbName.getText() );\n    meta.setDesignDocument( wDesignDocument.getText() );\n    meta.setViewName( wViewName.getText() );\n\n    meta.setAuthenticationUser( wAuthUser.getText() );\n    meta.setAuthenticationPassword( wAuthPass.getText() );\n  }\n\n  private void ok() {\n    if ( Const.isEmpty( wStepname.getText() ) ) {\n      return;\n    }\n    stepname = wStepname.getText(); // return value\n\n    getInfo( input );\n\n    dispose();\n  }\n\n  // Preview the data\n  private void preview() {\n    // Create the XML input step\n    CouchDbInputMeta oneMeta = new CouchDbInputMeta();\n    getInfo( oneMeta );\n\n    TransMeta previewMeta =\n        TransPreviewFactory.generatePreviewTransformation( transMeta, oneMeta, wStepname.getText() );\n\n    EnterNumberDialog numberDialog =\n        new EnterNumberDialog( shell, props.getDefaultPreviewSize(), BaseMessages.getString( PKG,\n            \"CouchDbInputDialog.PreviewSize.DialogTitle\" ), BaseMessages.getString( PKG,\n              \"CouchDbInputDialog.PreviewSize.DialogMessage\" ) );\n    int previewSize = numberDialog.open();\n    if ( previewSize > 0 ) {\n      TransPreviewProgressDialog progressDialog =\n          new TransPreviewProgressDialog( shell, previewMeta, new String[] { wStepname.getText() },\n              new int[] { previewSize } );\n      progressDialog.open();\n\n      Trans trans = progressDialog.getTrans();\n      String loggingText = progressDialog.getLoggingText();\n\n      if ( !progressDialog.isCancelled() ) {\n        if ( trans.getResult() != null && trans.getResult().getNrErrors() > 0 ) {\n          EnterTextDialog etd =\n              new EnterTextDialog( shell, BaseMessages.getString( PKG, \"System.Dialog.PreviewError.Title\" ),\n                  BaseMessages.getString( PKG, \"System.Dialog.PreviewError.Message\" ), loggingText, true );\n          etd.setReadOnly();\n          
etd.open();\n        }\n      }\n\n      PreviewRowsDialog prd =\n          new PreviewRowsDialog( shell, transMeta, SWT.NONE, wStepname.getText(), progressDialog\n              .getPreviewRowsMeta( wStepname.getText() ), progressDialog.getPreviewRows( wStepname.getText() ),\n              loggingText );\n      prd.open();\n    }\n  }\n\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/di/ui/vfs/VfsFileChooserHelper.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.vfs;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.vfs.ui.CustomVfsUiPanel;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport java.lang.reflect.InvocationTargetException;\nimport java.lang.reflect.Method;\n\n/**\n * User: RFellows Date: 6/8/12\n */\npublic class VfsFileChooserHelper {\n  private static final Logger logger = LogManager.getLogger( VfsFileChooserHelper.class );\n  private VfsFileChooserDialog fileChooserDialog = null;\n  private Shell shell = null;\n  private VariableSpace variableSpace = null;\n  private FileSystemOptions fileSystemOptions = null;\n  private String defaultScheme = \"file\";\n  private String[] schemeRestrictions = null;\n  private boolean showFileScheme = true;\n\n  public VfsFileChooserHelper( Shell shell, VfsFileChooserDialog fileChooserDialog, VariableSpace variableSpace ) {\n    this( shell, fileChooserDialog, variableSpace, new FileSystemOptions() );\n  }\n\n  public VfsFileChooserHelper( Shell shell, 
VfsFileChooserDialog fileChooserDialog, VariableSpace variableSpace,\n      FileSystemOptions fileSystemOptions ) {\n    this.fileChooserDialog = fileChooserDialog;\n    this.shell = shell;\n    this.variableSpace = variableSpace;\n    this.fileSystemOptions = fileSystemOptions;\n    this.schemeRestrictions = new String[0];\n  }\n\n  public FileObject browse( Bowl bowl, String[] fileFilters, String[] fileFilterNames, String fileUri )\n    throws KettleException, FileSystemException {\n    return browse( bowl, fileFilters, fileFilterNames, fileUri, VfsFileChooserDialog.VFS_DIALOG_OPEN_DIRECTORY );\n  }\n\n  public FileObject browse( Bowl bowl, String[] fileFilters, String[] fileFilterNames, String fileUri,\n    int fileDialogMode ) throws KettleException, FileSystemException {\n    return browse( bowl, fileFilters, fileFilterNames, fileUri, fileSystemOptions, fileDialogMode );\n  }\n\n  public FileObject browse( Bowl bowl, String[] fileFilters, String[] fileFilterNames, String fileUri,\n      int fileDialogMode, boolean showLocation ) throws KettleException, FileSystemException {\n    return browse( bowl, fileFilters, fileFilterNames, fileUri, fileSystemOptions, fileDialogMode, showLocation, true );\n  }\n\n  public FileObject browse( Bowl bowl, String[] fileFilters, String[] fileFilterNames, String fileUri,\n      int fileDialogMode, boolean showLocation, boolean showCustomUI ) throws KettleException, FileSystemException {\n    return browse( bowl, fileFilters, fileFilterNames, fileUri, fileSystemOptions, fileDialogMode, showLocation, showCustomUI );\n  }\n\n  public FileObject browse( Bowl bowl, String[] fileFilters, String[] fileFilterNames, String fileUri,\n    FileSystemOptions opts ) throws KettleException, FileSystemException {\n    return browse( bowl, fileFilters, fileFilterNames, fileUri, opts, VfsFileChooserDialog.VFS_DIALOG_OPEN_DIRECTORY );\n  }\n\n  public FileObject browse( Bowl bowl, String[] fileFilters, String[] fileFilterNames, String fileUri,\n   
   FileSystemOptions opts, int fileDialogMode ) throws KettleException, FileSystemException {\n    return browse( bowl, fileFilters, fileFilterNames, fileUri, opts, fileDialogMode, true, true );\n  }\n\n  public FileObject browse( Bowl bowl, String[] fileFilters, String[] fileFilterNames, String fileUri,\n      FileSystemOptions opts, int fileDialogMode, boolean showLocation, boolean showCustomUI )\n    throws KettleException, FileSystemException {\n    // Get current file\n    FileObject rootFile = null;\n    FileObject initialFile = null;\n\n    if ( fileUri != null ) {\n      initialFile = KettleVFS.getInstance( bowl ).getFileObject( fileUri, variableSpace, opts );\n    } else {\n      initialFile = KettleVFS.getInstance( bowl ).getFileObject( Spoon.getInstance().getLastFileOpened() );\n    }\n    rootFile = initialFile.getFileSystem().getRoot();\n    fileChooserDialog.setRootFile( rootFile );\n    fileChooserDialog.setInitialFile( initialFile );\n    fileChooserDialog.defaultInitialFile = rootFile;\n\n    FileObject selectedFile = null;\n    selectedFile = fileChooserDialog.open(\n        shell, this.schemeRestrictions, getDefaultScheme(), showFileScheme(), initialFile.getName().getPath(),\n        fileFilters, fileFilterNames, returnsUserAuthenticatedFileObjects(), fileDialogMode, showLocation, showCustomUI );\n\n    return selectedFile;\n  }\n\n  public VariableSpace getVariableSpace() {\n    return variableSpace;\n  }\n\n  public void setVariableSpace( VariableSpace variableSpace ) {\n    this.variableSpace = variableSpace;\n  }\n\n  public FileSystemOptions getFileSystemOptions() {\n    return fileSystemOptions;\n  }\n\n  public void setFileSystemOptions( FileSystemOptions fileSystemOptions ) {\n    this.fileSystemOptions = fileSystemOptions;\n  }\n\n  public String getDefaultScheme() {\n    return defaultScheme;\n  }\n\n  public void setDefaultScheme( String defaultScheme ) {\n    this.defaultScheme = defaultScheme;\n  }\n\n  public String 
getSchemeRestriction() {\n    String schemaRestriction = null;\n    if ( this.schemeRestrictions != null && this.schemeRestrictions.length > 0 ) {\n      schemaRestriction = this.schemeRestrictions[0];\n    }\n    return schemaRestriction;\n  }\n\n  public void setSchemeRestriction( String schemeRestriction ) {\n    this.schemeRestrictions = new String[1];\n    this.schemeRestrictions[0] = schemeRestriction;\n  }\n\n  public void setSchemeRestrictions( String[] schemeRestrictions ) {\n    this.schemeRestrictions = schemeRestrictions;\n  }\n\n  public boolean showFileScheme() {\n    return this.showFileScheme;\n  }\n\n  public void setShowFileScheme( boolean showFileScheme ) {\n    this.showFileScheme = showFileScheme;\n  }\n\n  protected boolean returnsUserAuthenticatedFileObjects() {\n    return false;\n  }\n\n  public void setNamedCluster( NamedCluster namedCluster ) {\n    VfsFileChooserDialog dialog = Spoon.getInstance().getVfsFileChooserDialog( null, null );\n    for ( CustomVfsUiPanel currentPanel : dialog.getCustomVfsUiPanels() ) {\n      if ( currentPanel != null ) {\n        try {\n          Method setNamedCluster = currentPanel.getClass().getMethod( \"setNamedCluster\", new Class[] { String.class } );\n          setNamedCluster.invoke( currentPanel, namedCluster.getName() );\n        } catch ( NoSuchMethodException e ) {\n          if ( logger.isDebugEnabled() ) {\n            logger.debug( \"Couldn't set named cluster \" + namedCluster.getName() + \" on \" + currentPanel + \" because it doesn't have setNamedCluster method.\", e );\n          }\n        } catch ( InvocationTargetException e ) {\n          if ( logger.isDebugEnabled() ) {\n            logger.debug( \"Couldn't set named cluster \" + namedCluster.getName() + \" on \" + currentPanel + \" because of exception.\", e.getCause() );\n          }\n        } catch ( IllegalAccessException e ) {\n          if ( logger.isDebugEnabled() ) {\n            logger.debug( \"Couldn't set named cluster \" + 
namedCluster.getName() + \" on \" + currentPanel + \" because setNamedCluster method isn't accessible.\", e );\n          }\n        }\n      }\n    }\n  }\n\n  @VisibleForTesting\n    VfsFileChooserDialog getFileChooserDialog() {\n    return fileChooserDialog;\n  }\n\n  @VisibleForTesting\n    Shell getShell() {\n    return shell;\n  }\n\n  @VisibleForTesting\n    String[] getSchemeRestrictions() {\n    return schemeRestrictions;\n  }\n\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/hadoop/PluginPropertiesUtil.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.hadoop;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.plugins.PluginInterface;\nimport org.pentaho.di.core.vfs.KettleVFS;\n\nimport java.io.FileNotFoundException;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.util.Properties;\n\n/**\n * Utility for working with plugin.properties\n */\npublic class PluginPropertiesUtil {\n  public static final String PLUGIN_PROPERTIES_FILE = \"plugin.properties\";\n  public static final String VERSION_PROPERTIES_FILE = \"META-INF/version.properties\";\n  private static final String VERSION_REPLACE_STR = \"@VERSION@\";\n\n  private final String VERSION_PLACEHOLDER;\n\n  public PluginPropertiesUtil() {\n    this( VERSION_PROPERTIES_FILE );\n  }\n\n  public PluginPropertiesUtil( String versionPropertiesFile ) {\n    VERSION_PLACEHOLDER = getVersionPlaceholder( versionPropertiesFile );\n  }\n\n  private static String getVersionPlaceholder( String versionPropertiesFile ) {\n    try ( InputStream propertiesStream = PluginPropertiesUtil.class.getClassLoader().getResourceAsStream(\n        versionPropertiesFile ) ) {\n      Properties properties = new Properties();\n      properties.load( propertiesStream );\n      return properties.getProperty( \"version\", VERSION_REPLACE_STR );\n    } catch ( Exception e ) {\n      return VERSION_REPLACE_STR;\n    }\n  }\n\n  /**\n   * Loads a properties file from the plugin 
directory for the plugin interface provided\n   *\n   * @param plugin plugin whose directory contains the properties file\n   * @param relativeName name of the properties file, relative to the plugin directory\n   * @return the loaded properties\n   * @throws KettleFileException\n   * @throws IOException\n   */\n  protected Properties loadProperties( PluginInterface plugin, String relativeName )\n    throws KettleFileException,\n    IOException {\n    if ( plugin == null ) {\n      throw new NullPointerException();\n    }\n    FileObject propFile = KettleVFS.getInstance( DefaultBowl.getInstance() )\n      .getFileObject( plugin.getPluginDirectory().getPath() + Const.FILE_SEPARATOR + relativeName );\n    if ( !propFile.exists() ) {\n      throw new FileNotFoundException( propFile.toString() );\n    }\n    try {\n      return new PropertiesConfigurationProperties( propFile );\n    } catch ( Exception e ) {\n      // Do not catch ConfigurationException. Different shims will use different\n      // packages for this exception.\n      throw new IOException( e );\n    }\n  }\n\n  /**\n   * Loads the plugin properties file for the plugin interface provided\n   *\n   * @param plugin plugin to load the plugin.properties file for\n   * @return Properties file for plugin\n   * @throws KettleFileException\n   * @throws IOException\n   */\n  public Properties loadPluginProperties( PluginInterface plugin ) throws KettleFileException, IOException {\n    return loadProperties( plugin, PLUGIN_PROPERTIES_FILE );\n  }\n\n  /**\n   * @return the version of this plugin\n   */\n  public String getVersion() {\n    // This value is replaced during the ant build process (task: compile.pre)\n    return VERSION_PLACEHOLDER;\n  }\n}\n"
  },
  {
    "path": "legacy/src/main/java/org/pentaho/hadoop/PropertiesConfigurationProperties.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.hadoop;\n\nimport org.apache.commons.configuration.ConfigurationException;\nimport org.apache.commons.configuration.PropertiesConfiguration;\nimport org.apache.commons.configuration.reloading.FileChangedReloadingStrategy;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.io.PrintStream;\nimport java.io.PrintWriter;\nimport java.io.Reader;\nimport java.io.Writer;\nimport java.util.Collection;\nimport java.util.Collections;\nimport java.util.Enumeration;\nimport java.util.HashMap;\nimport java.util.HashSet;\nimport java.util.InvalidPropertiesFormatException;\nimport java.util.Iterator;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.Vector;\n\n/**\n * Created by bryan on 8/6/15.\n */\npublic class PropertiesConfigurationProperties extends Properties {\n  private final PropertiesConfiguration propertiesConfiguration;\n\n  public PropertiesConfigurationProperties( FileObject fileObject ) throws ConfigurationException, FileSystemException {\n    this( initPropertiesConfiguration( fileObject ) );\n  }\n\n  public PropertiesConfigurationProperties( PropertiesConfiguration propertiesConfiguration ) {\n    this.propertiesConfiguration = propertiesConfiguration;\n  }\n\n  private static PropertiesConfiguration initPropertiesConfiguration( FileObject fileObject )\n    throws FileSystemException, ConfigurationException {\n    PropertiesConfiguration 
propertiesConfiguration = new PropertiesConfiguration( fileObject.getURL() );\n    propertiesConfiguration.setAutoSave( true );\n    FileChangedReloadingStrategy fileChangedReloadingStrategy = new FileChangedReloadingStrategy();\n    fileChangedReloadingStrategy.setRefreshDelay( 1000L );\n    propertiesConfiguration.setReloadingStrategy( fileChangedReloadingStrategy );\n    return propertiesConfiguration;\n  }\n\n  @Override public synchronized String getProperty( String key ) {\n    return getProperty( key, null );\n  }\n\n  @Override public synchronized String getProperty( String key, String defaultValue ) {\n    return propertiesConfiguration.getString( key, defaultValue );\n  }\n\n  @Override public synchronized Object get( Object key ) {\n    if ( key == null || key instanceof String ) {\n      return propertiesConfiguration.getProperty( (String) key );\n    } else {\n      return null;\n    }\n  }\n\n  @Override public synchronized Object setProperty( String key, String value ) {\n    return put( key, value );\n  }\n\n  @Override public synchronized Object put( Object key, Object value ) {\n    if ( key instanceof String ) {\n      Object previous = get( key );\n      propertiesConfiguration.setProperty( (String) key, value );\n      return previous;\n    }\n    throw new IllegalArgumentException( \"Can only store properties with String keys\" );\n  }\n\n  private Set<String> getPropertyNames() {\n    Set<String> result = new HashSet<>();\n    Iterator<String> keys = propertiesConfiguration.getKeys();\n    while ( keys.hasNext() ) {\n      result.add( keys.next() );\n    }\n    return result;\n  }\n\n  @Override public synchronized Set<String> stringPropertyNames() {\n    return Collections.unmodifiableSet( getPropertyNames() );\n  }\n\n  @Override public synchronized Set<Object> keySet() {\n    return Collections.<Object>unmodifiableSet( getPropertyNames() );\n  }\n\n  private Map<String, Object> toMap() {\n    Map<String, Object> result = new HashMap<>();\n 
   Iterator<String> keys = propertiesConfiguration.getKeys();\n    while ( keys.hasNext() ) {\n      String next = keys.next();\n      result.put( next, propertiesConfiguration.getProperty( next ) );\n    }\n    return result;\n  }\n\n  @Override public synchronized Set<Map.Entry<Object, Object>> entrySet() {\n    Map map = toMap();\n    return Collections.<Map.Entry<Object, Object>>unmodifiableSet( map.entrySet() );\n  }\n\n  @Override public synchronized int size() {\n    return getPropertyNames().size();\n  }\n\n  @Override public synchronized boolean isEmpty() {\n    return size() == 0;\n  }\n\n  @Override public synchronized Enumeration<Object> keys() {\n    return new Vector( getPropertyNames() ).elements();\n  }\n\n  @Override public synchronized Enumeration<Object> elements() {\n    return new Vector( toMap().values() ).elements();\n  }\n\n  @Override public synchronized boolean contains( Object value ) {\n    return containsValue( value );\n  }\n\n  @Override public synchronized boolean containsValue( Object value ) {\n    return values().contains( value );\n  }\n\n  @Override public synchronized Enumeration<?> propertyNames() {\n    return new Vector( getPropertyNames() ).elements();\n  }\n\n  @Override public synchronized Collection<Object> values() {\n    return toMap().values();\n  }\n\n  @Override public synchronized boolean containsKey( Object key ) {\n    if ( key == null || key instanceof String ) {\n      return propertiesConfiguration.containsKey( (String) key );\n    }\n    return false;\n  }\n\n  @Override public synchronized Object remove( Object key ) {\n    if ( key == null || key instanceof String ) {\n      Object result = propertiesConfiguration.getProperty( (String) key );\n      propertiesConfiguration.clearProperty( (String) key );\n      return result;\n    }\n    return null;\n  }\n\n  @Override public synchronized void putAll( Map<?, ?> t ) {\n    for ( Map.Entry<?, ?> entry : t.entrySet() ) {\n      put( entry.getKey(), 
entry.getValue() );\n    }\n  }\n\n  @Override public synchronized void clear() {\n    propertiesConfiguration.clear();\n  }\n\n  @Override public synchronized void load( Reader reader ) throws IOException {\n    throw new UnsupportedOperationException();\n  }\n\n  @Override public synchronized void load( InputStream inStream ) throws IOException {\n    throw new UnsupportedOperationException();\n  }\n\n  @Override public void save( OutputStream out, String comments ) {\n    throw new UnsupportedOperationException();\n  }\n\n  @Override public void store( Writer writer, String comments ) throws IOException {\n    throw new UnsupportedOperationException();\n  }\n\n  @Override public void store( OutputStream out, String comments ) throws IOException {\n    throw new UnsupportedOperationException();\n  }\n\n  @Override public synchronized void loadFromXML( InputStream in )\n    throws IOException, InvalidPropertiesFormatException {\n    throw new UnsupportedOperationException();\n  }\n\n  @Override public void storeToXML( OutputStream os, String comment ) throws IOException {\n    throw new UnsupportedOperationException();\n  }\n\n  @Override public void storeToXML( OutputStream os, String comment, String encoding ) throws IOException {\n    throw new UnsupportedOperationException();\n  }\n\n  @Override public void list( PrintStream out ) {\n    throw new UnsupportedOperationException();\n  }\n\n  @Override public void list( PrintWriter out ) {\n    throw new UnsupportedOperationException();\n  }\n\n  @Override protected void rehash() {\n    throw new UnsupportedOperationException();\n  }\n\n  @Override public synchronized Object clone() {\n    throw new UnsupportedOperationException();\n  }\n}\n"
  },
  {
    "path": "legacy/src/main/resources/META-INF/version.properties",
    "content": "version=${project.version}"
  },
  {
    "path": "legacy/src/main/resources/org/pentaho/di/core/hadoop/explorer-layout-overlay.xul",
    "content": "<?xml version=\"1.0\"?>\n<overlay\n        xmlns=\"http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul\" xmlns:pen=\"http://www.pentaho.org/2008/xul\">\n\n    <tabpanels id=\"repository-panels-set\">\n\n        <tabpanel id=\"named-clusters\" insertbefore=\"slaves\">\n            <hbox flex=\"1\">\n                <vbox flex=\"1\">\n                    <splitter flex=\"1\" orient=\"VERTICAL\">\n                        <vbox flex=\"4\">\n                            <!-- clusters list tool pane -->\n                            <hbox height=\"20px\">\n                                <label value=\"${RepositoryExplorerDialog.NamedClustersTab.Label}\"/>\n                                <label id=\"spacer-label\" flex=\"1\" />\n                                <button id=\"named-clusters-remove\" image=\"${Delete_image}\" pen:disabledimage=\"${Disabled_Delete_image}\" onclick=\"namedClustersController.removeNamedCluster()\" />\n                            </hbox>\n                            <!-- named clusters list -->\n                            <tree id=\"named-clusters-table\" flex=\"1\" hidecolumnpicker=\"true\" treeLines=\"false\" sortable=\"true\">\n                                <treecols>\n                                    <treecol id=\"name-col\" flex=\"1\" label=\"${RepositoryExplorerDialog.NamedClustersTab.TableColumn.Name}\" pen:binding=\"displayName\" pen:childrenbinding=\"children\" sortActive=\"true\" sortDirection=\"ASCENDING\"/>\n                                    <treecol id=\"type-col\" flex=\"1\" label=\"${RepositoryExplorerDialog.NamedClustersTab.TableColumn.Type}\" pen:binding=\"type\"/>\n                                    <treecol id=\"date-mod-col\" flex=\"1\" label=\"${RepositoryExplorerDialog.NamedClustersTab.TableColumn.DateModified}\" pen:binding=\"dateModified\"/>\n                                </treecols>\n                                <treechildren/>\n                            </tree>\n                  
      </vbox>\n                        <vbox flex=\"3\" id=\"named-clusters-addl-info-box\" visible=\"false\"/>\n                    </splitter>\n                </vbox>\n            </hbox>\n        </tabpanel>\n\n    </tabpanels>\n\n    <tabs id=\"repository-tab-set\">\n        <tab id=\"named-clusters-tab\" label=\"${RepositoryExplorerDialog.Tabs.NamedClusters}\" insertbefore=\"slaves-tab\"  onclick=\"namedClustersController.tabClicked()\" />\n    </tabs>\n\n</overlay>\n"
  },
  {
    "path": "legacy/src/main/resources/org/pentaho/di/core/hadoop/messages/messages_en_US.properties",
    "content": "HadoopConfigurationBootstrap.CannotLocatePlugin=Error locating plugin. Please make sure the Plugin Registry has been initialized.\nHadoopConfigurationBootstrap.NotInitialized=Not initialized. Please make sure the KettleEnvironment has been initialized.\nHadoopConfigurationBootstrap.PluginDirectoryNotFound=Hadoop configuration directory could not be located. Hadoop functionality will not work.\nHadoopConfigurationBootstrap.HadoopConfiguration.NoShimSet=Active configuration not set.  Aborting initialization.\nHadoopConfigurationBootstrap.HadoopConfiguration.StartupError=Error initializing Hadoop configurations. Aborting initialization.\nHadoopConfigurationBootstrap.HadoopConfiguration.InitializingShimPmr=Initializing shim for PMR\nHadoopConfigurationBootstrap.HadoopConfiguration.InitializedShimPmr=Initialized shim successfully\nHadoopConfigurationBootstrap.HadoopConfiguration.Loaded=Loaded {0} Hadoop configurations from {1}.\nHadoopConfigurationBootstrap.HadoopConfiguration.Using=Using Hadoop configuration: {0}.\nHadoopConfigurationBootstrap.LoggingPrefix=Hadoop Configuration\nHadoopConfigurationBootstrap.UnableToDetermineActiveConfiguration=Unable to determine active Hadoop configuration.\nHadoopConfigurationBootstrap.HadoopConfiguration.InvalidActiveConfiguration=Invalid active Hadoop configuration: \"{0}\".\nHadoopConfigurationBootstrap.UnableToLoadPluginProperties=Unable to load plugin properties.\nHadoopConfigurationBootstrap.MissingActiveConfigurationProperty=Active configuration property is not set in plugin.properties: \"{0}\".\nHadoopConfigurationBootstrap.WaitForShimLoad=Waiting {0} notifications before loading shim for maximum of {1} seconds.\n\nPopupMenuFactory.NAMEDCLUSTERS.New=New Cluster\nPopupMenuFactory.NAMEDCLUSTERS.Edit=Edit\nPopupMenuFactory.NAMEDCLUSTERS.Duplicate=Duplicate\nPopupMenuFactory.NAMEDCLUSTERS.Delete=Delete\nPopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.Message=Are you sure you want to delete your Hadoop Cluster 
{0}? This cannot be undone!\nPopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.Title=Delete Hadoop Cluster\nPopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.Delete=Yes, Delete\nPopupMenuFactory.NAMEDCLUSTERS.DeleteNamedClusterAsk.DoNotDelete=No\n\nRepositoryExplorerDialog.NamedClustersTab.Label=Available\\:\nRepositoryExplorerDialog.Tabs.NamedClusters=Hadoop Clusters\nRepositoryExplorerDialog.NamedClustersTab.TableColumn.Name=Name\nRepositoryExplorerDialog.NamedClustersTab.TableColumn.Type=Type\nRepositoryExplorerDialog.NamedClustersTab.TableColumn.DateModified=Date Modified\n\nNamedClustersController.Type=Hadoop Cluster\nNamedClustersController.Message.CreatingNamedCluster=Creating Hadoop Cluster\\: {0}\nNamedClustersController.Message.UpdatingNamedCluster=Updating Hadoop Cluster\\: {0}\nRepositoryExplorerDialog.NamedCluster.Create.AlreadyExists.Title=Error\nRepositoryExplorerDialog.NamedCluster.Create.AlreadyExists.Message=Sorry, this Hadoop cluster already exists.\nRepositoryExplorerDialog.NamedCluster.Create.UnexpectedError.Title=Error\nRepositoryExplorerDialog.NamedCluster.Create.UnexpectedError.Message=Error while creating new Hadoop cluster\nRepositoryExplorerDialog.NamedCluster.Delete.DoesNotExists.Title=Error\nRepositoryExplorerDialog.NamedCluster.Delete.DoesNotExists.Message=Hadoop cluster does not exist\nRepositoryExplorerDialog.NamedCluster.Delete.NoItemSelected.Title=Error\nRepositoryExplorerDialog.NamedCluster.Delete.NoItemSelected.Message=No Hadoop cluster selected\nRepositoryExplorerDialog.NamedCluster.Delete.UnexpectedError.Title=Error\nRepositoryExplorerDialog.NamedCluster.Delete.UnexpectedError.Message=Error while deleting Hadoop cluster\nRepositoryExplorerDialog.NamedCluster.Edit.DoesNotExists.Title=Error\nRepositoryExplorerDialog.NamedCluster.Edit.DoesNotExists.Message=Hadoop cluster does not exist\nRepositoryExplorerDialog.NamedCluster.Edit.NoItemSelected.Title=Error\nRepositoryExplorerDialog.NamedCluster.Edit.NoItemSelected.Message=No 
Hadoop cluster selected\nRepositoryExplorerDialog.NamedCluster.Edit.UnexpectedError.Title=Error\nRepositoryExplorerDialog.NamedCluster.Edit.UnexpectedError.Message=Error while editing Hadoop cluster\n\nSpoon.ErrorDialog.ErrorFetchingFromRepo.NamedClusters=Couldn't fetch Hadoop clusters\nSpoon.Dialog.ErrorDeletingNamedCluster.Title=Error\nSpoon.Dialog.ErrorDeletingNamedCluster.Message=There was an error deleting the Hadoop cluster.\nSpoon.Dialog.ErrorSavingNamedCluster.Title=Error\nSpoon.Dialog.ErrorSavingNamedCluster.Message=There was an error saving the Hadoop cluster.\n\nNamedClusterDialog.STRING_NAMED_CLUSTERS=Hadoop clusters\n"
  },
  {
    "path": "legacy/src/main/resources/org/pentaho/di/trans/steps/avroinput/messages/messages_en_US.properties",
    "content": "AvroInput.Name=Avro input\nAvroInput.Description=Reads data from an Avro file\nAvroInput.SuggestedStep=Avro input\n\nAvroInputDialog.Shell.Title=Avro Input\nAvroInputDialog.StepName.Label=Step name\n\nAvroInputDialog.SourceTab.Title=Source\nAvroInputDialog.SchemaTab.Title=Schema\nAvroInputDialog.FieldsTab.Title=Avro fields\nAvroInputDialog.VarsTab.Title=Lookup fields\nAvroInputDialog.FileSource.Label=Avro source is in file\nAvroInputDialog.FieldSource.Label=Avro source is defined in a field\n\nAvroInputDialog.Filename.Label=Avro file\nAvroInputDialog.Button.FileBrowse=Browse\nAvroInputDialog.AvroField.Label=Avro field to decode from\n\nAvroInputDialog.SchemaFilename.Label=Schema file\nAvroInputDialog.DefaultSchemaFilename.Label=Default schema file\nAvroInputDialog.SchemaFilename.TipText=\"Schema to use (required if not reading from a container file)\"\nAvroInputDialog.SchemaInField.Label=Schema is defined in a field\nAvroInputDialog.SchemaInFieldIsPath.Label=Schema in field is a path\nAvroInputDialog.CacheSchemas.Label=Cache schemas in memory\nAvroInputDialog.SchemaFieldName.Label=Field containing schema\n\nAvroInputDialog.JsonEncoded.Label=Json encoded\nAvroInputDialog.JsonEncoded.TipText=Avro data read is encoded as Json rather than binary\nAvroInputDialog.Button.GetFields=Get fields\n\nAvroInputDialog.MissingFields.Label=Do not complain about fields not present in the schema\nAvroInputDialog.Fields.FIELD_NAME=Name\nAvroInputDialog.Fields.FIELD_PATH=Path\nAvroInputDialog.Fields.FIELD_TYPE=Type\nAvroInputDialog.Fields.FIELD_INDEXED=Indexed values\n\nAvroInputDialog.Fields.LOOKUP_NAME=Name\nAvroInputDialog.Fields.LOOKUP_VARIABLE=Variable\nAvroInputDialog.Fields.LOOKUP_DEFAULT_VALUE=Default value\nAvroInputDialog.Button.GetLookupFields=Get incoming fields\n\nAvroInputDialog.PreviewSize.DialogMessage=Enter the number of rows to preview\n\nAvroInput.Message.ClosingFile=Closing Avro file...\nAvroInput.Message.CheckFeedback=Read {0} rows from Avro 
file\nAvroInput.Message.UsingCachedSchema=Using cached schema: {0}\nAvroInput.Message.LoadingSchema=Loading schema: {0}\nAvroInput.Message.ParsingSchema=Parsing schema: {0}\nAvroInput.Message.StoringSchemaInCache=Storing schema in cache\nAvroInput.Message.IncommingSchemaIsMissing=Incoming schema is missing - using default\nAvroInput.Message.FailedToLoadSchmeaUsingDefault=Failed to load schema {0} - using default schema\nAvroInput.Message.NoDefaultSchemaWarning=Warning: reading schema from incoming field but there is no default schema to fall back on\n\nAvroInputDialog.Error.KettleFileException=Unable to open file\nAvroInput.Error.SchemaError=A problem occurred while trying to access schema file from the file system\nAvroInput.Error.NoFieldPathsDefined=No Avro fields to extract have been defined\nAvroInput.Error.NoAvroFileSpecified=No Avro file specified!\nAvroInput.Error.UnableToLoadSchema=Unable to load schema file from {0}\nAvroInput.Error.NoSchemaSupplied=A schema must be supplied for decoding avro from incoming field values\nAvroInput.Error.UnableToLoadSchemaFromContainerFile=Unable to load schema from container file {0}\nAvroInput.Error.NoSchema=Avro file does not contain a schema, and no schema has been provided - can't continue\nAvroInput.Error.UnsupportedTopLevelStructure=Unsupported top-level structure in Avro file\nAvroInput.Error.ObjectReadError=A problem occurred while trying to read an object from the Avro file\nAvroInput.Error.MalformedPathMap=Malformed path - was expecting a map key, but we seem to have run out of path parts!\nAvroInput.Error.MalformedPathMap2=Malformed path - was expecting a map key at this part of the path, but got {0} instead\nAvroInput.Error.MalformedPathArray=Malformed path - was expecting an array index, but we seem to have run out of path parts!\nAvroInput.Error.MalformedPathArray2=Malformed path - was expecting an array index at this part of the path, but got {0} instead\nAvroInput.Error.UnableToParseArrayIndex=Unable to 
parse array index {0} as an integer\nAvroInput.Error.MalformedPathRecord=Malformed path - was expecting a named field in a record, but we seem to have run out of path parts!\nAvroInput.Error.InvalidPath=Invalid path (we're processing a record!)\nAvroInput.Error.NonExistentField=Field {0} does not seem to exist in the schema! \nAvroInput.Error.PathContainsMultipleExpansions=Single path {0} contains multiple - expansions - we can't process that.\nAvroInput.Error.MutipleDifferentExpansions=There are multiple different array/map expansions defined - we can only handle one.\nAvroInput.Error.NoPathSet=No path has been set.\nAvroInput.Error.UnionError1=Can only handle unions involving two types\nAvroInput.Error.UnionError2=Can only handle unions that have two types, where one is 'null'\nAvroInput.Error.JsonDecoderError=A problem occurred while trying to establish a decoder for a JSON encoded avro file\nAvroInput.Error.ProblemDecodingAvroObject=An error occurred while decoding an Avro object: {0}\nAvroInput.Error.UnableToFindIncommingSchemaField=Unable to find schema field in incoming row structure\nAvroInput.Error.IncommingSchemaIsMissingAndNoDefault=Incoming schema is missing and no default is set - can't process this row\nAvroInput.Error.CantLoadIncommingSchemaAndNoDefault=Unable to load incoming schema {0} and no default is set - can't process this row\nAvroInput.Error.UnableToFindSchemaForUnionMap=Encountered a union value of type MAP, but was unable to find the associated schema in the union's list of types\nAvroInput.Error.EncounteredAPrimitivePriorToMapExpansion=Encountered a primitive prior to map expansion in an expansion path.\nAvroInput.Error.UnexpectedRecordFieldTypeAtNonExpansionPoint=Unexpected record field type at pre-expansion point in an expansion path\nAvroInput.Error.UnexpectedArrayElementTypeAtNonExpansionPoint=Unexpected array element type at pre-expansion point in an expansion path\nAvroInput.Error.UnexpectedMapValueTypeAtNonExpansionPoint=Unexpected map value type at pre-expansion point in an expansion path\nAvroInput.Injection.FIELDNAME=The name of the field in the incoming rows to use for a lookup.\nAvroInput.Injection.VARIABLE_NAME=The name of the variable to hold this field's value(s)\nAvroInput.Injection.DEFAULT_VALUE=A default value to use in case the incoming field value is null.\nAvroInput.Injection.FIELD_NAME=The name that the field will take in the output kettle stream.\nAvroInput.Injection.FIELD_PATH=The path to the field in the Avro file.\nAvroInput.Injection.KETTLE_TYPE=The field type.\nAvroInput.Injection.INDEXED_VALS=The indexed values specified.\nAvroInput.Injection.FILENAME=The name of the Avro file to decode.\nAvroInput.Injection.SCHEMA_FILENAME=The name of the Avro schema file.\nAvroInput.Injection.IS_JSON_ENCODED=This option will specify if the Avro data is encoded in JSON.\nAvroInput.Injection.AVRO_INFIELD=This option indicates that the source data comes from a field.\nAvroInput.Injection.AVRO_FIELDNAME=Specify which incoming field contains the Avro data that you want to decode.\nAvroInput.Injection.SCHEMA_INFIELD=This option indicates if the schema comes from a field.\nAvroInput.Injection.SCHEMA_FIELDNAME=Specify which field contains the Avro schema.\nAvroInput.Injection.SCHEMA_INFIELD_IS_PATH=This option indicates if the schema field defines a path to the schema file.\nAvroInput.Injection.CACHE_SCHEMAS_IN_MEMORY=This option enables the step to cache schemas on incoming fields for performance.\nAvroInput.Injection.DONT_COMPLAIN_ABOUT_MISSING_FIELDS=This option will skip errors when specified paths or fields are not present in the active Avro schema.\nAvroInput.Injection.AVRO_FIELDS=\nAvroInput.Injection.LOOKUP_FIELDS=\n"
  },
  {
    "path": "legacy/src/main/resources/org/pentaho/di/trans/steps/couchdbinput/messages/messages_en_US.properties",
    "content": "#File generated by Hitachi Vantara Translator for package 'org.pentaho.di.trans.steps.mongodbinput' in locale 'en_US'\n#\n#\n#Sat Apr 09 14:23:39 CEST 2011\nCouchDbInput.Name=CouchDB input\nCouchDbInput.Description=Reads from a CouchDB view\n\nCouchDbInputDialog.DbName.Label=Database\nCouchDbInputDialog.PreviewSize.DialogMessage=Enter the number of rows to preview\\:\nCouchDbInputMeta.Exception.UnableToSaveStepInfo=Unable to save step information in repository with id\\= \nCouchDbInputDialog.PreviewSize.DialogTitle=Enter preview size\nCouchDbInputDialog.Shell.Title=CouchDB input\nCouchDbInputMeta.Exception.UnableToLoadStepInfo=Unable to load step information from XML\\!\nCouchDbInputDialog.Hostname.Label=Host name or IP address\nCouchDbInputDialog.Stepname.Label=Step name\nCouchDbInputDialog.Port.Label=Port\nCouchDbInput.ErrorConnectingToCouchDb.Exception=There was an error connecting to CouchDB with host {0}, port {1}, database {2} and view {3}\\: \nCouchDbInputMeta.Exception.UnexpectedErrorWhileReadingStepInfo=Unable to read step information from repository\nCouchDbInputDialog.AuthenticationUser.Label=Authentication user\nCouchDbInputDialog.AuthenticationPassword.Label=Authentication password\nCouchDbInput.ErrorAuthenticating.Exception=It was not possible to authenticate the supplied username. Please look at the CouchDB log for further information.\n\nCouchDbInputDialog.DesignDocument.Label=Design document\nCouchDbInputDialog.ViewName.Label=View name\n\nCouchDbInput.Injection.HOSTNAME=The CouchDB host name.\nCouchDbInput.Injection.PORT=The CouchDB port number.\nCouchDbInput.Injection.DBNAME=The name of the CouchDB database.\nCouchDbInput.Injection.DESIGN_DOCUMENT=The CouchDB design document.\nCouchDbInput.Injection.VIEW_NAME=The CouchDB view name.\nCouchDbInput.Injection.AUTHENTICATION_USER=The username required to access CouchDB.\nCouchDbInput.Injection.AUTHENTICATION_PASSWORD=The password required to access CouchDB.\n"
  },
  {
    "path": "legacy/src/main/resources/org/pentaho/di/ui/core/namedcluster/dialog/messages/messages_en_US.properties",
    "content": "NamedClusterDialog.Shell.Title=Hadoop Cluster\nNamedClusterDialog.NamedCluster.Configuration=Configuration\nNamedClusterDialog.NamedCluster.Type=Type\nNamedClusterDialog.NamedCluster.Name=Cluster name:\nNamedClusterDialog.NamedCluster.DisplayName=Display Name\n\nNamedClusterWidget.NamedCluster.New=New...\nNamedClusterWidget.NamedCluster.Edit=Edit...\n\nNamedClusterDialog.HDFS=HDFS\nNamedClusterDialog.ZooKeeper=ZooKeeper\nNamedClusterDialog.JobTracker=JobTracker\nNamedClusterDialog.Oozie=Oozie\nNamedClusterDialog.URL=URL:\nNamedClusterDialog.Port=Port:\nNamedClusterDialog.Hostname=Hostname:\nNamedClusterDialog.Username=Username:\nNamedClusterDialog.Password=Password:\nNamedClusterDialog.Storage=Storage:\n\nNamedClusterDialog.Error=Error\nNamedClusterDialog.Warning=Warning\nNamedClusterDialog.ClusterNameMissing=You must enter a Hadoop cluster name to continue.\nNamedClusterDialog.ShimIdentifierMissing=You must select a Vendor shim to continue.\nNamedClusterDialog.ClusterNameExists.Title=Hadoop Cluster Exists\nNamedClusterDialog.ClusterNameExists=Hadoop Cluster {0} already exists. Do you want to replace it with this one?\nNamedClusterDialog.ClusterNameExists.Replace=Yes, Replace\nNamedClusterDialog.ClusterNameExists.DoNotReplace=No\n\nNamedClusterDialog.NamedCluster.IsMapR=Use MapR client\nNamedClusterDialog.NamedCluster.IsMapR.Title=Select if this configuration is for a MapR cluster\n\nNamedClusterDialog.Shell.Doc=Data/Hadoop/Connect_to_Cluster\n"
  },
  {
    "path": "legacy/src/main/resources/org/pentaho/di/ui/hadoop/configuration/messages/messages_en_US.properties",
    "content": "HadoopConfigurationsController.Toolbar=Hadoop Distribution...\n\nHadoopConfigurationRestartXulDialog.Shell.Title=Restart Required\nHadoopConfigurationRestartXulDialog.Text=Restart the application to apply changes to your Active shim.\nHadoopConfigurationRestartXulDialog.Help.Title=Hadoop Cluster Config Help\nHadoopConfigurationRestartXulDialog.Help.Header=Hadoop Cluster Config Help\nHadoopConfigurationRestartXulDialog.Help.Url=Work_with_data/Connect_to_a_Hadoop_cluster_with_the_PDI_client\n\nHadoopConfigurationSelectionDialog.Shell.Title=Hadoop Distribution\nHadoopConfigurationSelectionDialog.Text=Active Shim:\nHadoopConfigurationSelectionDialog.Help.Title=Hadoop Cluster Config Help\nHadoopConfigurationSelectionDialog.Help.Header=Hadoop Cluster Config Help\nHadoopConfigurationSelectionDialog.Help.Url=Work_with_data/Connect_to_a_Hadoop_cluster_with_the_PDI_client\n\nNoHadoopConfigurationSelectionDialog.Shell.Title=Hadoop Distribution(s) Missing\nNoHadoopConfigurationsXulDialog.Text=There are no Hadoop Distribution shims available. To get shims, please contact your system administrator or click Help to visit the online troubleshooting guide.\n\nDialog.Cancel=Cancel\nDialog.Close=Close\nDialog.Ok=OK\nDialog.Help=Help\n"
  },
  {
    "path": "legacy/src/main/resources/org/pentaho/di/ui/hadoop/configuration/no-configs.xul",
    "content": "<?xml version=\"1.0\"?>\n<window id=\"noHadoopConfigurationSelectionWindow\" title=\"${NoHadoopConfigurationSelectionDialog.Shell.Title}\"\n        xmlns=\"http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul\" appicon=\"ui/images/spoon.ico\">\n\n  <dialog width=\"425\" height=\"100\" id=\"noHadoopConfigurationSelectionDialog\"\n          title=\"${NoHadoopConfigurationSelectionDialog.Shell.Title}\"\n          appicon=\"ui/images/spoon.ico\"\n          buttons=\"\">\n    <vbox padding=\"15\">\n      <label id=\"noHadoopConfigurationSelectionDialogLabel\" value=\"${NoHadoopConfigurationsXulDialog.Text}\"\n             multiline=\"true\" width=\"425\"/>\n      <spacer height=\"15\"/>\n      <hbox padding=\"0\" spacing=\"0\">\n        <button id=\"helpButton\" label=\"${Dialog.Help}\" onclick=\"noHadoopConfigurationsXulDialog.showHelp()\"/>\n        <spacer flex=\"1\"/>\n        <button label=\"${Dialog.Close}\" width=\"75\" onclick=\"noHadoopConfigurationsXulDialog.close()\"/>\n      </hbox>\n    </vbox>\n  </dialog>\n</window>\n"
  },
  {
    "path": "legacy/src/main/resources/org/pentaho/di/ui/hadoop/configuration/restart-prompt.xul",
    "content": "<?xml version=\"1.0\"?>\n<window id=\"hadoopConfigurationSelectionWindow\" title=\"${HadoopConfigurationSelectionDialog.Shell.Title}\"\n        xmlns=\"http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul\" appicon=\"ui/images/spoon.ico\">\n\n  <dialog width=\"300\" height=\"100\" id=\"hadoopConfigurationRestartDialog\"\n          title=\"${HadoopConfigurationRestartXulDialog.Shell.Title}\"\n          appicon=\"ui/images/spoon.ico\"\n          buttons=\"\"\n          resizable=\"false\">\n    <vbox flex=\"1\" padding=\"15\">\n      <hbox flex=\"1\" padding=\"0\" spacing=\"10\">\n        <image src='ui/images/warning:32#32.svg' width='32' height='32'/>\n        <label id=\"hadoopConfigurationSelectionDialogLabel\" value=\"${HadoopConfigurationRestartXulDialog.Text}\"\n               multiline=\"true\" width=\"300\"/>\n      </hbox>\n      <spacer height=\"15\"/>\n      <hbox>\n        <spacer flex=\"1\"/>\n        <button label=\"${Dialog.Close}\" width=\"75\" onclick=\"hadoopConfigurationRestartXulDialog.close()\"/>\n      </hbox>\n    </vbox>\n  </dialog>\n</window>\n"
  },
  {
    "path": "legacy/src/main/resources/org/pentaho/di/ui/hadoop/configuration/select-config.xul",
    "content": "<?xml version=\"1.0\"?>\n<window id=\"hadoopConfigurationSelectionWindow\" title=\"${HadoopConfigurationSelectionDialog.Shell.Title}\"\n        xmlns=\"http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul\" appicon=\"ui/images/spoon.ico\">\n\n  <dialog width=\"420\" height=\"160\" id=\"hadoopConfigurationSelectionDialog\"\n          appicon=\"ui/images/spoon.ico\"\n          title=\"${HadoopConfigurationSelectionDialog.Shell.Title}\"\n          buttons=\"\"\n          resizable=\"false\">\n    <vbox flex=\"1\" padding=\"15\">\n      <label id=\"hadoopConfigurationSelectionDialogLabel\" value=\"${HadoopConfigurationSelectionDialog.Text}\"\n             multiline=\"true\" width=\"425\" height=\"20\" padding=\"0\" spacing=\"0\"/>\n      <listbox id=\"hadoopConfigurationSelectionDialogMenuListBox\" padding=\"0\" spacing=\"0\" flex=\"1\"/>\n      <spacer flex=\"1\"/>\n      <hbox padding=\"0\" spacing=\"0\" align=\"right\">\n        <button id=\"helpButton\" label=\"${Dialog.Help}\" onclick=\"hadoopConfigurationsXulDialog.showHelp()\"/>\n        <spacer flex=\"1\"/>\n        <button label=\"${Dialog.Ok}\" width=\"75\" onclick=\"hadoopConfigurationsXulDialog.accept()\"/>\n        <spacer/>\n        <button label=\"${Dialog.Cancel}\" width=\"75\" onclick=\"hadoopConfigurationsXulDialog.cancel()\"/>\n      </hbox>\n    </vbox>\n  </dialog>\n</window>\n"
  },
  {
    "path": "legacy/src/main/resources/org/pentaho/di/ui/hadoop/configuration/toolbar-overlay.xul",
    "content": "<?xml version=\"1.0\"?>\n<overlay\n    xmlns=\"http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul\">\n  <menupopup id=\"tools-popup\">\n    <menuitem id=\"hadoop-configuration\" label=\"${HadoopConfigurationsController.Toolbar}\" position=\"3\"\n              command=\"hadoopConfigurationsController.promptForShim()\"/>\n  </menupopup>\n</overlay>"
  },
  {
    "path": "legacy/src/main/resources/org/pentaho/hadoop/messages/messages_en_US.properties",
    "content": "BigData.Category.Description=Big Data"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/database/TestSelectCount.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.database;\n\nimport static org.junit.Assert.assertTrue;\n\nimport org.junit.Test;\nimport org.pentaho.di.core.database.DatabaseMeta;\n\n/**\n * Tests the value returned from org.pentaho.di.core.database.DatabaseInterface.getSelectCountStatement for the database\n * the interface is fronting.\n * \n * As this release, Hive uses the following to select the number of rows:\n * \n * SELECT COUNT(1) FROM ....\n * \n * All other databases use:\n * \n * SELECT COUNT(*) FROM ....\n */\npublic class TestSelectCount {\n\n  private static final String HiveSelect = \"select count(1) from \";\n  private static final String TableName = \"NON_EXISTANT\";\n\n  public static final String HiveDatabaseXML = \"<?xml version=\\\"1.0\\\" encoding=\\\"UTF-8\\\"?>\" + \"<connection>\"\n      + \"<name>Hadoop Hive</name>\" + \"<server>127.0.0.1</server>\" + \"<type>Hadoop Hive</type>\" + \"<access></access>\"\n      + \"<database>default</database>\" + \"<port>10000</port>\" + \"<username>sean</username>\"\n      + \"<password>sean</password>\" + \"</connection>\";\n\n  @Test\n  public void testHiveDatabase() throws Exception {\n    try {\n      String expectedSQL = HiveSelect + TableName;\n      DatabaseMeta databaseMeta = new DatabaseMeta( HiveDatabaseXML );\n      String sql = databaseMeta.getDatabaseInterface().getSelectCountStatement( TableName );\n      assertTrue( sql.equalsIgnoreCase( expectedSQL ) );\n    } catch ( Exception e ) {\n      e.printStackTrace();\n    }\n  }\n\n}\n"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/di/core/hadoop/HadoopConfigurationInfoTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.core.hadoop;\n\nimport org.junit.Before;\nimport org.junit.Test;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\n\n/**\n * Created by bryan on 8/14/15.\n */\npublic class HadoopConfigurationInfoTest {\n  private String id;\n  private String name;\n  private boolean isActive;\n  private boolean willBeActiveAfterRestart;\n  private HadoopConfigurationInfo hadoopConfigurationInfo;\n\n  @Before\n  public void setup() {\n    id = \"testId\";\n    name = \"testName\";\n    isActive = true;\n    willBeActiveAfterRestart = true;\n    createHadoopConfigurationInfo();\n  }\n\n  private void createHadoopConfigurationInfo() {\n    hadoopConfigurationInfo = new HadoopConfigurationInfo( id, name, isActive, willBeActiveAfterRestart );\n  }\n\n  @Test\n  public void testGetId() {\n    assertEquals( id, hadoopConfigurationInfo.getId() );\n  }\n\n  @Test\n  public void testGetName() {\n    assertEquals( name, hadoopConfigurationInfo.getName() );\n  }\n\n  @Test\n  public void testIsActive() {\n    assertTrue( hadoopConfigurationInfo.isActive() );\n    isActive = false;\n    createHadoopConfigurationInfo();\n    assertFalse( hadoopConfigurationInfo.isActive() );\n  }\n\n  @Test\n  public void testWillBeActiveAfterRestart() {\n    assertTrue( hadoopConfigurationInfo.isWillBeActiveAfterRestart() );\n    willBeActiveAfterRestart = false;\n    createHadoopConfigurationInfo();\n    assertFalse( hadoopConfigurationInfo.isWillBeActiveAfterRestart() );\n  }\n}\n"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/di/trans/steps/avroinput/AvroInputDataTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.trans.steps.avroinput;\n\nimport org.apache.avro.util.Utf8;\nimport org.junit.Test;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.row.RowMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaString;\nimport org.pentaho.di.core.variables.Variables;\n\nimport java.util.Arrays;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.Map;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertNull;\n\n/**\n * Created by bryan on 10/21/15.\n */\npublic class AvroInputDataTest {\n  @Test\n  public void testCleansePath() {\n    assertEquals( \"const.name\", AvroInputData.cleansePath( \"const.name\" ) );\n    assertEquals( \"const.${var_name}\", AvroInputData.cleansePath( \"const.${var.name}\" ) );\n    assertEquals( \"const.${var.name\", AvroInputData.cleansePath( \"const.${var.name\" ) );\n    assertEquals( \"const.${var_name}.${var_2_name}.const\",\n      AvroInputData.cleansePath( \"const.${var.name}.${var.2.name}.const\" ) );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testAvroArrayExpansionInitThrowsExceptionIfExpansionPathIsEmpty() throws KettleException {\n    new AvroInputData.AvroArrayExpansion( Collections.<AvroInputMeta.AvroField>emptyList() ).init();\n  }\n\n  @Test\n  public void testConvertToKettleValuesNullMap() throws KettleException {\n    assertNull( new AvroInputData.AvroArrayExpansion( Collections.<AvroInputMeta.AvroField>emptyList() )\n      
.convertToKettleValues( (Map<Utf8, Object>) null, null, null, null, true ) );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testConvertToKettleValuesPartsMapMalformed() throws KettleException {\n    AvroInputMeta.AvroField avroField = new AvroInputMeta.AvroField();\n    avroField.m_fieldName = \"a\";\n    avroField.m_fieldPath = \"c.b.a\";\n    avroField.init( 0 );\n    RowMeta rowMeta = new RowMeta();\n    ValueMetaInterface valueMeta = new ValueMetaString();\n    valueMeta.setName( avroField.m_fieldName );\n    avroField.m_kettleType = valueMeta.getTypeDesc();\n    rowMeta.addValueMeta( valueMeta );\n    AvroInputData.AvroArrayExpansion avroArrayExpansion =\n      new AvroInputData.AvroArrayExpansion( Arrays.asList( avroField ) );\n    avroArrayExpansion.m_outputRowMeta = rowMeta;\n    avroArrayExpansion.m_expansionPath = \"c\";\n    avroArrayExpansion.init();\n    assertNull( avroArrayExpansion\n      .convertToKettleValues( new HashMap<Utf8, Object>(), null, null, null, true ) );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testConvertToKettleValuesMapMalformed2() throws KettleException {\n    AvroInputMeta.AvroField avroField = new AvroInputMeta.AvroField();\n    avroField.m_fieldName = \"a\";\n    avroField.m_fieldPath = \"c.b.a\";\n    avroField.init( 0 );\n    RowMeta rowMeta = new RowMeta();\n    ValueMetaInterface valueMeta = new ValueMetaString();\n    valueMeta.setName( avroField.m_fieldName );\n    avroField.m_kettleType = valueMeta.getTypeDesc();\n    rowMeta.addValueMeta( valueMeta );\n    AvroInputData.AvroArrayExpansion avroArrayExpansion =\n      new AvroInputData.AvroArrayExpansion( Arrays.asList( avroField ) );\n    avroArrayExpansion.m_outputRowMeta = rowMeta;\n    avroArrayExpansion.m_expansionPath = \"c\";\n    avroArrayExpansion.init();\n    avroArrayExpansion.reset( new Variables() );\n    avroArrayExpansion.convertToKettleValues( new HashMap<Utf8, Object>(), null, null, null, true );\n  }\n\n  
@Test( expected = KettleException.class )\n  public void testConvertToKettleValuesMapMalformed() throws KettleException {\n    AvroInputMeta.AvroField avroField = new AvroInputMeta.AvroField();\n    avroField.m_fieldName = \"a\";\n    avroField.m_fieldPath = \"c.b.a\";\n    avroField.init( 0 );\n    RowMeta rowMeta = new RowMeta();\n    ValueMetaInterface valueMeta = new ValueMetaString();\n    valueMeta.setName( avroField.m_fieldName );\n    avroField.m_kettleType = valueMeta.getTypeDesc();\n    rowMeta.addValueMeta( valueMeta );\n    AvroInputData.AvroArrayExpansion avroArrayExpansion =\n      new AvroInputData.AvroArrayExpansion( Arrays.asList( avroField ) );\n    avroArrayExpansion.m_outputRowMeta = rowMeta;\n    avroArrayExpansion.m_expansionPath = \"c\";\n    avroArrayExpansion.init();\n    avroArrayExpansion.reset( new Variables() );\n    avroArrayExpansion.convertToKettleValues( new HashMap<Utf8, Object>(), null, null, null, true );\n  }\n}\n"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/di/trans/steps/avroinput/AvroInputMetaAvroFieldTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.trans.steps.avroinput;\n\nimport com.fasterxml.jackson.databind.node.IntNode;\nimport com.fasterxml.jackson.databind.node.LongNode;\nimport org.apache.avro.AvroRuntimeException;\nimport org.apache.avro.Schema;\nimport org.apache.avro.generic.GenericData;\nimport org.apache.avro.util.Utf8;\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Test;\nimport org.mockito.invocation.InvocationOnMock;\nimport org.mockito.stubbing.Answer;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettlePluginException;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.row.value.ValueMetaBigNumber;\nimport org.pentaho.di.core.row.value.ValueMetaBinary;\nimport org.pentaho.di.core.row.value.ValueMetaBoolean;\nimport org.pentaho.di.core.row.value.ValueMetaDate;\nimport org.pentaho.di.core.row.value.ValueMetaInteger;\nimport org.pentaho.di.core.row.value.ValueMetaNumber;\nimport org.pentaho.di.core.row.value.ValueMetaPluginType;\nimport org.pentaho.di.core.row.value.ValueMetaString;\nimport org.pentaho.di.core.variables.VariableSpace;\n\nimport java.io.UnsupportedEncodingException;\nimport java.math.BigDecimal;\nimport java.net.InetAddress;\nimport java.net.UnknownHostException;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collections;\nimport java.util.Date;\nimport java.util.HashMap;\nimport java.util.Map;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertNull;\nimport static 
org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 10/21/15.\n */\npublic class AvroInputMetaAvroFieldTest {\n  private AvroInputMeta.AvroField avroField;\n  private VariableSpace variableSpace;\n  private Map<String, String> variableSpaceMap;\n\n  @BeforeClass\n  public static void before() throws KettlePluginException {\n    PluginRegistry.addPluginType( ValueMetaPluginType.getInstance() );\n    PluginRegistry.init( false );\n  }\n\n  @Before\n  public void setup() throws KettleException {\n    avroField = new AvroInputMeta.AvroField();\n    variableSpace = mock( VariableSpace.class );\n    variableSpaceMap = new HashMap<String, String>();\n    when( variableSpace.environmentSubstitute( anyString() ) ).thenAnswer( new Answer<String>() {\n      @Override public String answer( InvocationOnMock invocation ) throws Throwable {\n        Object key = invocation.getArguments()[ 0 ];\n        String result = variableSpaceMap.get( key );\n        if ( result == null ) {\n          return String.valueOf( key );\n        }\n        return result;\n      }\n    } );\n    avroField.m_fieldPath = \"testFieldPath\";\n  }\n\n  @Test( expected = KettleException.class )\n  public void testInitNoPathSet() throws KettleException {\n    avroField = new AvroInputMeta.AvroField();\n    avroField.init( 0 );\n  }\n\n  @Test\n  public void testGetKettleValueBigNumber() throws KettleException {\n    avroField.m_kettleType = \"BigNumber\";\n    avroField.init( 0 );\n    BigDecimal bigDecimal = new BigDecimal( 10 );\n    BigDecimal expected = new ValueMetaBigNumber().getBigNumber( bigDecimal );\n    assertEquals( expected, avroField.getKettleValue( bigDecimal ) );\n  }\n\n  @Test\n  public void testGetKettleValueBinary() throws KettleException {\n    avroField.m_kettleType = \"Binary\";\n    avroField.init( 0 );\n    byte[] bytes = new byte[] { 0, 2, 3 };\n    assertEquals( new 
ValueMetaBinary().getBinary( bytes ), avroField.getKettleValue( bytes ) );\n  }\n\n  @Test\n  public void testGetKettleValueBoolean() throws KettleException {\n    avroField.m_kettleType = \"Boolean\";\n    avroField.init( 0 );\n    boolean value = false;\n    Boolean expected = new ValueMetaBoolean().getBoolean( value );\n    assertEquals( expected, avroField.getKettleValue( value ) );\n\n    Schema schema = mock( Schema.class );\n    when( schema.getType() ).thenReturn( Schema.Type.BOOLEAN );\n    assertEquals( expected, avroField.getPrimitive( value, schema ) );\n  }\n\n  @Test\n  public void testGetKettleValueDate() throws KettleException {\n    avroField.m_kettleType = \"Date\";\n    avroField.init( 0 );\n    Date date = new Date();\n    assertEquals( new ValueMetaDate().getDate( date ), avroField.getKettleValue( date ) );\n  }\n\n  @Test\n  public void testGetKettleValueInteger() throws KettleException {\n    avroField.m_kettleType = \"Integer\";\n    avroField.init( 0 );\n    long num = 12;\n    Long expected = new ValueMetaInteger().getInteger( num );\n    assertEquals( expected, avroField.getKettleValue( num ) );\n\n\n    Schema schema = mock( Schema.class );\n    when( schema.getType() ).thenReturn( Schema.Type.INT );\n    assertEquals( expected, avroField.getPrimitive( Long.valueOf( num ).intValue(), schema ) );\n  }\n\n  @Test\n  public void testGetKettleValueNumber() throws KettleException {\n    avroField.m_kettleType = \"Number\";\n    avroField.init( 0 );\n    Double number = 2.5;\n    Double expected = new ValueMetaNumber().getNumber( number );\n    assertEquals( expected, avroField.getKettleValue( number ) );\n\n    Schema schema = mock( Schema.class );\n    when( schema.getType() ).thenReturn( Schema.Type.FLOAT );\n    assertEquals( expected, avroField.getPrimitive( number.floatValue(), schema ) );\n  }\n\n  @Test\n  public void testGetKettleValueString() throws KettleException {\n    avroField.m_kettleType = \"String\";\n    avroField.init( 0 
);\n    String string = \"value: 2.54\";\n    assertEquals( new ValueMetaString().getString( string ), avroField.getKettleValue( string ) );\n  }\n\n  @Test\n  public void testGetKettleValueInternetAddress() throws KettleException, UnknownHostException {\n    avroField.m_kettleType = \"Internet Address\";\n    avroField.init( 0 );\n    InetAddress inetAddress = InetAddress.getByName( \"192.168.1.1\" );\n    assertNull( avroField.getKettleValue( inetAddress ) );\n\n    Schema schema = mock( Schema.class );\n    when( schema.getType() ).thenReturn( Schema.Type.RECORD );\n    assertNull( avroField.getPrimitive( inetAddress, schema ) );\n  }\n\n  @Test\n  public void testGetPrimitiveNull() throws KettleException {\n    avroField.init( 0 );\n    assertNull( avroField.getPrimitive( null, null ) );\n  }\n\n  @Test\n  public void testGetPrimitiveFixed() throws KettleException, UnsupportedEncodingException {\n    Schema schema = mock( Schema.class );\n    when( schema.getType() ).thenReturn( Schema.Type.FIXED );\n    GenericData.Fixed fixed = new GenericData.Fixed( schema );\n    String testBytes = \"testBytes\";\n    fixed.bytes( testBytes.getBytes( \"UTF-8\" ) );\n    assertEquals( testBytes, new String( (byte[]) avroField.getPrimitive( fixed, schema ), \"UTF-8\" ) );\n  }\n\n  @Test\n  public void testConvertToKettleValueMapNull() throws KettleException {\n    assertNull( avroField.convertToKettleValue( (Map<Utf8, Object>) null, null, null, false ) );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testConvertToKettleValueMapExceptionWithNoTempParts() throws KettleException {\n    avroField.init( 0 );\n    avroField.convertToKettleValue( Collections.<Utf8, Object>emptyMap(), mock( Schema.class ), mock( Schema.class ), false );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testConvertToKettleValueMapExceptionWithMalformedPath() throws KettleException {\n    avroField.init( 0 );\n    avroField.reset( variableSpace );\n    
avroField.convertToKettleValue( Collections.<Utf8, Object>emptyMap(), mock( Schema.class ), mock( Schema.class ), false );\n  }\n\n  @Test\n  public void testConvertToKettleValueMapNullValue() throws KettleException {\n    avroField.m_fieldPath = \"[key]\";\n    avroField.init( 0 );\n    avroField.reset( variableSpace );\n    assertNull( avroField.convertToKettleValue( new HashMap<Utf8, Object>(), mock( Schema.class ), mock( Schema.class ), false ) );\n  }\n\n  @Test\n  public void testConvertToKettleValueMapStringValue() throws KettleException {\n    avroField.m_kettleType = \"String\";\n    avroField.m_fieldPath = \"[key]\";\n    avroField.init( 0 );\n    avroField.reset( variableSpace );\n    Map<Utf8, Object> map = new HashMap<Utf8, Object>();\n    String testString = \"testString\";\n    map.put( new Utf8( \"key\" ), testString );\n    Schema schema = mock( Schema.class );\n    Schema valueSchema = mock( Schema.class );\n    when( schema.getValueType() ).thenReturn( valueSchema );\n    when( valueSchema.getType() ).thenReturn( Schema.Type.STRING );\n    assertEquals( testString, avroField.convertToKettleValue( map, schema, mock( Schema.class ), false ) );\n  }\n\n  @Test\n  public void testConvertToKettleValueArrayNull() throws KettleException {\n    assertNull( avroField.convertToKettleValue( (GenericData.Array) null, null, mock( Schema.class ), false ) );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testConvertToKettleValueArrayExceptionWithNoTempParts() throws KettleException {\n    avroField.init( 0 );\n    Schema schema = mock( Schema.class );\n    when( schema.getType() ).thenReturn( Schema.Type.ARRAY );\n    avroField.convertToKettleValue( new GenericData.Array( 0, schema ), mock( Schema.class ), mock( Schema.class ), false );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testConvertToKettleValueArrayWithMalformedPath() throws KettleException {\n    avroField.init( 0 );\n    avroField.reset( variableSpace );\n  
  Schema schema = mock( Schema.class );\n    when( schema.getType() ).thenReturn( Schema.Type.ARRAY );\n    avroField.convertToKettleValue( new GenericData.Array( 0, schema ), mock( Schema.class ), mock( Schema.class ), false );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testConvertToKettleValueArrayNumberFormatException() throws KettleException {\n    avroField.m_fieldPath = \"[key]\";\n    avroField.init( 0 );\n    avroField.reset( variableSpace );\n    Schema schema = mock( Schema.class );\n    when( schema.getType() ).thenReturn( Schema.Type.ARRAY );\n    avroField.convertToKettleValue( new GenericData.Array( 0, schema ), mock( Schema.class ), mock( Schema.class ), false );\n  }\n\n  @Test\n  public void testConvertToKettleValueArrayIndexLessThanZero() throws KettleException {\n    avroField.m_fieldPath = \"[-1]\";\n    avroField.init( 0 );\n    avroField.reset( variableSpace );\n    Schema schema = mock( Schema.class );\n    when( schema.getType() ).thenReturn( Schema.Type.ARRAY );\n    assertNull( avroField.convertToKettleValue( new GenericData.Array( 0, schema ), mock( Schema.class ), mock( Schema.class ), false ) );\n  }\n\n  @Test\n  public void testConvertToKettleValueArrayIndexTooLarge() throws KettleException {\n    avroField.m_fieldPath = \"[1]\";\n    avroField.init( 0 );\n    avroField.reset( variableSpace );\n    Schema schema = mock( Schema.class );\n    when( schema.getType() ).thenReturn( Schema.Type.ARRAY );\n    assertNull( avroField.convertToKettleValue( new GenericData.Array( 1, schema ), mock( Schema.class ), mock( Schema.class ), false ) );\n  }\n\n  @Test\n  public void testConvertToKettleValueArrayNullElement() throws KettleException {\n    avroField.m_fieldPath = \"[0]\";\n    avroField.init( 0 );\n    avroField.reset( variableSpace );\n    Schema schema = mock( Schema.class );\n    when( schema.getType() ).thenReturn( Schema.Type.ARRAY );\n    GenericData.Array array = new GenericData.Array( 1, schema );\n    
array.add( null );\n    assertNull( avroField.convertToKettleValue( array, mock( Schema.class ), mock( Schema.class ), false ) );\n  }\n\n  @Test\n  public void testConvertToKettleValueArrayStringValue() throws KettleException {\n    avroField.m_kettleType = \"String\";\n    avroField.m_fieldPath = \"[0]\";\n    avroField.init( 0 );\n    avroField.reset( variableSpace );\n    String testString = \"testString\";\n    Schema schema = mock( Schema.class );\n    Schema valueSchema = mock( Schema.class );\n    when( schema.getElementType() ).thenReturn( valueSchema );\n    when( schema.getType() ).thenReturn( Schema.Type.ARRAY );\n    when( valueSchema.getType() ).thenReturn( Schema.Type.STRING );\n    GenericData.Array array = new GenericData.Array( 1, schema );\n    array.add( testString );\n    assertEquals( testString, avroField.convertToKettleValue( array, schema, mock( Schema.class ), false ) );\n  }\n\n  @Test\n  public void testConvertToKettleValueRecordNull() throws KettleException {\n    assertNull( avroField.convertToKettleValue( (GenericData.Record) null, null, mock( Schema.class ), false ) );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testConvertToKettleValueRecordExceptionWithNoTempParts() throws KettleException {\n    avroField.init( 0 );\n    Schema schema = mock( Schema.class );\n    when( schema.getType() ).thenReturn( Schema.Type.RECORD );\n    avroField.convertToKettleValue( new GenericData.Record( schema ), mock( Schema.class ), mock( Schema.class ), false );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testConvertToKettleValueRecordWithMalformedPath() throws KettleException {\n    avroField.m_fieldPath = \"[0]\";\n    avroField.init( 0 );\n    avroField.reset( variableSpace );\n    Schema schema = mock( Schema.class );\n    when( schema.getType() ).thenReturn( Schema.Type.RECORD );\n    avroField.convertToKettleValue( new GenericData.Record( schema ), mock( Schema.class ), mock( Schema.class ), false 
);\n  }\n\n  @Test( expected = AvroRuntimeException.class )\n  public void testConvertToKettleValueRecordNullSchema() throws KettleException {\n    avroField.m_fieldPath = \"key\";\n    avroField.init( 0 );\n    avroField.reset( variableSpace );\n    Schema schema = mock( Schema.class );\n    when( schema.getType() ).thenReturn( Schema.Type.RECORD );\n    assertNull( avroField.convertToKettleValue( new GenericData.Record( schema ), mock( Schema.class ), mock( Schema.class ), true ) );\n    avroField.convertToKettleValue( new GenericData.Record( schema ), mock( Schema.class ), mock( Schema.class ), false );\n  }\n\n  @Test\n  public void testConvertToKettleValueRecordStringValue() throws KettleException {\n    avroField.m_kettleType = \"String\";\n    avroField.m_fieldPath = \"key\";\n    avroField.init( 0 );\n    avroField.reset( variableSpace );\n    String testString = \"testString\";\n    Schema schema = mock( Schema.class );\n    Schema.Field field = mock( Schema.Field.class );\n    when( schema.getFields() ).thenReturn( new ArrayList<>( Arrays.asList( field ) ) );\n    when( schema.getField( avroField.m_fieldPath ) ).thenReturn( field );\n    when( schema.getType() ).thenReturn( Schema.Type.RECORD );\n    Schema fieldSchema = mock( Schema.class );\n    when( field.schema() ).thenReturn( fieldSchema );\n    when( fieldSchema.getType() ).thenReturn( Schema.Type.STRING );\n    GenericData.Record record = new GenericData.Record( schema );\n    record.put( avroField.m_fieldPath, testString );\n    assertEquals( testString, avroField.convertToKettleValue( record, schema, mock( Schema.class ), false ) );\n  }\n\n  @Test\n  public void testGetDefaultValueFromDefaultSchemaIfNull() throws KettleException {\n    avroField.m_kettleType = \"Integer\";\n    avroField.m_fieldPath = \"key\";\n    avroField.init( 0 );\n    avroField.reset( variableSpace );\n    IntNode node = new IntNode( 5 );\n    Schema schemaToUse = mock( Schema.class );\n    Schema defaultSchema = mock( 
Schema.class );\n    Schema fieldSchema = mock( Schema.class );\n    Schema.Field field = mock( Schema.Field.class );\n    GenericData.Record record = mock( GenericData.Record.class );\n    when( record.get( avroField.m_fieldPath ) ).thenReturn( null );\n    when( defaultSchema.getField( avroField.m_fieldPath ) ).thenReturn( field );\n    when( field.defaultVal() ).thenReturn( node );\n    when( field.schema() ).thenReturn( fieldSchema );\n    when( fieldSchema.getType() ).thenReturn( Schema.Type.INT );\n    assertEquals( 5L, avroField.convertToKettleValue( record, schemaToUse, defaultSchema, true ) );\n  }\n\n  @Test\n  public void testGetPrimitiveFromConvertNode() throws KettleException {\n    avroField.m_kettleType = \"Integer\";\n    avroField.init( 0 );\n    Schema schema = mock( Schema.class );\n    LongNode node = new LongNode( 22 );\n    when( schema.getType() ).thenReturn( Schema.Type.LONG );\n    assertEquals( 22L,  avroField.getPrimitive( node, schema ) );\n  }\n}\n"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/di/trans/steps/avroinput/AvroInputMetaLookupFieldTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.trans.steps.avroinput;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.invocation.InvocationOnMock;\nimport org.mockito.stubbing.Answer;\nimport org.pentaho.di.core.exception.KettleValueException;\nimport org.pentaho.di.core.row.RowMeta;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaString;\nimport org.pentaho.di.core.variables.VariableSpace;\n\nimport java.util.HashMap;\nimport java.util.Map;\n\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.verifyNoMoreInteractions;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 10/21/15.\n */\npublic class AvroInputMetaLookupFieldTest {\n\n  private AvroInputMeta.LookupField lookupField;\n  private RowMetaInterface rowMetaInterface;\n  private VariableSpace variableSpace;\n  private Map<String, String> variableSpaceMap;\n  private ValueMetaInterface valueMetaInterface;\n\n  @Before\n  public void setup() {\n    lookupField = new AvroInputMeta.LookupField();\n    lookupField.m_fieldName = \"testFieldName\";\n    lookupField.m_variableName = \"testVariableName\";\n    rowMetaInterface = mock( RowMetaInterface.class );\n    when( rowMetaInterface.indexOfValue( lookupField.m_fieldName ) ).thenReturn( 0 );\n    valueMetaInterface = new 
ValueMetaString();\n    when( rowMetaInterface.getValueMeta( 0 ) ).thenReturn( valueMetaInterface );\n    variableSpace = mock( VariableSpace.class );\n    variableSpaceMap = new HashMap<>();\n    when( variableSpace.environmentSubstitute( anyString() ) ).thenAnswer( new Answer<String>() {\n      @Override public String answer( InvocationOnMock invocation ) throws Throwable {\n        return variableSpaceMap.get( invocation.getArguments()[ 0 ] );\n      }\n    } );\n  }\n\n  @Test\n  public void testInitNullRowMeta() {\n    assertFalse( lookupField.init( null, null ) );\n  }\n\n  @Test\n  public void testInitFieldMissing() {\n    assertFalse( lookupField.init( new RowMeta(), null ) );\n  }\n\n  @Test\n  public void testInitVariableNameEmpty() {\n    lookupField.m_variableName = \"\";\n    assertFalse( lookupField.init( rowMetaInterface, null ) );\n  }\n\n  @Test\n  public void testInit() {\n    assertTrue( lookupField.init( rowMetaInterface, variableSpace ) );\n  }\n\n  @Test\n  public void setVariableInvalid() {\n    lookupField.init( null, null );\n    lookupField.setVariable( variableSpace, new Object[ 0 ] );\n    verifyNoMoreInteractions( variableSpace );\n  }\n\n  @Test\n  public void testSetVariableNull() {\n    lookupField.init( rowMetaInterface, variableSpace );\n    lookupField.setVariable( variableSpace, new Object[] { null } );\n    verify( variableSpace ).setVariable( lookupField.m_cleansedVariableName, \"null\" );\n  }\n\n  @Test\n  public void testSetVariableExceptionToNull() throws KettleValueException {\n    valueMetaInterface = mock( ValueMetaInterface.class );\n    rowMetaInterface = mock( RowMetaInterface.class );\n    when( rowMetaInterface.indexOfValue( lookupField.m_variableName ) ).thenReturn( 0 );\n    when( rowMetaInterface.getValueMeta( 0 ) ).thenReturn( valueMetaInterface );\n    Object o = new Object();\n    when( valueMetaInterface.isNull( o ) 
).thenThrow( new KettleValueException() );\n    lookupField.init( rowMetaInterface, variableSpace );\n    lookupField.setVariable( variableSpace, new Object[] { o } );\n    verify( variableSpace ).setVariable( lookupField.m_cleansedVariableName, \"null\" );\n  }\n\n  @Test\n  public void testSetVariableNotNull() {\n    lookupField.init( rowMetaInterface, variableSpace );\n    String test = \"test\";\n    lookupField.setVariable( variableSpace, new Object[] { test } );\n    verify( variableSpace ).setVariable( lookupField.m_cleansedVariableName, test );\n  }\n\n  @Test\n  public void testSetVariableNullDefault() {\n    lookupField.init( rowMetaInterface, variableSpace );\n    lookupField.m_resolvedDefaultValue = \"resdef\";\n    lookupField.setVariable( variableSpace, new Object[] { null } );\n    verify( variableSpace ).setVariable( lookupField.m_cleansedVariableName, lookupField.m_resolvedDefaultValue );\n  }\n}\n"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/di/trans/steps/avroinput/AvroInputMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.trans.steps.avroinput;\n\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Test;\nimport org.mockito.ArgumentCaptor;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettlePluginException;\nimport org.pentaho.di.core.exception.KettleStepException;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.logging.LoggingObjectInterface;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaPluginType;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.trans.step.StepMeta;\nimport org.pentaho.di.trans.steps.loadsave.LoadSaveTester;\nimport org.pentaho.di.trans.steps.loadsave.validator.FieldLoadSaveValidator;\nimport org.pentaho.di.trans.steps.loadsave.validator.ListLoadSaveValidator;\nimport org.pentaho.di.trans.steps.loadsave.validator.StringLoadSaveValidator;\nimport org.pentaho.di.trans.steps.mock.StepMockHelper;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport 
static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 10/21/15.\n */\npublic class AvroInputMetaTest {\n  private AvroInputMeta avroInputMeta;\n\n  @BeforeClass\n  public static void before() throws KettlePluginException {\n    PluginRegistry.addPluginType( ValueMetaPluginType.getInstance() );\n    PluginRegistry.init( false );\n  }\n\n  @Before\n  public void setup() {\n    avroInputMeta = new AvroInputMeta();\n  }\n\n  @Test\n  public void testLoadSave() throws KettleException {\n    List<String> commonAttributes = new ArrayList<>();\n    commonAttributes.add( \"avroInField\" );\n    commonAttributes.add( \"avroFieldName\" );\n    commonAttributes.add( \"schemaInField\" );\n    commonAttributes.add( \"schemaFieldName\" );\n    commonAttributes.add( \"schemaInFieldIsPath\" );\n    commonAttributes.add( \"cacheSchemasInMemory\" );\n    commonAttributes.add( \"filename\" );\n    commonAttributes.add( \"schemaFilename\" );\n    commonAttributes.add( \"avroIsJsonEncoded\" );\n    commonAttributes.add( \"avroFields\" );\n    commonAttributes.add( \"lookupFields\" );\n    commonAttributes.add( \"dontComplainAboutMissingFields\" );\n\n    Map<String, FieldLoadSaveValidator<?>> fieldLoadSaveValidatorTypeMap = new HashMap<>();\n\n    Map<String, FieldLoadSaveValidator<?>> fieldLoadSaveValidatorAttributeMap = new HashMap<>();\n    fieldLoadSaveValidatorAttributeMap.put( \"avroFields\", new ListLoadSaveValidator<AvroInputMeta.AvroField>(\n      new FieldLoadSaveValidator<AvroInputMeta.AvroField>() {\n        private final StringLoadSaveValidator stringLoadSaveValidator = new StringLoadSaveValidator();\n        private final ListLoadSaveValidator<String> stringListLoadSaveValidator =\n          new ListLoadSaveValidator<>( stringLoadSaveValidator );\n\n        @Override public AvroInputMeta.AvroField getTestObject() {\n          AvroInputMeta.AvroField avroField = new AvroInputMeta.AvroField();\n          avroField.m_kettleType = 
stringLoadSaveValidator.getTestObject();\n          avroField.m_fieldPath = stringLoadSaveValidator.getTestObject();\n          avroField.m_fieldName = stringLoadSaveValidator.getTestObject();\n          avroField.m_indexedVals = stringListLoadSaveValidator.getTestObject();\n          return avroField;\n        }\n\n        @Override public boolean validateTestObject( AvroInputMeta.AvroField avroField, Object o ) {\n          if ( !( o instanceof AvroInputMeta.AvroField ) ) {\n            return false;\n          }\n          AvroInputMeta.AvroField avroField2 = (AvroInputMeta.AvroField) o;\n          return stringLoadSaveValidator.validateTestObject( avroField.m_kettleType, avroField2.m_kettleType )\n            && stringLoadSaveValidator.validateTestObject( avroField.m_fieldPath, avroField2.m_fieldPath )\n            && stringLoadSaveValidator.validateTestObject( avroField.m_fieldName, avroField2.m_fieldName )\n            && stringListLoadSaveValidator.validateTestObject( avroField.m_indexedVals, avroField2.m_indexedVals );\n        }\n      } ) );\n    fieldLoadSaveValidatorAttributeMap.put( \"lookupFields\", new ListLoadSaveValidator<AvroInputMeta.LookupField>(\n      new FieldLoadSaveValidator<AvroInputMeta.LookupField>() {\n        private final StringLoadSaveValidator stringLoadSaveValidator = new StringLoadSaveValidator();\n\n        @Override public AvroInputMeta.LookupField getTestObject() {\n          AvroInputMeta.LookupField lookupField = new AvroInputMeta.LookupField();\n          lookupField.m_fieldName = stringLoadSaveValidator.getTestObject();\n          lookupField.m_variableName = stringLoadSaveValidator.getTestObject();\n          lookupField.m_defaultValue = stringLoadSaveValidator.getTestObject();\n          return lookupField;\n        }\n\n        @Override public boolean validateTestObject( AvroInputMeta.LookupField lookupField, Object o ) {\n          if ( !( o instanceof AvroInputMeta.LookupField ) ) {\n            return false;\n        
  }\n          AvroInputMeta.LookupField lookupField2 = (AvroInputMeta.LookupField) o;\n          return stringLoadSaveValidator.validateTestObject( lookupField.m_fieldName, lookupField2.m_fieldName )\n            && stringLoadSaveValidator.validateTestObject( lookupField.m_variableName, lookupField2.m_variableName )\n            && stringLoadSaveValidator.validateTestObject( lookupField.m_defaultValue, lookupField2.m_defaultValue );\n        }\n      } ) );\n\n    LoadSaveTester<AvroInputMeta> avroInputMetaLoadSaveTester =\n      new LoadSaveTester<AvroInputMeta>( AvroInputMeta.class, commonAttributes, new HashMap<String, String>(),\n        new HashMap<String, String>(), fieldLoadSaveValidatorAttributeMap, fieldLoadSaveValidatorTypeMap );\n\n    avroInputMetaLoadSaveTester.testSerialization();\n  }\n\n  @Test\n  public void testGetStepData() {\n    assertTrue( avroInputMeta.getStepData() instanceof AvroInputData );\n  }\n\n  @Test\n  public void testGetStep() {\n    StepMockHelper<AvroInputMeta, AvroInputData> stepMockHelper =\n      new StepMockHelper<AvroInputMeta, AvroInputData>( \"avro\", AvroInputMeta.class, AvroInputData.class );\n    when( stepMockHelper.logChannelInterfaceFactory.create( any(), any( LoggingObjectInterface.class ) ) )\n      .thenReturn( mock( LogChannelInterface.class ) );\n    assertTrue( avroInputMeta\n      .getStep( stepMockHelper.stepMeta, stepMockHelper.stepDataInterface, 0, stepMockHelper.transMeta,\n        stepMockHelper.trans ) instanceof AvroInput );\n  }\n\n  @Test\n  public void testIndexedValsList() {\n    String val1 = \"val1\";\n    String val2 = \"val2\";\n    String val3 = \"val3\";\n    List<String> indexedValsList = AvroInputMeta.indexedValsList( val1 + \" , \" + val2 + \",\" + val3 + \" \" );\n    assertEquals( new ArrayList<>( Arrays.asList( val1, val2, val3 ) ), indexedValsList );\n    assertEquals( val1 + \",\" + val2 + \",\" + val3, AvroInputMeta.indexedValsList( indexedValsList ) );\n  }\n\n  @Test\n  public void 
testSetDefault() {\n    // Doesn't do anything\n    avroInputMeta.setDefault();\n  }\n\n  @Test\n  public void testGetDialogClassName() {\n    assertEquals( AvroInputDialog.class.getCanonicalName(), avroInputMeta.getDialogClassName() );\n  }\n\n  @Test\n  public void testSupportsErrorHandling() {\n    assertTrue( avroInputMeta.supportsErrorHandling() );\n  }\n\n  @Test\n  public void testGetFieldsMFields() throws KettleStepException {\n    AvroInputMeta.AvroField avroField = new AvroInputMeta.AvroField();\n    avroField.m_fieldName = \"testFieldName\";\n    avroField.m_kettleType = \"String\";\n    String testOrigin = \"testOrigin\";\n    String abc = \"abc\";\n    avroField.m_indexedVals = new ArrayList<>( Arrays.asList( abc ) );\n    avroInputMeta.setAvroFields( new ArrayList<>( Arrays.asList( avroField ) ) );\n    RowMetaInterface rowMeta = mock( RowMetaInterface.class );\n    avroInputMeta.getFields( DefaultBowl.getInstance(), rowMeta, testOrigin, new RowMetaInterface[ 0 ],\n      mock( StepMeta.class ), mock( VariableSpace.class ) );\n    ArgumentCaptor<ValueMetaInterface> valueMetaInterfaceArgumentCaptor = ArgumentCaptor.forClass( ValueMetaInterface.class );\n    verify( rowMeta ).addValueMeta( valueMetaInterfaceArgumentCaptor.capture() );\n    ValueMetaInterface valueMetaInterface = valueMetaInterfaceArgumentCaptor.getValue();\n    assertEquals( avroField.m_fieldName, valueMetaInterface.getName() );\n    assertEquals( testOrigin, valueMetaInterface.getOrigin() );\n    assertEquals( ValueMetaInterface.TYPE_STRING, valueMetaInterface.getType() );\n    assertArrayEquals( new Object[] { abc }, valueMetaInterface.getIndex() );\n  }\n}\n"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/di/trans/steps/avroinput/AvroInputTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.trans.steps.avroinput;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertTrue;\nimport static org.junit.Assert.fail;\nimport static org.mockito.Mockito.mock;\nimport static org.pentaho.di.trans.steps.avroinput.AvroInputData.checkFieldPaths;\nimport static org.pentaho.di.trans.steps.avroinput.AvroInputData.getLeafFields;\n\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\nimport org.apache.avro.AvroRuntimeException;\nimport org.apache.avro.Schema;\nimport org.apache.avro.generic.GenericData;\nimport org.apache.avro.generic.GenericDatumReader;\nimport org.apache.avro.io.Decoder;\nimport org.apache.avro.io.DecoderFactory;\nimport org.apache.avro.util.Utf8;\nimport org.junit.Test;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.LogChannel;\nimport org.pentaho.di.core.row.RowMeta;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMeta;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.core.row.value.ValueMetaPluginType;\nimport org.pentaho.di.core.exception.KettlePluginException;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.trans.steps.avroinput.AvroInputData.AvroArrayExpansion;\n\n/**\n * Unit tests for AvroInput. 
Tests basic path handling logic and map expansion mechanism.\n * \n * @author Mark Hall (mhall{[at]}pentaho{[dot]}com)\n * @version $Revision: $\n */\npublic class AvroInputTest {\n\n  protected static String s_schemaTopLevelRecordManyFields = \"{\" + \"\\\"type\\\": \\\"record\\\",\" + \"\\\"name\\\": \\\"Test\\\",\"\n      + \"\\\"fields\\\": [\" + \"{\\\"name\\\": \\\"field1\\\", \\\"type\\\": \\\"string\\\"},\"\n      + \"{\\\"name\\\": \\\"field2\\\", \\\"type\\\": \\\"string\\\"},\" + \"{\\\"name\\\": \\\"field3\\\", \\\"type\\\": \\\"string\\\"},\"\n      + \"{\\\"name\\\": \\\"field4\\\", \\\"type\\\": \\\"string\\\"},\" + \"{\\\"name\\\": \\\"field5\\\", \\\"type\\\": \\\"string\\\"},\"\n      + \"{\\\"name\\\": \\\"field6\\\", \\\"type\\\": \\\"string\\\"},\" + \"{\\\"name\\\": \\\"field7\\\", \\\"type\\\": \\\"string\\\"},\"\n      + \"{\\\"name\\\": \\\"field8\\\", \\\"type\\\": \\\"string\\\"},\" + \"{\\\"name\\\": \\\"field9\\\", \\\"type\\\": \\\"string\\\"},\"\n      + \"{\\\"name\\\": \\\"field10\\\", \\\"type\\\": \\\"string\\\"},\" + \"{\\\"name\\\": \\\"field11\\\", \\\"type\\\": \\\"string\\\"},\"\n      + \"{\\\"name\\\": \\\"field12\\\", \\\"type\\\": \\\"string\\\"}\" + \"]\" + \"}\";\n\n  protected static String[] s_jsonDataTopLevelRecordManyFields =\n      new String[] { \"{\\\"field1\\\":\\\"value1\\\",\\\"field2\\\":\\\"value2\\\",\\\"field3\\\":\\\"value3\\\",\\\"field4\\\":\\\"value4\\\",\"\n            + \"\\\"field5\\\":\\\"value5\\\",\\\"field6\\\":\\\"value6\\\",\\\"field7\\\":\\\"value7\\\",\\\"field8\\\":\\\"value8\\\",\"\n            + \"\\\"field9\\\":\\\"value9\\\",\\\"field10\\\":\\\"value10\\\",\\\"field11\\\":\\\"value11\\\",\\\"field12\\\":\\\"value12\\\"}\" };\n\n  protected static String s_schemaTopLevelRecord = \"{\" + \"\\\"type\\\": \\\"record\\\",\" + \"\\\"name\\\": \\\"Person\\\",\"\n      + \"\\\"fields\\\": [\" + \"{\\\"name\\\": \\\"name\\\", \\\"type\\\": \\\"string\\\"},\" + \"{\\\"name\\\": \\\"age\\\", 
\\\"type\\\": \\\"int\\\"},\"\n      + \"{\\\"name\\\": \\\"emails\\\", \\\"type\\\": {\\\"type\\\": \\\"array\\\", \\\"items\\\": \\\"string\\\"}}\" + \"]\" + \"}\";\n\n  protected static String s_schemaTopLevelRecord2 = \"{\" + \"\\\"type\\\": \\\"record\\\",\" + \"\\\"name\\\": \\\"Person\\\",\"\n      + \"\\\"fields\\\": [\" + \"{\\\"name\\\": \\\"name\\\", \\\"type\\\": \\\"string\\\"},\" + \"{\\\"name\\\": \\\"age\\\", \\\"type\\\": \\\"int\\\"},\"\n      + \"{\\\"name\\\": \\\"nickname\\\", \\\"type\\\": \\\"string\\\"},\"\n      + \"{\\\"name\\\": \\\"emails\\\", \\\"type\\\": {\\\"type\\\": \\\"array\\\", \\\"items\\\": \\\"string\\\"}}\" + \"]\" + \"}\";\n\n  protected static String[] s_jsonDataTopLevelRecord = new String[] {\n      \"{\\\"name\\\":\\\"bob\\\",\\\"age\\\":20,\\\"emails\\\":[\\\"here is an email\\\",\\\"and another one\\\"]}\",\n      \"{\\\"name\\\":\\\"fred\\\",\\\"age\\\":25,\\\"emails\\\":[\\\"hi there bob\\\",\\\"good to see you!\\\",\\\"Yarghhh!\\\"]}\",\n      \"{\\\"name\\\":\\\"zaphod\\\",\\\"age\\\":254,\\\"emails\\\":[\\\"I'm from beetlejuice\\\",\\\"yeah yeah yeah\\\"]}\" };\n\n  protected static String[] s_jsonDataTopLevelRecord2 =\n      new String[] {\n          \"{\\\"name\\\":\\\"bob\\\",\\\"age\\\":20,\\\"nickname\\\":\\\"goofy\\\",\\\"emails\\\":[\\\"here is an email\\\",\\\"and another one\\\"]}\",\n          \"{\\\"name\\\":\\\"fred\\\",\\\"age\\\":25,\\\"nickname\\\":\\\"mickey\\\",\\\"emails\\\":[\\\"hi there bob\\\",\\\"good to see you!\\\",\\\"Yarghhh!\\\"]}\",\n          \"{\\\"name\\\":\\\"zaphod\\\",\\\"age\\\":254,\\\"nickname\\\":\\\"donald\\\",\\\"emails\\\":[\\\"I'm from beetlejuice\\\",\\\"yeah yeah yeah\\\"]}\" };\n\n  protected static String s_schemaTopLevelRecordWithUnion = \"{\" + \"\\\"type\\\": \\\"record\\\",\" + \"\\\"name\\\": \\\"Person\\\",\"\n      + \"\\\"fields\\\": [\" + \"{\\\"name\\\": \\\"name\\\", \\\"type\\\": [\\\"string\\\", \\\"null\\\"]},\"\n      + \"{\\\"name\\\": \\\"age\\\", 
\\\"type\\\": \\\"int\\\"},\"\n      + \"{\\\"name\\\": \\\"emails\\\", \\\"type\\\": {\\\"type\\\": \\\"array\\\", \\\"items\\\": \\\"string\\\"}}\" + \"]\" + \"}\";\n\n  protected static String[] s_jsonDataTopLevelRecordWithUnion = new String[] {\n      \"{\\\"name\\\":{\\\"string\\\":\\\"bob\\\"},\\\"age\\\":20,\\\"emails\\\":[\\\"here is an email\\\",\\\"and another one\\\"]}\",\n      \"{\\\"name\\\":{\\\"string\\\":\\\"fred\\\"},\\\"age\\\":25,\\\"emails\\\":[\\\"hi there bob\\\",\\\"good to see you!\\\",\\\"Yarghhh!\\\"]}\",\n      \"{\\\"name\\\":null,\\\"age\\\":254,\\\"emails\\\":[\\\"I'm from beetlejuice\\\",\\\"yeah yeah yeah\\\"]}\" };\n\n  protected static String s_schemaTopLevelRecordWithMultiTypeUnion = \"{\" + \"\\\"type\\\": \\\"record\\\",\"\n      + \"\\\"name\\\": \\\"Person\\\",\" + \"\\\"fields\\\": [\" + \"{\\\"name\\\": \\\"name\\\", \\\"type\\\": [\\\"string\\\", \\\"int\\\", \\\"null\\\"]},\"\n      + \"{\\\"name\\\": \\\"age\\\", \\\"type\\\": \\\"int\\\"},\"\n      + \"{\\\"name\\\": \\\"emails\\\", \\\"type\\\": {\\\"type\\\": \\\"array\\\", \\\"items\\\": \\\"string\\\"}}\" + \"]\" + \"}\";\n\n  protected static String[] s_jsonDataTopLevelRecordWithMultiTypeUnion = new String[] {\n      \"{\\\"name\\\":{\\\"string\\\":\\\"bob\\\"},\\\"age\\\":20,\\\"emails\\\":[\\\"here is an email\\\",\\\"and another one\\\"]}\",\n      \"{\\\"name\\\":{\\\"int\\\":42},\\\"age\\\":25,\\\"emails\\\":[\\\"hi there bob\\\",\\\"good to see you!\\\",\\\"Yarghhh!\\\"]}\",\n      \"{\\\"name\\\":null,\\\"age\\\":254,\\\"emails\\\":[\\\"I'm from beetlejuice\\\",\\\"yeah yeah yeah\\\"]}\" };\n\n  protected static String s_schemaTopLevelMap = \"{\" + \"\\\"type\\\": \\\"map\\\",\" + \"\\\"values\\\":{\" + \"\\\"type\\\": \\\"record\\\",\"\n      + \"\\\"name\\\":\\\"person\\\",\" + \"\\\"fields\\\": [\" + \"{\\\"name\\\": \\\"name\\\", \\\"type\\\": \\\"string\\\"},\"\n      + \"{\\\"name\\\": \\\"age\\\", \\\"type\\\": \\\"int\\\"},\"\n      + 
\"{\\\"name\\\": \\\"emails\\\", \\\"type\\\": {\\\"type\\\": \\\"array\\\", \\\"items\\\": \\\"string\\\"}}\" + \"]\" + \"}\" + \"}\";\n\n  protected static String s_jsonDataTopLevelMap =\n      \"{\\\"bob\\\":{\\\"name\\\":\\\"bob\\\",\\\"age\\\":20,\\\"emails\\\":[\\\"here is an email\\\",\\\"and another one\\\"]},\"\n          + \"\\\"fred\\\":{\\\"name\\\":\\\"fred\\\",\\\"age\\\":25,\\\"emails\\\":[\\\"hi there bob\\\",\\\"good to see you!\\\",\\\"Yarghhh!\\\"]},\"\n          + \"\\\"zaphod\\\":{\\\"name\\\":\\\"zaphod\\\",\\\"age\\\":254,\\\"emails\\\":[\\\"I'm from beetlejuice\\\",\\\"yeah yeah yeah\\\"]}}\";\n\n  protected static String s_schemaTopLevelRecordWithFixedType = \"{\" + \"\\\"type\\\": \\\"record\\\",\" + \"\\\"name\\\": \\\"Person\\\",\"\n      + \"\\\"fields\\\": [\" + \"{\\\"name\\\": \\\"name\\\", \\\"type\\\": {\\\"type\\\": \\\"fixed\\\", \\\"size\\\": 2, \\\"name\\\": \\\"myfixed\\\"}},\"\n      + \"{\\\"name\\\": \\\"age\\\", \\\"type\\\": \\\"int\\\"},\"\n      + \"{\\\"name\\\": \\\"emails\\\", \\\"type\\\": {\\\"type\\\": \\\"array\\\", \\\"items\\\": \\\"string\\\"}}\" + \"]\" + \"}\";\n\n  protected static String[] s_jsonDataTopLevelRecordWithFixedType =\n      new String[] { \"{\\\"name\\\":\\\"\\\\uFFFF\\\\uFFFF\\\",\\\"age\\\":20,\\\"emails\\\":[\\\"here is an email\\\",\\\"and another one\\\"]}\" };\n\n  protected static String s_schemaTopLevelEnumWithNamedType = \"[\"\n      + \"{\\\"type\\\": \\\"fixed\\\", \\\"size\\\": 2, \\\"name\\\": \\\"myfixed\\\"},\" + \"{\\\"type\\\": \\\"record\\\",\"\n      + \"\\\"name\\\": \\\"Person\\\",\" + \"\\\"fields\\\": [\" + \"{\\\"name\\\": \\\"name\\\", \\\"type\\\": \\\"myfixed\\\"},\"\n      + \"{\\\"name\\\": \\\"age\\\", \\\"type\\\": \\\"int\\\"},\"\n      + \"{\\\"name\\\": \\\"emails\\\", \\\"type\\\": {\\\"type\\\": \\\"array\\\", \\\"items\\\": \\\"string\\\"}}\" + \"]\" + \"}]\";\n\n  protected static String[] s_jsonDataTopLevelUnion =\n      new String[] { \"{\\\"Person\\\": 
{\\\"name\\\":\\\"\\\\uFFFF\\\\uFFFF\\\",\\\"age\\\":20,\\\"emails\\\":[\\\"here is an email\\\",\\\"and another one\\\"]}}\" };\n\n  static {\n    try {\n      ValueMetaPluginType.getInstance().searchPlugins();\n    } catch ( KettlePluginException ex ) {\n      ex.printStackTrace();\n    }\n  }\n\n  @Test\n  public void testGetLeafFieldsFromSchema() throws KettleException {\n\n    Schema.Parser parser = new Schema.Parser();\n    Schema schema = parser.parse( s_schemaTopLevelRecord );\n    List<AvroInputMeta.AvroField> leafFields = getLeafFields( schema );\n\n    assertTrue( leafFields.size() == 3 );\n  }\n\n  @Test\n  public void testGetSimpleTopLevelRecordFieldsInteger() throws KettleException, IOException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema schema = parser.parse( s_schemaTopLevelRecord );\n\n    Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( schema );\n    GenericDatumReader reader = new GenericDatumReader( schema );\n\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.age\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_INTEGER );\n\n    Long[] actualVals = new Long[] { 20L, 25L, 254L };\n    int i = 0;\n    for ( String row : s_jsonDataTopLevelRecord ) {\n      decoder = factory.jsonDecoder( schema, row );\n      reader.read( topLevel, decoder );\n\n      field.init( 0 ); // output index isn't needed for the test\n      field.reset( new Variables() );\n\n      Object result = field.convertToKettleValue( topLevel, schema, mock( Schema.class ), false );\n\n      assertTrue( result != null );\n      assertTrue( result instanceof Long );\n      assertEquals( result, actualVals[i++] );\n    }\n  }\n\n  @Test\n  public void testGetSimpleTopLevelRecordFieldsString() throws KettleException, IOException {\n    Schema.Parser parser = new 
Schema.Parser();\n    Schema schema = parser.parse( s_schemaTopLevelRecord );\n\n    Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( schema );\n    GenericDatumReader reader = new GenericDatumReader( schema );\n\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.name\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n\n    String[] actualVals = new String[] { \"bob\", \"fred\", \"zaphod\" };\n    int i = 0;\n    for ( String row : s_jsonDataTopLevelRecord ) {\n      decoder = factory.jsonDecoder( schema, row );\n      reader.read( topLevel, decoder );\n\n      field.init( 0 ); // output index isn't needed for the test\n      field.reset( new Variables() );\n\n      Object result = field.convertToKettleValue( topLevel, schema, mock( Schema.class ), false );\n\n      assertTrue( result != null );\n      assertTrue( result instanceof String );\n      assertEquals( result.toString(), actualVals[i++] );\n    }\n  }\n\n  @Test\n  public void testTopLevelRecordWithFixedType() throws KettleException, IOException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema schema = parser.parse( s_schemaTopLevelRecordWithFixedType );\n\n    Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( schema );\n    GenericDatumReader reader = new GenericDatumReader( schema );\n\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.name\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_BINARY );\n\n    // String[] actualVals = new String[] { \"bob\", \"fred\", \"zaphod\" };\n    int i = 0;\n    for ( String row : s_jsonDataTopLevelRecordWithFixedType ) {\n      decoder = factory.jsonDecoder( schema, row );\n 
     reader.read( topLevel, decoder );\n\n      field.init( 0 ); // output index isn't needed for the test\n      field.reset( new Variables() );\n\n      Object result = field.convertToKettleValue( topLevel, schema, mock( Schema.class ), false );\n\n      assertTrue( result != null );\n      assertTrue( result instanceof byte[] );\n      assertEquals( ( (byte[]) result ).length, 2 );\n      assertEquals( new String( (byte[]) result ), \"??\" );\n    }\n  }\n\n  @Test\n  public void testSchemaWithTopLevelUnionAndNamedType() throws KettleException, IOException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema schema = parser.parse( s_schemaTopLevelEnumWithNamedType );\n\n    Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericDatumReader reader = new GenericDatumReader( schema );\n    Schema firstRec = schema.getTypes().get( 1 );\n    GenericData.Record topLevel = new GenericData.Record( firstRec );\n\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.name\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_BINARY );\n\n    int i = 0;\n    for ( String row : s_jsonDataTopLevelUnion ) {\n      decoder = factory.jsonDecoder( schema, row );\n      reader.read( topLevel, decoder );\n\n      field.init( 0 ); // output index isn't needed for the test\n      field.reset( new Variables() );\n\n      // invoke the conversion using the schema of the record just read\n      Object result = field.convertToKettleValue( topLevel, topLevel.getSchema(), mock( Schema.class ), false );\n\n      assertTrue( result != null );\n      assertTrue( result instanceof byte[] );\n      assertEquals( ( (byte[]) result ).length, 2 );\n      assertEquals( new String( (byte[]) result ), \"??\" );\n    }\n  }\n\n  @Test\n  public void testDecodeUsingSchemaInIncomingField() throws KettleException {\n    Schema.Parser parser = new Schema.Parser();\n    
Schema defaultSchema = parser.parse( s_schemaTopLevelRecord );\n\n    // Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( defaultSchema );\n    GenericDatumReader reader = new GenericDatumReader( defaultSchema );\n\n    List<AvroInputMeta.AvroField> paths = new ArrayList<AvroInputMeta.AvroField>();\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.name\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n    paths.add( field );\n\n    Object[] incomingKettleRow = new Object[2];\n    VariableSpace space = new Variables();\n    RowMetaInterface outputMeta = new RowMeta();\n    ValueMetaInterface vm = new ValueMeta();\n    vm.setName( \"IncomingAvro\" );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n    vm = new ValueMeta();\n    vm.setName( \"SchemaToUse\" );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n\n    vm = new ValueMeta();\n    vm.setName( field.m_fieldName );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n\n    AvroInputData data = new AvroInputData();\n    data.m_normalFields = paths;\n    data.m_decodingFromField = true;\n    data.m_jsonEncoded = true;\n    data.m_newFieldOffset = 2;\n    data.m_inStream = null;\n    data.m_fieldToDecodeIndex = 0;\n    data.m_schemaToUse = defaultSchema;\n    data.m_defaultSchema = defaultSchema;\n    data.m_topLevelRecord = topLevel;\n    data.m_factory = factory;\n    data.m_datumReader = reader;\n    data.m_defaultDatumReader = reader;\n    data.m_outputRowMeta = outputMeta;\n    data.m_schemaInField = true;\n    data.m_schemaFieldIndex = 1;\n    data.m_log = new LogChannel( this );\n    data.init();\n\n    
String[] expectedNames = { \"bob\", \"fred\", \"zaphod\" };\n    int count = 0;\n    for ( String row : s_jsonDataTopLevelRecord ) {\n      incomingKettleRow[0] = row;\n      incomingKettleRow[1] = s_schemaTopLevelRecord;\n\n      Object[][] result = data.avroObjectToKettle( DefaultBowl.getInstance(), incomingKettleRow, space );\n      assertTrue( result != null );\n      assertTrue( result.length == 1 ); // one output row\n      assertEquals( result[0][2].toString(), expectedNames[count] );\n      count++;\n    }\n  }\n\n  @Test\n  public void testDecodeUsingSchemaInIncomingFieldIncompatibleSchema() throws KettleException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema defaultSchema = parser.parse( s_schemaTopLevelRecord );\n\n    // Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( defaultSchema );\n    GenericDatumReader reader = new GenericDatumReader( defaultSchema );\n\n    List<AvroInputMeta.AvroField> paths = new ArrayList<AvroInputMeta.AvroField>();\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.name\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n    paths.add( field );\n\n    Object[] incomingKettleRow = new Object[2];\n    VariableSpace space = new Variables();\n    RowMetaInterface outputMeta = new RowMeta();\n    ValueMetaInterface vm = new ValueMeta();\n    vm.setName( \"IncomingAvro\" );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n    vm = new ValueMeta();\n    vm.setName( \"SchemaToUse\" );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n\n    vm = new ValueMeta();\n    vm.setName( field.m_fieldName );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    
outputMeta.addValueMeta( vm );\n\n    AvroInputData data = new AvroInputData();\n    data.m_normalFields = paths;\n    data.m_decodingFromField = true;\n    data.m_jsonEncoded = true;\n    data.m_newFieldOffset = 2;\n    data.m_inStream = null;\n    data.m_fieldToDecodeIndex = 0;\n    data.m_schemaToUse = defaultSchema;\n    data.m_defaultSchema = defaultSchema;\n    data.m_topLevelRecord = topLevel;\n    data.m_factory = factory;\n    data.m_datumReader = reader;\n    data.m_defaultDatumReader = reader;\n    data.m_outputRowMeta = outputMeta;\n    data.m_schemaInField = true;\n    data.m_schemaFieldIndex = 1;\n    data.m_log = new LogChannel( this );\n    data.init();\n\n    String row = s_jsonDataTopLevelRecord[0];\n    incomingKettleRow[0] = row;\n    incomingKettleRow[1] = s_schemaTopLevelRecord2;\n\n    try {\n      data.avroObjectToKettle( DefaultBowl.getInstance(), incomingKettleRow, space );\n      fail( \"Was expecting an exception as the schema supplied is incompatible with the data\" );\n    } catch ( Exception ex ) {\n      // expected: the supplied schema declares a nickname field that the incoming data lacks\n    }\n  }\n\n  @Test\n  public void testDecodeUsingSchemaInIncomingFieldCacheSchemas() throws KettleException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema defaultSchema = parser.parse( s_schemaTopLevelRecord );\n\n    // Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( defaultSchema );\n    GenericDatumReader reader = new GenericDatumReader( defaultSchema );\n\n    List<AvroInputMeta.AvroField> paths = new ArrayList<AvroInputMeta.AvroField>();\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.name\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n    paths.add( field );\n\n    Object[] incomingKettleRow 
= new Object[2];\n    VariableSpace space = new Variables();\n    RowMetaInterface outputMeta = new RowMeta();\n    ValueMetaInterface vm = new ValueMeta();\n    vm.setName( \"IncomingAvro\" );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n    vm = new ValueMeta();\n    vm.setName( \"SchemaToUse\" );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n\n    vm = new ValueMeta();\n    vm.setName( field.m_fieldName );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n\n    AvroInputData data = new AvroInputData();\n    data.m_normalFields = paths;\n    data.m_decodingFromField = true;\n    data.m_jsonEncoded = true;\n    data.m_newFieldOffset = 2;\n    data.m_inStream = null;\n    data.m_fieldToDecodeIndex = 0;\n    data.m_schemaToUse = defaultSchema;\n    data.m_defaultSchema = defaultSchema;\n    data.m_topLevelRecord = topLevel;\n    data.m_factory = factory;\n    data.m_datumReader = reader;\n    data.m_defaultDatumReader = reader;\n    data.m_outputRowMeta = outputMeta;\n    data.m_cacheSchemas = true;\n    data.m_schemaInField = true;\n    data.m_schemaFieldIndex = 1;\n    data.m_log = new LogChannel( this );\n    data.init();\n\n    String[] expectedNames = { \"bob\", \"fred\", \"zaphod\" };\n    int count = 0;\n    for ( String row : s_jsonDataTopLevelRecord ) {\n      incomingKettleRow[0] = row;\n      incomingKettleRow[1] = s_schemaTopLevelRecord;\n\n      Object[][] result = data.avroObjectToKettle( DefaultBowl.getInstance(), incomingKettleRow, space );\n      assertTrue( result != null );\n      assertTrue( result.length == 1 ); // one output row\n      assertEquals( result[0][2].toString(), expectedNames[count] );\n      count++;\n    }\n\n    // should be just one entry in the cache since all rows have the same schema\n    assertTrue( 
data.m_schemaCache != null );\n    assertTrue( data.m_schemaCache.size() == 1 );\n  }\n\n  @Test\n  public void testDecodeUsingSchemaInIncomingFieldTwoDifferentSchemas() throws KettleException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema defaultSchema = parser.parse( s_schemaTopLevelRecord );\n\n    // Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( defaultSchema );\n    GenericDatumReader reader = new GenericDatumReader( defaultSchema );\n\n    List<AvroInputMeta.AvroField> paths = new ArrayList<AvroInputMeta.AvroField>();\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.name\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n    paths.add( field );\n\n    Object[] incomingKettleRow = new Object[2];\n    VariableSpace space = new Variables();\n    RowMetaInterface outputMeta = new RowMeta();\n    ValueMetaInterface vm = new ValueMeta();\n    vm.setName( \"IncomingAvro\" );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n    vm = new ValueMeta();\n    vm.setName( \"SchemaToUse\" );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n\n    vm = new ValueMeta();\n    vm.setName( field.m_fieldName );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n\n    AvroInputData data = new AvroInputData();\n    data.m_normalFields = paths;\n    data.m_decodingFromField = true;\n    data.m_jsonEncoded = true;\n    data.m_newFieldOffset = 2;\n    data.m_inStream = null;\n    data.m_fieldToDecodeIndex = 0;\n    data.m_schemaToUse = defaultSchema;\n    data.m_defaultSchema = defaultSchema;\n    data.m_topLevelRecord = topLevel;\n    data.m_factory = factory;\n  
  data.m_datumReader = reader;\n    data.m_defaultDatumReader = reader;\n    data.m_outputRowMeta = outputMeta;\n    data.m_schemaInField = true;\n    data.m_schemaFieldIndex = 1;\n    data.m_log = new LogChannel( this );\n    data.init();\n\n    String[] expectedNames = { \"bob\", \"fred\", \"zaphod\" };\n    for ( int i = 0; i < 3; i++ ) {\n      String row = null;\n      String schema = null;\n\n      // first schema for first and last row\n      if ( i == 0 || i == 2 ) {\n        row = s_jsonDataTopLevelRecord[i];\n        schema = s_schemaTopLevelRecord;\n      } else {\n        row = s_jsonDataTopLevelRecord2[i];\n        schema = s_schemaTopLevelRecord2;\n      }\n\n      incomingKettleRow[0] = row;\n      incomingKettleRow[1] = schema;\n\n      Object[][] result = data.avroObjectToKettle( DefaultBowl.getInstance(), incomingKettleRow, space );\n      assertTrue( result != null );\n      assertTrue( result.length == 1 ); // one output row\n      assertEquals( result[0][2].toString(), expectedNames[i] );\n    }\n  }\n\n  // PPP-5046 -  Higher versions of avro throws exception when schema field is empty [GenericData#get(String key)].\n  @Test( expected = AvroRuntimeException.class )\n  public void testDecodeUsingSchemaInIncomingFieldTwoDifferentSchemasDoComplainAboutMissingField()\n    throws KettleException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema defaultSchema = parser.parse( s_schemaTopLevelRecord );\n\n    // Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( defaultSchema );\n    GenericDatumReader reader = new GenericDatumReader( defaultSchema );\n\n    List<AvroInputMeta.AvroField> paths = new ArrayList<AvroInputMeta.AvroField>();\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.name\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n    
paths.add( field );\n\n    AvroInputMeta.AvroField field2 = new AvroInputMeta.AvroField();\n    field2.m_fieldName = \"test2\";\n    field2.m_fieldPath = \"$.nickname\";\n    field2.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n    paths.add( field2 );\n\n    Object[] incomingKettleRow = new Object[2];\n    VariableSpace space = new Variables();\n    RowMetaInterface outputMeta = new RowMeta();\n    ValueMetaInterface vm = new ValueMeta();\n    vm.setName( \"IncomingAvro\" );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n    vm = new ValueMeta();\n    vm.setName( \"SchemaToUse\" );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n\n    vm = new ValueMeta();\n    vm.setName( field.m_fieldName );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n    vm = new ValueMeta();\n    vm.setName( field2.m_fieldName );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n\n    AvroInputData data = new AvroInputData();\n    data.m_normalFields = paths;\n    data.m_decodingFromField = true;\n    data.m_jsonEncoded = true;\n    data.m_newFieldOffset = 2;\n    data.m_inStream = null;\n    data.m_fieldToDecodeIndex = 0;\n    data.m_schemaToUse = defaultSchema;\n    data.m_defaultSchema = defaultSchema;\n    data.m_topLevelRecord = topLevel;\n    data.m_factory = factory;\n    data.m_datumReader = reader;\n    data.m_defaultDatumReader = reader;\n    data.m_outputRowMeta = outputMeta;\n    data.m_schemaInField = true;\n    data.m_schemaFieldIndex = 1;\n    data.m_dontComplainAboutMissingFields = true;\n    data.m_log = new LogChannel( this );\n    data.init();\n\n    String[] expectedNames = { \"bob\", \"fred\", \"zaphod\" };\n    for ( int i = 0; i < 3; i++ ) {\n      String row 
= null;\n      String schema = null;\n\n      // first schema for first and last row\n      if ( i == 0 || i == 2 ) {\n        row = s_jsonDataTopLevelRecord[i];\n        schema = s_schemaTopLevelRecord;\n      } else {\n        row = s_jsonDataTopLevelRecord2[i];\n        schema = s_schemaTopLevelRecord2;\n      }\n\n      incomingKettleRow[0] = row;\n      incomingKettleRow[1] = schema;\n\n      Object[][] result = data.avroObjectToKettle( DefaultBowl.getInstance(), incomingKettleRow, space );\n      assertTrue( result != null );\n      assertTrue( result.length == 1 ); // one output row\n      assertEquals( result[0][2].toString(), expectedNames[i] );\n\n      if ( i == 1 ) {\n        assertEquals( result[0][3].toString(), \"mickey\" );\n      } else {\n        // nickname field does not exist in the first schema - result should be\n        // null\n        assertTrue( result[0][3] == null );\n      }\n    }\n  }\n\n  @Test\n  public void testDecodeUsingSchemaInIncomingFieldFallbackToDefaultSchema() throws KettleException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema defaultSchema = parser.parse( s_schemaTopLevelRecord );\n\n    // Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( defaultSchema );\n    GenericDatumReader reader = new GenericDatumReader( defaultSchema );\n\n    List<AvroInputMeta.AvroField> paths = new ArrayList<AvroInputMeta.AvroField>();\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.name\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n    paths.add( field );\n\n    Object[] incomingKettleRow = new Object[2];\n    VariableSpace space = new Variables();\n    RowMetaInterface outputMeta = new RowMeta();\n    ValueMetaInterface vm = new ValueMeta();\n    vm.setName( \"IncomingAvro\" );\n    vm.setOrigin( \"Dummy\" );\n    
vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n    vm = new ValueMeta();\n    vm.setName( \"SchemaToUse\" );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n\n    vm = new ValueMeta();\n    vm.setName( field.m_fieldName );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n\n    AvroInputData data = new AvroInputData();\n    data.m_normalFields = paths;\n    data.m_decodingFromField = true;\n    data.m_jsonEncoded = true;\n    data.m_newFieldOffset = 2;\n    data.m_inStream = null;\n    data.m_fieldToDecodeIndex = 0;\n    data.m_schemaToUse = defaultSchema;\n    data.m_defaultSchema = defaultSchema;\n    data.m_topLevelRecord = topLevel;\n    data.m_factory = factory;\n    data.m_datumReader = reader;\n    data.m_defaultDatumReader = reader;\n    data.m_outputRowMeta = outputMeta;\n    data.m_schemaInField = true;\n    data.m_schemaFieldIndex = 1;\n    data.m_log = new LogChannel( this );\n    data.init();\n\n    int count = 0;\n    String[] expectedNames = { \"bob\", \"fred\", \"zaphod\" };\n    for ( String row : s_jsonDataTopLevelRecord ) {\n      incomingKettleRow[0] = row;\n\n      // no incoming schema for row 2 - should successfully fall back to the\n      // default schema\n      if ( count == 1 ) {\n        incomingKettleRow[1] = null;\n      } else {\n        incomingKettleRow[1] = s_schemaTopLevelRecord;\n      }\n\n      Object[][] result = data.avroObjectToKettle( DefaultBowl.getInstance(), incomingKettleRow, space );\n      assertTrue( result != null );\n      assertTrue( result.length == 1 ); // one output row\n      assertEquals( result[0][2].toString(), expectedNames[count] );\n      count++;\n    }\n  }\n\n  @Test\n  public void testConvertToKettleRowManyFields() throws KettleException, IOException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema schema = 
parser.parse( s_schemaTopLevelRecordManyFields );\n\n    // Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( schema );\n    GenericDatumReader reader = new GenericDatumReader( schema );\n\n    List<AvroInputMeta.AvroField> paths = new ArrayList<AvroInputMeta.AvroField>();\n    for ( int i = 1; i <= 12; i++ ) {\n      AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n      field.m_fieldName = \"test\" + i;\n      field.m_fieldPath = \"$.field\" + i;\n      field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n      paths.add( field );\n    }\n\n    String incomingAvro = s_jsonDataTopLevelRecordManyFields[0];\n    Object[] incomingKettleRow = new Object[1];\n    incomingKettleRow[0] = incomingAvro;\n    // decoder = factory.jsonDecoder(schema, row);\n\n    AvroInputData data = new AvroInputData();\n\n    VariableSpace space = new Variables();\n    RowMetaInterface outputMeta = new RowMeta();\n    ValueMetaInterface vm = new ValueMeta();\n    vm.setName( \"IncomingAvro\" );\n    vm.setOrigin( \"Dummy\" );\n    vm.setType( ValueMetaInterface.TYPE_STRING );\n    outputMeta.addValueMeta( vm );\n\n    for ( int i = 0; i < 12; i++ ) {\n      AvroInputMeta.AvroField f = paths.get( i );\n      vm = new ValueMeta();\n      vm.setName( f.m_fieldName );\n      vm.setOrigin( \"Dummy\" );\n      vm.setType( ValueMetaInterface.TYPE_STRING );\n      outputMeta.addValueMeta( vm );\n    }\n\n    data.m_normalFields = paths;\n    data.m_decodingFromField = true;\n    data.m_jsonEncoded = true;\n    data.m_newFieldOffset = 1;\n    data.m_inStream = null;\n    data.m_fieldToDecodeIndex = 0;\n    data.m_schemaToUse = schema;\n    data.m_topLevelRecord = topLevel;\n    data.m_factory = factory;\n    data.m_datumReader = reader;\n    data.m_outputRowMeta = outputMeta;\n    data.init();\n\n    Object[][] result = data.avroObjectToKettle( DefaultBowl.getInstance(), 
incomingKettleRow, space );\n\n    assertTrue( result != null );\n    assertTrue( result.length == 1 ); // one output row\n\n    int count = 0;\n    for ( int i = 0; i < result[0].length; i++ ) {\n      if ( result[0][i] != null ) {\n        count++;\n      }\n    }\n\n    // should be 13 output fields (incoming + 12 decoded) in over-allocated\n    // kettle output row\n    assertTrue( count == 13 );\n  }\n\n  @Test\n  public void testGetNonExistentFieldFromTopLevelRecord() throws KettleException, IOException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema schema = parser.parse( s_schemaTopLevelRecord );\n\n    Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( schema );\n    GenericDatumReader reader = new GenericDatumReader( schema );\n\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.nonExistent.notThere\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n\n    decoder = factory.jsonDecoder( schema, s_jsonDataTopLevelRecord[0] );\n    reader.read( topLevel, decoder );\n\n    field.init( 0 );\n    field.reset( new Variables() );\n\n    try {\n      Object result = field.convertToKettleValue( topLevel, schema, mock( Schema.class ), false );\n      fail( \"Was expecting an exception as $.nonExistent.notThere does not exist in the schema\" );\n    } catch ( Exception ex ) {\n      assertTrue( ex.getMessage().contains( \"Field nonExistent does not seem to exist in the schema!\" ) );\n    }\n  }\n\n  @Test\n  public void testGetTopLevelRecordArrayElement() throws KettleException, IOException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema schema = parser.parse( s_schemaTopLevelRecord );\n\n    Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( schema );\n    
GenericDatumReader reader = new GenericDatumReader( schema );\n\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.emails[1]\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n\n    String[] actualVals = new String[] { \"and another one\", \"good to see you!\", \"yeah yeah yeah\" };\n    int i = 0;\n    for ( String row : s_jsonDataTopLevelRecord ) {\n      decoder = factory.jsonDecoder( schema, row );\n      reader.read( topLevel, decoder );\n\n      field.init( 0 ); // output index isn't needed for the test\n      field.reset( new Variables() );\n\n      Object result = field.convertToKettleValue( topLevel, schema, mock( Schema.class ), false );\n\n      assertTrue( result != null );\n      assertTrue( result instanceof String );\n      assertEquals( result.toString(), actualVals[i++] );\n    }\n  }\n\n  @Test\n  public void testGetTopLevelRecordPositiveIndexOutOfBoundsArrayElement() throws KettleException, IOException {\n\n    Schema.Parser parser = new Schema.Parser();\n    Schema schema = parser.parse( s_schemaTopLevelRecord );\n\n    Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( schema );\n    GenericDatumReader reader = new GenericDatumReader( schema );\n\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.emails[4]\"; // non existent in all records\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n\n    // no exception is thrown in this case - the step just outputs null for the\n    // corresponding field\n    for ( String row : s_jsonDataTopLevelRecord ) {\n      decoder = factory.jsonDecoder( schema, row );\n      reader.read( topLevel, decoder );\n\n      field.init( 0 ); // output index isn't needed for the test\n      field.reset( new 
Variables() );\n\n      Object result = field.convertToKettleValue( topLevel, schema, mock( Schema.class ), false );\n      assertTrue( result == null );\n    }\n  }\n\n  @Test\n  public void testGetTopLevelMapSimpleRecordField() throws KettleException, IOException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema schema = parser.parse( s_schemaTopLevelMap );\n\n    Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    Map<Utf8, Object> topLevel = new HashMap<Utf8, Object>();\n    GenericDatumReader reader = new GenericDatumReader( schema );\n\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$[bob].age\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_INTEGER );\n\n    decoder = factory.jsonDecoder( schema, s_jsonDataTopLevelMap );\n    reader.read( topLevel, decoder );\n\n    field.init( 0 ); // output index isn't needed for the test\n    field.reset( new Variables() );\n\n    Object result = field.convertToKettleValue( topLevel, schema, mock( Schema.class ), false );\n\n    assertTrue( result != null );\n    assertTrue( result instanceof Long );\n    assertEquals( result, new Long( 20 ) );\n  }\n\n  // test getting an array element from an array in a record that is itself\n  // stored in a\n  // top-level map\n  @Test\n  public void testGetTopLevelMapArrayElementFromRecord() throws KettleException, IOException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema schema = parser.parse( s_schemaTopLevelMap );\n\n    Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    Map<Utf8, Object> topLevel = new HashMap<Utf8, Object>();\n    GenericDatumReader reader = new GenericDatumReader( schema );\n\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$[bob].emails[0]\";\n    field.m_kettleType = ValueMeta.getTypeDesc( 
ValueMetaInterface.TYPE_STRING );\n\n    decoder = factory.jsonDecoder( schema, s_jsonDataTopLevelMap );\n    reader.read( topLevel, decoder );\n\n    field.init( 0 ); // output index isn't needed for the test\n    field.reset( new Variables() );\n\n    Object result = field.convertToKettleValue( topLevel, schema, mock( Schema.class ), false );\n\n    assertTrue( result != null );\n    assertTrue( result instanceof String );\n    assertEquals( result, \"here is an email\" );\n  }\n\n  @Test\n  public void testGetNonExistentTopLevelMapEntry() throws KettleException, IOException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema schema = parser.parse( s_schemaTopLevelMap );\n\n    Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    Map<Utf8, Object> topLevel = new HashMap<Utf8, Object>();\n    GenericDatumReader reader = new GenericDatumReader( schema );\n\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$[noddy].emails[0]\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n\n    decoder = factory.jsonDecoder( schema, s_jsonDataTopLevelMap );\n    reader.read( topLevel, decoder );\n\n    field.init( 0 ); // output index isn't needed for the test\n    field.reset( new Variables() );\n\n    Object result = field.convertToKettleValue( topLevel, schema, mock( Schema.class ), false );\n\n    assertTrue( result == null );\n  }\n\n  @Test\n  public void testUnionHandling() throws KettleException, IOException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema schema = parser.parse( s_schemaTopLevelRecordWithUnion );\n\n    Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( schema );\n    GenericDatumReader reader = new GenericDatumReader( schema );\n\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    
field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.name\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n\n    String[] actualVals = new String[] { \"bob\", \"fred\", null };\n    int i = 0;\n    for ( String row : s_jsonDataTopLevelRecordWithUnion ) {\n      decoder = factory.jsonDecoder( schema, row );\n      reader.read( topLevel, decoder );\n\n      field.init( 0 ); // output index isn't needed for the test\n      field.reset( new Variables() );\n\n      Object result = field.convertToKettleValue( topLevel, schema, mock( Schema.class ), false );\n\n      if ( i != 2 ) {\n        assertTrue( result != null );\n        assertTrue( result instanceof String );\n        assertEquals( result.toString(), actualVals[i++] );\n      } else {\n        assertTrue( result == null );\n      }\n    }\n  }\n\n  @Test\n  public void testMultiTypeUnionHandling() throws KettleException, IOException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema schema = parser.parse( s_schemaTopLevelRecordWithMultiTypeUnion );\n\n    Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    GenericData.Record topLevel = new GenericData.Record( schema );\n    GenericDatumReader reader = new GenericDatumReader( schema );\n\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$.name\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n\n    String[] actualVals = new String[] { \"bob\", \"42\", null };\n    int i = 0;\n    for ( String row : s_jsonDataTopLevelRecordWithMultiTypeUnion ) {\n      decoder = factory.jsonDecoder( schema, row );\n      reader.read( topLevel, decoder );\n\n      field.init( 0 ); // output index isn't needed for the test\n      field.reset( new Variables() );\n\n      Object result = field.convertToKettleValue( topLevel, schema, mock( Schema.class ), false );\n\n      if ( i != 2 ) {\n   
     assertTrue( result != null );\n        assertTrue( result instanceof String );\n        assertEquals( result.toString(), actualVals[i++] );\n      } else {\n        assertTrue( result == null );\n      }\n    }\n  }\n\n  @Test\n  public void testMapExpansion() throws KettleException, IOException {\n    Schema.Parser parser = new Schema.Parser();\n    Schema schema = parser.parse( s_schemaTopLevelMap );\n\n    Decoder decoder;\n    DecoderFactory factory = new DecoderFactory();\n\n    Map<Utf8, Object> topLevel = new HashMap<Utf8, Object>();\n    GenericDatumReader reader = new GenericDatumReader( schema );\n\n    AvroInputMeta.AvroField field = new AvroInputMeta.AvroField();\n    field.m_fieldName = \"test\";\n    field.m_fieldPath = \"$[*].name\";\n    field.m_kettleType = ValueMeta.getTypeDesc( ValueMetaInterface.TYPE_STRING );\n    List<AvroInputMeta.AvroField> normalFields = new ArrayList<AvroInputMeta.AvroField>();\n    normalFields.add( field );\n    RowMetaInterface rowMeta = new RowMeta();\n    ValueMetaInterface vm = new ValueMeta( \"test\", ValueMetaInterface.TYPE_STRING );\n    rowMeta.addValueMeta( vm );\n\n    AvroArrayExpansion expansion = checkFieldPaths( normalFields, rowMeta );\n    expansion.init();\n    expansion.reset( new Variables() );\n\n    decoder = factory.jsonDecoder( schema, s_jsonDataTopLevelMap );\n    reader.read( topLevel, decoder );\n\n    Object[][] result = expansion.convertToKettleValues( topLevel, schema, mock( Schema.class ), new Variables(), false );\n\n    assertTrue( result != null );\n    assertTrue( result.length == 3 );\n\n    List<String> expectedNames = new ArrayList<String>();\n    expectedNames.add( \"zaphod\" );\n    expectedNames.add( \"bob\" );\n    expectedNames.add( \"fred\" );\n\n    for ( int i = 0; i < result.length; i++ ) {\n      assertTrue( result[i][0] != null );\n      assertTrue( expectedNames.contains( result[i][0] ) );\n    }\n  }\n\n  @Test\n  public void 
testLookupFieldInitializationNoRowMetaAvailable() {\n    AvroInputMeta.LookupField lf = new AvroInputMeta.LookupField();\n\n    lf.m_fieldName = \"TestField\";\n    lf.m_variableName = \"TestVar\";\n    VariableSpace space = new Variables();\n\n    // use a JUnit assertion rather than the Java assert keyword, which is a\n    // no-op unless the JVM is run with -ea\n    assertTrue( lf.init( null, space ) == false );\n  }\n\n  public static void main( String[] args ) {\n    try {\n      AvroInputTest test = new AvroInputTest();\n      test.testSchemaWithTopLevelUnionAndNamedType();\n    } catch ( Exception ex ) {\n      ex.printStackTrace();\n    }\n  }\n}\n"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/di/trans/steps/couchdbinput/CouchDbInputMetaTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.trans.steps.couchdbinput;\n\nimport org.junit.Before;\nimport org.junit.BeforeClass;\nimport org.junit.Test;\nimport org.mockito.ArgumentCaptor;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.encryption.TwoWayPasswordEncoderPluginType;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleStepException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.logging.LoggingObjectInterface;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.row.RowMetaInterface;\nimport org.pentaho.di.core.row.ValueMetaInterface;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.trans.steps.loadsave.LoadSaveTester;\nimport org.pentaho.di.trans.steps.mock.StepMockHelper;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.w3c.dom.Node;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.Mockito.doThrow;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 10/28/15.\n */\npublic class CouchDbInputMetaTest {\n  private CouchDbInputMeta 
couchDbInputMeta;\n\n  @BeforeClass\n  public static void beforeClass() throws KettleException {\n    PluginRegistry.addPluginType( TwoWayPasswordEncoderPluginType.getInstance() );\n    PluginRegistry.init( false );\n    Encr.init( \"Kettle\" );\n  }\n\n  @Before\n  public void setup() {\n    couchDbInputMeta = new CouchDbInputMeta();\n  }\n\n  @Test\n  public void testLoadSave() throws KettleException {\n    List<String> commonAttributes = new ArrayList<>();\n    commonAttributes.add( \"hostname\" );\n    commonAttributes.add( \"port\" );\n    commonAttributes.add( \"dbName\" );\n    commonAttributes.add( \"designDocument\" );\n    commonAttributes.add( \"viewName\" );\n    commonAttributes.add( \"authenticationUser\" );\n    commonAttributes.add( \"authenticationPassword\" );\n\n    LoadSaveTester<CouchDbInputMeta> couchDbInputLoadSaveTester =\n      new LoadSaveTester<CouchDbInputMeta>( CouchDbInputMeta.class, commonAttributes );\n\n    couchDbInputLoadSaveTester.testSerialization();\n  }\n\n  @Test\n  public void testClone() {\n    String testHostname = \"testHostname\";\n    couchDbInputMeta.setHostname( testHostname );\n    assertEquals( testHostname, ( (CouchDbInputMeta) couchDbInputMeta.clone() ).getHostname() );\n  }\n\n  @Test\n  public void testSetDefault() {\n    assertNull( couchDbInputMeta.getHostname() );\n    assertNull( couchDbInputMeta.getPort() );\n    assertNull( couchDbInputMeta.getDbName() );\n    assertNull( couchDbInputMeta.getViewName() );\n    couchDbInputMeta.setDefault();\n    assertEquals( CouchDbInputMeta.DEFAULT_HOSTNAME, couchDbInputMeta.getHostname() );\n    assertEquals( CouchDbInputMeta.DEFAULT_PORT, couchDbInputMeta.getPort() );\n    assertEquals( CouchDbInputMeta.DEFAULT_DB_NAME, couchDbInputMeta.getDbName() );\n    assertEquals( CouchDbInputMeta.DEFAULT_VIEW_NAME, couchDbInputMeta.getViewName() );\n  }\n\n  @Test\n  public void testGetFields() throws KettleStepException {\n    RowMetaInterface rowMetaInterface = mock( 
RowMetaInterface.class );\n    String testOrigin = \"testOrigin\";\n    couchDbInputMeta.getFields( DefaultBowl.getInstance(), rowMetaInterface, testOrigin, null, null, null, null, null );\n    ArgumentCaptor<ValueMetaInterface> valueMetaInterfaceArgumentCaptor =\n      ArgumentCaptor.forClass( ValueMetaInterface.class );\n    verify( rowMetaInterface ).addValueMeta( valueMetaInterfaceArgumentCaptor.capture() );\n    ValueMetaInterface valueMetaInterface = valueMetaInterfaceArgumentCaptor.getValue();\n    assertEquals( CouchDbInputMeta.VALUE_META_NAME, valueMetaInterface.getName() );\n    assertEquals( ValueMetaInterface.TYPE_STRING, valueMetaInterface.getType() );\n    assertEquals( testOrigin, valueMetaInterface.getOrigin() );\n  }\n\n  @Test( expected = KettleXMLException.class )\n  public void testLoadXmlException() throws KettleXMLException {\n    Node node = mock( Node.class );\n    when( node.getChildNodes() ).thenThrow( new RuntimeException() );\n    couchDbInputMeta.loadXML( node, null, (IMetaStore) null );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testReadRepException() throws KettleException {\n    Repository repository = mock( Repository.class );\n    ObjectId objectId = mock( ObjectId.class );\n    when( repository.getStepAttributeString( objectId, \"hostname\" ) ).thenThrow( new RuntimeException() );\n    couchDbInputMeta.readRep( repository, null, objectId, null );\n  }\n\n  @Test( expected = KettleException.class )\n  public void testSaveRepException() throws KettleException {\n    couchDbInputMeta.setDefault();\n    Repository repository = mock( Repository.class );\n    ObjectId transId = mock( ObjectId.class );\n    ObjectId stepId = mock( ObjectId.class );\n    doThrow( new RuntimeException() ).when( repository )\n      .saveStepAttribute( transId, stepId, \"hostname\", CouchDbInputMeta.DEFAULT_HOSTNAME );\n    couchDbInputMeta.saveRep( repository, null, transId, stepId );\n  }\n\n  @Test\n  public void testGetStep() {\n  
  StepMockHelper<CouchDbInputMeta, CouchDbInputData> stepMockHelper =\n      new StepMockHelper<CouchDbInputMeta, CouchDbInputData>( \"testName\", CouchDbInputMeta.class,\n        CouchDbInputData.class );\n    when( stepMockHelper.logChannelInterfaceFactory.create( any(), any( LoggingObjectInterface.class ) ) )\n      .thenReturn( mock( LogChannelInterface.class ) );\n    assertTrue( couchDbInputMeta\n      .getStep( stepMockHelper.stepMeta, stepMockHelper.stepDataInterface, 0, stepMockHelper.transMeta,\n        stepMockHelper.trans ) instanceof CouchDbInput );\n  }\n\n  @Test\n  public void testGetStepData() {\n    assertTrue( couchDbInputMeta.getStepData() instanceof CouchDbInputData );\n  }\n}\n"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/di/trans/steps/couchdbinput/CouchDbInputTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.trans.steps.couchdbinput;\n\nimport org.apache.http.HttpEntity;\nimport org.apache.http.HttpResponse;\nimport org.apache.http.StatusLine;\nimport org.apache.http.client.HttpClient;\nimport org.apache.http.client.methods.HttpGet;\nimport org.apache.http.client.methods.HttpUriRequest;\nimport org.apache.http.protocol.HttpContext;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.encryption.TwoWayPasswordEncoderPluginType;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.logging.LoggingObjectInterface;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.trans.steps.mock.StepMockHelper;\n\nimport java.io.IOException;\n\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.any;\nimport static org.mockito.Mockito.anyString;\nimport static org.mockito.Mockito.doReturn;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.spy;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 10/28/15.\n */\npublic class CouchDbInputTest {\n  private String testName;\n  private StepMockHelper stepMockHelper;\n  private CouchDbInput couchDbInput;\n  private CouchDbInput.HttpClientFactory httpClientFactory;\n  private CouchDbInput.GetMethodFactory getMethodFactory;\n\n  @Before\n  
public void setup() throws KettleException {\n    PluginRegistry.addPluginType( TwoWayPasswordEncoderPluginType.getInstance() );\n    PluginRegistry.init( false );\n    Encr.init( \"Kettle\" );\n    testName = \"testName\";\n    stepMockHelper = new StepMockHelper( testName, CouchDbInputMeta.class, CouchDbInputData.class );\n    when( stepMockHelper.logChannelInterfaceFactory.create( any(), any( LoggingObjectInterface.class ) ) )\n      .thenReturn( mock( LogChannelInterface.class ) );\n    httpClientFactory = mock( CouchDbInput.HttpClientFactory.class );\n    getMethodFactory = mock( CouchDbInput.GetMethodFactory.class );\n    couchDbInput =\n      spy( new CouchDbInput( stepMockHelper.stepMeta, stepMockHelper.stepDataInterface, 0, stepMockHelper.transMeta,\n        stepMockHelper.trans ) );\n  }\n\n  @Test\n  public void testInitException() {\n    CouchDbInputMeta couchDbInputMeta = (CouchDbInputMeta) stepMockHelper.initStepMetaInterface;\n    CouchDbInputData couchDbInputData = (CouchDbInputData) stepMockHelper.initStepDataInterface;\n\n    final String testHostname = \"testHostname\";\n    final String testPort = \"9999\";\n    final String testDbName = \"testDbName\";\n    final String testDoc = \"testDoc\";\n    final String testView = \"testView\";\n\n    when( couchDbInputMeta.getHostname() ).thenReturn( testHostname );\n    when( couchDbInputMeta.getPort() ).thenReturn( testPort );\n    when( couchDbInputMeta.getDbName() ).thenReturn( testDbName );\n    when( couchDbInputMeta.getDesignDocument() ).thenReturn( testDoc );\n    when( couchDbInputMeta.getViewName() ).thenReturn( testView );\n\n    CouchDbInput.HttpClientFactory httpClientFactory = mock( CouchDbInput.HttpClientFactory.class );\n    CouchDbInput.GetMethodFactory getMethodFactory = mock( CouchDbInput.GetMethodFactory.class );\n\n    HttpGet getMethod = mock( HttpGet.class );\n    when( getMethodFactory.create( CouchDbInput\n      .buildUrl( testHostname, Const.toInt( testPort, 5984 ), testDbName, 
testDoc, testView ) ) ).thenReturn(\n      getMethod );\n\n    when( httpClientFactory.createHttpClient() ).thenThrow( new RuntimeException() );\n    assertFalse( couchDbInput.init( couchDbInputMeta, couchDbInputData ) );\n  }\n\n  @Test\n  public void testInit() throws IOException {\n    CouchDbInputMeta couchDbInputMeta = (CouchDbInputMeta) stepMockHelper.initStepMetaInterface;\n    CouchDbInputData couchDbInputData = (CouchDbInputData) stepMockHelper.initStepDataInterface;\n\n    final String testHostname = \"testHostname\";\n    final String testPort = \"9999\";\n    final String testDbName = \"testDbName\";\n    final String testDoc = \"testDoc\";\n    final String testView = \"testView\";\n    final String testUser = \"testUser\";\n    final String testPassword = \"testPassword\";\n\n    when( couchDbInputMeta.getHostname() ).thenReturn( testHostname );\n    when( couchDbInputMeta.getPort() ).thenReturn( testPort );\n    when( couchDbInputMeta.getDbName() ).thenReturn( testDbName );\n    when( couchDbInputMeta.getDesignDocument() ).thenReturn( testDoc );\n    when( couchDbInputMeta.getViewName() ).thenReturn( testView );\n    when( couchDbInputMeta.getAuthenticationUser() ).thenReturn( testUser );\n    when( couchDbInputMeta.getAuthenticationPassword() ).thenReturn( testPassword );\n\n    HttpGet getMethod = mock( HttpGet.class );\n    when( getMethodFactory.create( CouchDbInput\n      .buildUrl( testHostname, Const.toInt( testPort, 5984 ), testDbName, testDoc, testView ) ) ).thenReturn(\n      getMethod );\n\n    HttpClient httpClient = mock( HttpClient.class );\n    doReturn( httpClient ).when( couchDbInput ).createHttpClient( anyString(), anyString() );\n    HttpResponse httpResponseMock = mock(HttpResponse.class);\n    HttpEntity httpEntity = mock(HttpEntity.class);\n    doReturn( httpEntity ).when( httpResponseMock ).getEntity();\n    StatusLine statusLineMock = mock(StatusLine.class);\n    doReturn( httpResponseMock ).when( httpClient ).execute( any() 
);\n    doReturn( httpResponseMock ).when( httpClient ).execute( any( HttpUriRequest.class ), any( HttpContext.class ) );\n    doReturn( statusLineMock ).when( httpResponseMock ).getStatusLine();\n    doReturn( 200 ).when( statusLineMock ).getStatusCode();\n    assertTrue( couchDbInput.init( couchDbInputMeta, couchDbInputData ) );\n    verify( couchDbInput ).createHttpClient( \"testUser\", \"testPassword\" );\n    verify( couchDbInput ).getHttpClientContext( \"testHostname\", Integer.parseInt( testPort ) );\n  }\n\n  @Test\n  public void testInitNoDesignDoc() throws IOException {\n    CouchDbInputMeta couchDbInputMeta = (CouchDbInputMeta) stepMockHelper.initStepMetaInterface;\n    CouchDbInputData couchDbInputData = (CouchDbInputData) stepMockHelper.initStepDataInterface;\n\n    final String testHostname = \"testHostname\";\n    final String testPort = \"9999\";\n    final String testDbName = \"testDbName\";\n    final String testDoc = \"\";\n    final String testView = \"testView\";\n\n    when( couchDbInputMeta.getHostname() ).thenReturn( testHostname );\n    when( couchDbInputMeta.getPort() ).thenReturn( testPort );\n    when( couchDbInputMeta.getDbName() ).thenReturn( testDbName );\n    when( couchDbInputMeta.getDesignDocument() ).thenReturn( testDoc );\n    when( couchDbInputMeta.getViewName() ).thenReturn( testView );\n\n    HttpGet getMethod = mock( HttpGet.class );\n    when( getMethodFactory.create( CouchDbInput\n      .buildUrl( testHostname, Const.toInt( testPort, 5984 ), testDbName, testDoc, testView ) ) ).thenReturn(\n      getMethod );\n\n    HttpClient httpClient = mock( HttpClient.class );\n    doReturn( httpClient ).when( couchDbInput ).createHttpClient( anyString(), anyString() );\n    HttpResponse httpResponseMock = mock(HttpResponse.class);\n    StatusLine statusLineMock = mock(StatusLine.class);\n    doReturn( httpResponseMock ).when( httpClient ).execute( any() );\n    doReturn( statusLineMock ).when( httpResponseMock ).getStatusLine();\n    
doReturn( 200 ).when( statusLineMock ).getStatusCode();\n\n    assertFalse( couchDbInput.init( couchDbInputMeta, couchDbInputData ) );\n  }\n\n  @Test\n  public void testInitNoView() throws IOException {\n    CouchDbInputMeta couchDbInputMeta = (CouchDbInputMeta) stepMockHelper.initStepMetaInterface;\n    CouchDbInputData couchDbInputData = (CouchDbInputData) stepMockHelper.initStepDataInterface;\n\n    final String testHostname = \"testHostname\";\n    final String testPort = \"9999\";\n    final String testDbName = \"testDbName\";\n    final String testDoc = \"testDoc\";\n    final String testView = \"\";\n\n    when( couchDbInputMeta.getHostname() ).thenReturn( testHostname );\n    when( couchDbInputMeta.getPort() ).thenReturn( testPort );\n    when( couchDbInputMeta.getDbName() ).thenReturn( testDbName );\n    when( couchDbInputMeta.getDesignDocument() ).thenReturn( testDoc );\n    when( couchDbInputMeta.getViewName() ).thenReturn( testView );\n\n    HttpGet getMethod = mock( HttpGet.class );\n    when( getMethodFactory.create( CouchDbInput\n      .buildUrl( testHostname, Const.toInt( testPort, 5984 ), testDbName, testDoc, testView ) ) ).thenReturn(\n      getMethod );\n\n    HttpClient httpClient = mock( HttpClient.class );\n    doReturn( httpClient ).when( couchDbInput ).createHttpClient( anyString(), anyString() );\n    HttpResponse httpResponseMock = mock(HttpResponse.class);\n    StatusLine statusLineMock = mock(StatusLine.class);\n    doReturn( httpResponseMock ).when( httpClient ).execute( any() );\n    doReturn( statusLineMock ).when( httpResponseMock ).getStatusLine();\n    doReturn( 200 ).when( statusLineMock ).getStatusCode();\n\n    assertFalse( couchDbInput.init( couchDbInputMeta, couchDbInputData ) );\n  }\n\n  @Test\n  public void testInit199() throws IOException {\n    CouchDbInputMeta couchDbInputMeta = (CouchDbInputMeta) stepMockHelper.initStepMetaInterface;\n    CouchDbInputData couchDbInputData = (CouchDbInputData) 
stepMockHelper.initStepDataInterface;\n\n    final String testHostname = \"testHostname\";\n    final String testPort = \"9999\";\n    final String testDbName = \"testDbName\";\n    final String testDoc = \"testDoc\";\n    final String testView = \"testView\";\n    final String testUser = \"testUser\";\n    final String testPassword = \"testPassword\";\n\n    when( couchDbInputMeta.getHostname() ).thenReturn( testHostname );\n    when( couchDbInputMeta.getPort() ).thenReturn( testPort );\n    when( couchDbInputMeta.getDbName() ).thenReturn( testDbName );\n    when( couchDbInputMeta.getDesignDocument() ).thenReturn( testDoc );\n    when( couchDbInputMeta.getViewName() ).thenReturn( testView );\n    when( couchDbInputMeta.getAuthenticationUser() ).thenReturn( testUser );\n    when( couchDbInputMeta.getAuthenticationPassword() ).thenReturn( testPassword );\n\n    HttpGet getMethod = mock( HttpGet.class );\n    when( getMethodFactory.create( CouchDbInput\n      .buildUrl( testHostname, Const.toInt( testPort, 5984 ), testDbName, testDoc, testView ) ) ).thenReturn(\n      getMethod );\n\n    HttpClient httpClient = mock( HttpClient.class );\n    doReturn( httpClient ).when( couchDbInput ).createHttpClient( anyString(), anyString() );\n    HttpResponse httpResponseMock = mock(HttpResponse.class);\n    StatusLine statusLineMock = mock(StatusLine.class);\n    doReturn( httpResponseMock ).when( httpClient ).execute( any() );\n    doReturn( statusLineMock ).when( httpResponseMock ).getStatusLine();\n    doReturn( 199 ).when( statusLineMock ).getStatusCode();\n\n    //when( getMethod.getResponseBodyAsStream() ).thenReturn( new ByteArrayInputStream( \"fail\".getBytes() ) );\n\n    assertFalse( couchDbInput.init( couchDbInputMeta, couchDbInputData ) );\n  }\n\n  @Test\n  public void testInit300() throws IOException {\n    CouchDbInputMeta couchDbInputMeta = (CouchDbInputMeta) stepMockHelper.initStepMetaInterface;\n    CouchDbInputData couchDbInputData = (CouchDbInputData) 
stepMockHelper.initStepDataInterface;\n\n    final String testHostname = \"testHostname\";\n    final String testPort = \"9999\";\n    final String testDbName = \"testDbName\";\n    final String testDoc = \"testDoc\";\n    final String testView = \"testView\";\n    final String testUser = \"testUser\";\n    final String testPassword = \"testPassword\";\n\n    when( couchDbInputMeta.getHostname() ).thenReturn( testHostname );\n    when( couchDbInputMeta.getPort() ).thenReturn( testPort );\n    when( couchDbInputMeta.getDbName() ).thenReturn( testDbName );\n    when( couchDbInputMeta.getDesignDocument() ).thenReturn( testDoc );\n    when( couchDbInputMeta.getViewName() ).thenReturn( testView );\n    when( couchDbInputMeta.getAuthenticationUser() ).thenReturn( testUser );\n    when( couchDbInputMeta.getAuthenticationPassword() ).thenReturn( testPassword );\n\n    HttpGet getMethod = mock( HttpGet.class );\n    when( getMethodFactory.create( CouchDbInput\n      .buildUrl( testHostname, Const.toInt( testPort, 5984 ), testDbName, testDoc, testView ) ) ).thenReturn(\n      getMethod );\n\n    HttpClient httpClient = mock( HttpClient.class );\n    doReturn( httpClient ).when( couchDbInput ).createHttpClient( anyString(), anyString() );\n    HttpResponse httpResponseMock = mock(HttpResponse.class);\n    StatusLine statusLineMock = mock(StatusLine.class);\n    doReturn( httpResponseMock ).when( httpClient ).execute( any() );\n    doReturn( statusLineMock ).when( httpResponseMock ).getStatusLine();\n    // Status codes of 300 and above are outside the 2xx success range, so init() must fail\n    doReturn( 300 ).when( statusLineMock ).getStatusCode();\n\n    assertFalse( couchDbInput.init( couchDbInputMeta, couchDbInputData ) );\n  }\n}\n"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/di/ui/core/namedcluster/NamedClusterUIHelperTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.core.namedcluster;\n\nimport org.junit.Test;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.mock;\n\n/**\n * Created by bryan on 8/31/15.\n */\npublic class NamedClusterUIHelperTest {\n  @Test(timeout = 10000)\n  public void testNamedClusterFactoryHolder() {\n    final NamedClusterUIFactory namedClusterUIFactory = mock( NamedClusterUIFactory.class );\n    final NamedClusterUIHelper.NamedClusterUIFactoryHolder namedClusterUIFactoryHolder = new NamedClusterUIHelper\n      .NamedClusterUIFactoryHolder();\n    new Thread( new Runnable() {\n      @Override public void run() {\n        try {\n          Thread.sleep( 300 );\n          namedClusterUIFactoryHolder.setNamedClusterUIFactory( namedClusterUIFactory );\n        } catch ( InterruptedException e ) {\n          e.printStackTrace();\n        }\n      }\n    } ).start();\n    assertEquals( namedClusterUIFactory, namedClusterUIFactoryHolder.getNamedClusterUIFactory() );\n  }\n}\n"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/di/ui/vfs/VfsFileChooserHelperTest.java",
"content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.di.ui.vfs;\n\nimport static org.junit.Assert.assertArrayEquals;\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNotNull;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertSame;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.Mockito.mock;\n\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.eclipse.swt.widgets.Shell;\nimport org.junit.Test;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\npublic class VfsFileChooserHelperTest {\n\n  private static final boolean TEST_SHOW_FILE_SCHEME_VALUE = false;\n  private static final String S3_TEST_RESTRICTION = \"s3\";\n  private static final String[] TEST_RESTRICTION_ARRAY = { \"restriction1\", \"restriction2\", null, \"restriction3\" };\n  private static final String DEFAULT_SCHEME_VALUE = \"file\";\n  private static final String TEST_SCHEME_VALUE = \"test scheme\";\n\n  private Shell shellMock = mock( Shell.class );\n  private VfsFileChooserDialog fcDialogMock = mock( VfsFileChooserDialog.class );\n  private VariableSpace varSpMock = mock( VariableSpace.class );\n  private FileSystemOptions fsOptions = new FileSystemOptions();\n\n  @Test\n  public void testSetSchemeRestriction() {\n    VfsFileChooserHelper helper = new VfsFileChooserHelper( shellMock, fcDialogMock, varSpMock, fsOptions );\n    helper.setSchemeRestriction( S3_TEST_RESTRICTION );\n    assertEquals( 1, 
helper.getSchemeRestrictions().length );\n    assertEquals( S3_TEST_RESTRICTION, helper.getSchemeRestrictions()[0] );\n  }\n\n  @Test\n  public void testGetSchemeRestrictionReturnsNull_ForDefaultEmptyRestriction() {\n    VfsFileChooserHelper helper = new VfsFileChooserHelper( shellMock, fcDialogMock, varSpMock, fsOptions );\n    assertNotNull( helper.getSchemeRestrictions() );\n    assertEquals( 0, helper.getSchemeRestrictions().length );\n    assertNull( helper.getSchemeRestriction() );\n  }\n\n  @Test\n  public void testGetSchemeRestrictionReturnsRestriction_ForNotEmptyRestriction() {\n    VfsFileChooserHelper helper = new VfsFileChooserHelper( shellMock, fcDialogMock, varSpMock, fsOptions );\n    assertNotNull( helper.getSchemeRestrictions() );\n    assertEquals( 0, helper.getSchemeRestrictions().length );\n    helper.setSchemeRestriction( S3_TEST_RESTRICTION );\n    assertEquals( S3_TEST_RESTRICTION, helper.getSchemeRestriction() );\n  }\n\n  @Test\n  public void testConstructorWithParams() {\n    VfsFileChooserHelper helper = new VfsFileChooserHelper( shellMock, fcDialogMock, varSpMock, fsOptions );\n    assertNotNull( helper );\n    assertSame( fcDialogMock, helper.getFileChooserDialog() );\n    assertSame( shellMock, helper.getShell() );\n    assertSame( varSpMock, helper.getVariableSpace() );\n    assertSame( fsOptions, helper.getFileSystemOptions() );\n    assertEquals( DEFAULT_SCHEME_VALUE, helper.getDefaultScheme() );\n    assertTrue( helper.showFileScheme() );\n    assertNotNull( helper.getSchemeRestrictions() );\n    assertEquals( 0, helper.getSchemeRestrictions().length );\n  }\n\n  @Test\n  public void testConstructorWithParamsWithoutFileSystemOptions() {\n    VfsFileChooserHelper helper = new VfsFileChooserHelper( shellMock, fcDialogMock, varSpMock );\n    assertNotNull( helper );\n    assertSame( fcDialogMock, helper.getFileChooserDialog() );\n    assertSame( shellMock, helper.getShell() );\n    assertSame( varSpMock, helper.getVariableSpace() );\n 
   assertNotNull( helper.getFileSystemOptions() );\n    assertEquals( DEFAULT_SCHEME_VALUE, helper.getDefaultScheme() );\n    assertTrue( helper.showFileScheme() );\n    assertNotNull( helper.getSchemeRestrictions() );\n    assertEquals( 0, helper.getSchemeRestrictions().length );\n  }\n\n  @Test\n  public void testSetSchemeRestrictions_ForArrayOfRestrictionStrings() {\n    VfsFileChooserHelper helper = new VfsFileChooserHelper( shellMock, fcDialogMock, varSpMock, fsOptions );\n    helper.setSchemeRestrictions( TEST_RESTRICTION_ARRAY );\n    assertEquals( TEST_RESTRICTION_ARRAY.length, helper.getSchemeRestrictions().length );\n    assertArrayEquals( TEST_RESTRICTION_ARRAY, helper.getSchemeRestrictions() );\n  }\n\n  @Test\n  public void testSetDefaultScheme() {\n    VfsFileChooserHelper helper = new VfsFileChooserHelper( shellMock, fcDialogMock, varSpMock, fsOptions );\n    assertEquals( DEFAULT_SCHEME_VALUE, helper.getDefaultScheme() );\n    helper.setDefaultScheme( TEST_SCHEME_VALUE );\n    assertEquals( TEST_SCHEME_VALUE, helper.getDefaultScheme() );\n  }\n\n  @Test\n  public void testSetShowFileScheme() {\n    VfsFileChooserHelper helper = new VfsFileChooserHelper( shellMock, fcDialogMock, varSpMock, fsOptions );\n    assertTrue( helper.showFileScheme() );\n    helper.setShowFileScheme( TEST_SHOW_FILE_SCHEME_VALUE );\n    assertEquals( TEST_SHOW_FILE_SCHEME_VALUE, helper.showFileScheme() );\n  }\n\n  @Test\n  public void testSetVariableSpace() {\n    VfsFileChooserHelper helper = new VfsFileChooserHelper( shellMock, fcDialogMock, varSpMock, fsOptions );\n    VariableSpace oneMoreVarSpMock = mock( VariableSpace.class );\n    assertSame( varSpMock, helper.getVariableSpace() );\n    helper.setVariableSpace( oneMoreVarSpMock );\n    assertSame( oneMoreVarSpMock, helper.getVariableSpace() );\n  }\n\n  @Test\n  public void testSetFileSystemOptions() {\n    VfsFileChooserHelper helper = new VfsFileChooserHelper( shellMock, fcDialogMock, varSpMock, fsOptions );\n    
FileSystemOptions oneMoreFsOptions = new FileSystemOptions();\n    assertSame( fsOptions, helper.getFileSystemOptions() );\n    helper.setFileSystemOptions( oneMoreFsOptions );\n    assertSame( oneMoreFsOptions, helper.getFileSystemOptions() );\n  }\n\n  @Test\n  public void testReturnsUserAuthenticatedFileObjects() {\n    VfsFileChooserHelper helper = new VfsFileChooserHelper( shellMock, fcDialogMock, varSpMock, fsOptions );\n    assertFalse( helper.returnsUserAuthenticatedFileObjects() );\n  }\n\n}\n"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/hadoop/PluginPropertiesUtilTest.java",
"content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.hadoop;\n\nimport static org.junit.Assert.*;\n\nimport org.junit.Test;\n\npublic class PluginPropertiesUtilTest {\n\n  @Test\n  public void getVersion() {\n    // This test will only succeed when using classes produced by the Ant build\n    PluginPropertiesUtil util = new PluginPropertiesUtil();\n    assertNotNull(\n      \"Should never be null\",\n      util.getVersion() );\n  }\n\n  @Test\n  public void testGetVersionFromNonDefaultLocation() {\n    PluginPropertiesUtil ppu = new PluginPropertiesUtil( \"test-version.properties\" );\n    String version = ppu.getVersion();\n    assertEquals( \"X.Y.Z-TEST\", version );\n  }\n\n  @Test\n  public void testGetVersionFromNonExistingLocation() {\n    PluginPropertiesUtil ppu = new PluginPropertiesUtil( \"non-existing-version.properties\" );\n    String version = ppu.getVersion();\n    assertEquals( \"@VERSION@\", version );\n  }\n\n}\n"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/hadoop/PropertiesConfigurationPropertiesTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.hadoop;\n\nimport org.apache.commons.configuration.PropertiesConfiguration;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.invocation.InvocationOnMock;\nimport org.mockito.stubbing.Answer;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.io.PrintStream;\nimport java.io.PrintWriter;\nimport java.io.Reader;\nimport java.io.Writer;\nimport java.util.Arrays;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.HashSet;\nimport java.util.Iterator;\nimport java.util.Map;\nimport java.util.Set;\nimport java.util.stream.Collectors;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.junit.Assert.assertNull;\nimport static org.junit.Assert.assertTrue;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.verifyNoMoreInteractions;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by bryan on 8/7/15.\n */\npublic class PropertiesConfigurationPropertiesTest {\n  private PropertiesConfiguration propertiesConfiguration;\n  private PropertiesConfigurationProperties propertiesConfigurationProperties;\n\n  @Before\n  public void setup() {\n    propertiesConfiguration = mock( PropertiesConfiguration.class );\n    propertiesConfigurationProperties = new PropertiesConfigurationProperties( propertiesConfiguration );\n  }\n\n  @Test\n  public void testGetProperty() {\n   
 String key = \"key\";\n    String value = \"value\";\n    when( propertiesConfiguration.getString( key, null ) ).thenReturn( value );\n    assertEquals( value, propertiesConfigurationProperties.getProperty( key ) );\n  }\n\n  @Test\n  public void testGetPropertyDefault() {\n    String key = \"key\";\n    String defaultValue = \"default\";\n    String value = \"value\";\n    when( propertiesConfiguration.getString( key, defaultValue ) ).thenReturn( value );\n    assertEquals( value, propertiesConfigurationProperties.getProperty( key, defaultValue ) );\n  }\n\n  @Test\n  public void testGetString() {\n    String key = \"key\";\n    String value = \"value\";\n    when( propertiesConfiguration.getProperty( key ) ).thenReturn( value );\n    assertEquals( value, propertiesConfigurationProperties.get( key ) );\n  }\n\n  @Test\n  public void testGetObject() {\n    assertNull( propertiesConfigurationProperties.get( new Object() ) );\n    verifyNoMoreInteractions( propertiesConfiguration );\n  }\n\n  @Test\n  public void testGetNull() {\n    String value = \"value\";\n    when( propertiesConfiguration.getProperty( null ) ).thenReturn( value );\n    assertEquals( value, propertiesConfigurationProperties.get( null ) );\n  }\n\n  @Test\n  public void testSetProperty() {\n    String key = \"key\";\n    String value = \"value\";\n    String previous = \"prev\";\n    when( propertiesConfiguration.getProperty( key ) ).thenReturn( previous );\n    assertEquals( previous, propertiesConfigurationProperties.put( key, value ) );\n    verify( propertiesConfiguration ).setProperty( key, value );\n  }\n\n  @Test( expected = IllegalArgumentException.class )\n  public void testSetPropertyNullKey() {\n    propertiesConfigurationProperties.setProperty( null, \"value\" );\n  }\n\n  @Test\n  public void stringPropertyNames() {\n    final Set<String> names = new HashSet<>( Arrays.asList( \"a\", \"b\", \"c\" ) );\n    mockKeys( names );\n    assertEquals( names, new HashSet<String>( 
propertiesConfigurationProperties.stringPropertyNames() ) );\n  }\n\n  @Test\n  public void testKeySet() {\n    final Set<String> names = new HashSet<>( Arrays.asList( \"a\", \"b\", \"c\" ) );\n    mockKeys( names );\n    assertEquals( new HashSet<Object>( names ), new HashSet<>( propertiesConfigurationProperties.keySet() ) );\n  }\n\n  @Test\n  public void testEntrySet() {\n    Map<String, Object> sourceMap = new HashMap<>();\n    sourceMap.put( \"a\", \"b\" );\n    sourceMap.put( \"c\", \"d\" );\n    mockToMap( sourceMap );\n    assertEquals( sourceMap.entrySet(), propertiesConfigurationProperties.entrySet() );\n  }\n\n  @Test\n  public void testSize() {\n    final Set<String> names = new HashSet<>( Arrays.asList( \"a\", \"b\", \"c\" ) );\n    mockKeys( names );\n    assertEquals( names.size(), propertiesConfigurationProperties.size() );\n  }\n\n  @Test\n  public void testIsEmptyTrue() {\n    final Set<String> names = new HashSet<>();\n    mockKeys( names );\n    assertTrue( propertiesConfigurationProperties.isEmpty() );\n  }\n\n  @Test\n  public void testIsEmptyFalse() {\n    final Set<String> names = new HashSet<>( Arrays.asList( \"a\", \"b\", \"c\" ) );\n    mockKeys( names );\n    assertFalse( propertiesConfigurationProperties.isEmpty() );\n  }\n\n  @Test\n  public void testKeys() {\n    HashSet<String> names = new HashSet<>( Arrays.asList( \"a\", \"b\", \"c\" ) );\n    mockKeys( names );\n    assertEquals( names, new HashSet<>( Collections.list( propertiesConfigurationProperties.keys() ) ) );\n  }\n\n  @Test\n  public void testElements() {\n    Map<String, Object> sourceMap = new HashMap<>();\n    sourceMap.put( \"a\", \"b\" );\n    sourceMap.put( \"c\", \"d\" );\n    mockToMap( sourceMap );\n    assertEquals( new HashSet<>( sourceMap.values() ),\n      new HashSet<>( Collections.list( propertiesConfigurationProperties.elements() ) ) );\n  }\n\n  @Test\n  public void testContainsTrue() {\n    Map<String, Object> sourceMap = new HashMap<>();\n    
sourceMap.put( \"a\", \"b\" );\n    String d = \"d\";\n    sourceMap.put( \"c\", d );\n    mockToMap( sourceMap );\n    assertTrue( propertiesConfigurationProperties.contains( d ) );\n    assertTrue( propertiesConfigurationProperties.containsValue( d ) );\n  }\n\n  @Test\n  public void testContainsFalse() {\n    Map<String, Object> sourceMap = new HashMap<>();\n    sourceMap.put( \"a\", \"b\" );\n    sourceMap.put( \"c\", \"d\" );\n    mockToMap( sourceMap );\n    assertFalse( propertiesConfigurationProperties.containsValue( \"E\" ) );\n  }\n\n  @Test\n  public void testPropertyNames() {\n    HashSet<String> names = new HashSet<>( Arrays.asList( \"a\", \"b\", \"c\" ) );\n    mockKeys( names );\n\n    HashSet<String> propertyNames = Collections.list( propertiesConfigurationProperties.propertyNames() ).stream()\n      .map( Object::toString )\n      .collect( Collectors.toCollection( HashSet::new ) );\n    assertEquals( names, propertyNames );\n  }\n\n  @Test\n  public void testValues() {\n    Map<String, Object> sourceMap = new HashMap<>();\n    sourceMap.put( \"a\", \"b\" );\n    sourceMap.put( \"c\", \"d\" );\n    mockToMap( sourceMap );\n    assertEquals( new HashSet<>( sourceMap.values() ),\n      new HashSet<Object>( propertiesConfigurationProperties.values() ) );\n  }\n\n  @Test\n  public void testContainsKey() {\n    String a = \"a\";\n    HashSet<String> names = new HashSet<>( Arrays.asList( a, \"b\", \"c\" ) );\n    mockKeys( names );\n    assertTrue( propertiesConfigurationProperties.containsKey( a ) );\n    assertFalse( propertiesConfigurationProperties.containsKey( \"d\" ) );\n    assertFalse( propertiesConfigurationProperties.containsKey( new Object() ) );\n    assertFalse( propertiesConfigurationProperties.containsKey( null ) );\n  }\n\n  @Test\n  public void testRemove() {\n    String key = \"key\";\n    String prev = \"prev\";\n    when( propertiesConfiguration.getProperty( key ) ).thenReturn( prev );\n    assertEquals( prev, 
propertiesConfigurationProperties.remove( key ) );\n    verify( propertiesConfiguration ).clearProperty( key );\n  }\n\n  @Test\n  public void testRemoveObject() {\n    assertNull( propertiesConfigurationProperties.remove( new Object() ) );\n    verifyNoMoreInteractions( propertiesConfiguration );\n  }\n\n  @Test\n  public void testPutAll() {\n    Map<String, Object> sourceMap = new HashMap<>();\n    sourceMap.put( \"a\", \"b\" );\n    sourceMap.put( \"c\", \"d\" );\n    propertiesConfigurationProperties.putAll( sourceMap );\n    for ( Map.Entry<String, Object> stringObjectEntry : sourceMap.entrySet() ) {\n      verify( propertiesConfiguration ).setProperty( stringObjectEntry.getKey(), stringObjectEntry.getValue() );\n    }\n  }\n\n  @Test\n  public void testClear() {\n    propertiesConfigurationProperties.clear();\n    verify( propertiesConfiguration ).clear();\n  }\n\n  @Test( expected = UnsupportedOperationException.class )\n  public void testLoadReader() throws IOException {\n    propertiesConfigurationProperties.load( mock( Reader.class ) );\n  }\n\n  @Test( expected = UnsupportedOperationException.class )\n  public void testLoadInputStream() throws IOException {\n    propertiesConfigurationProperties.load( mock( InputStream.class ) );\n  }\n\n  @Test( expected = UnsupportedOperationException.class )\n  public void testSaveOutputStream() throws IOException {\n    propertiesConfigurationProperties.save( mock( OutputStream.class ), \"\" );\n  }\n\n  @Test( expected = UnsupportedOperationException.class )\n  public void testStoreWriter() throws IOException {\n    propertiesConfigurationProperties.store( mock( Writer.class ), \"\" );\n  }\n\n  @Test( expected = UnsupportedOperationException.class )\n  public void testStoreOutputStream() throws IOException {\n    propertiesConfigurationProperties.store( mock( OutputStream.class ), \"\" );\n  }\n\n  @Test( expected = UnsupportedOperationException.class )\n  public void testStoreToXmlNoEncoding() throws IOException 
{\n    propertiesConfigurationProperties.storeToXML( mock( OutputStream.class ), \"\" );\n  }\n\n  @Test( expected = UnsupportedOperationException.class )\n  public void testStoreToXmlEncoding() throws IOException {\n    propertiesConfigurationProperties.storeToXML( mock( OutputStream.class ), \"\", \"UTF-8\" );\n  }\n\n  @Test( expected = UnsupportedOperationException.class )\n  public void testListPrintStream() throws IOException {\n    propertiesConfigurationProperties.list( mock( PrintStream.class ) );\n  }\n\n  @Test( expected = UnsupportedOperationException.class )\n  public void testListPrintWriter() throws IOException {\n    propertiesConfigurationProperties.list( mock( PrintWriter.class ) );\n  }\n\n  @Test( expected = UnsupportedOperationException.class )\n  public void testHash() throws IOException {\n    propertiesConfigurationProperties.rehash();\n  }\n\n  @Test( expected = UnsupportedOperationException.class )\n  public void testClone() throws IOException {\n    propertiesConfigurationProperties.clone();\n  }\n\n  private void mockKeys( final Set<String> keys ) {\n    when( propertiesConfiguration.getKeys() ).thenAnswer( new Answer<Iterator<String>>() {\n      @Override public Iterator<String> answer( InvocationOnMock invocation ) throws Throwable {\n        return keys.iterator();\n      }\n    } );\n    when( propertiesConfiguration.containsKey( anyString() ) ).thenAnswer( new Answer<Boolean>() {\n      @Override public Boolean answer( InvocationOnMock invocation ) throws Throwable {\n        return keys.contains( invocation.getArguments()[ 0 ] );\n      }\n    } );\n  }\n\n  private void mockToMap( final Map<String, Object> map ) {\n    mockKeys( map.keySet() );\n    for ( Map.Entry<String, Object> stringObjectEntry : map.entrySet() ) {\n      when( propertiesConfiguration.getProperty( stringObjectEntry.getKey() ) )\n        .thenReturn( stringObjectEntry.getValue() );\n    }\n  }\n}\n"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/util/FileUtil.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.util;\n\nimport java.io.File;\n\n/**\n * Class to provide simple File services.\n * \n * @author sflatley\n */\npublic class FileUtil {\n\n  public static synchronized boolean deleteDir( File dir ) {\n\n    if ( dir.isDirectory() ) {\n      String[] children = dir.list();\n      // dir.list() can return null if an I/O error occurs\n      if ( children != null ) {\n        for ( String child : children ) {\n          boolean success = deleteDir( new File( dir, child ) );\n          if ( !success ) {\n            return false;\n          }\n        }\n      }\n    }\n    // The directory is now empty (or dir is a plain file), so delete it\n    return dir.delete();\n  }\n}\n"
  },
  {
    "path": "legacy/src/test/java/org/pentaho/weblogs/WebLogs.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.weblogs;\n\nimport java.io.File;\nimport java.net.URL;\nimport java.net.URLClassLoader;\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.hadoop.conf.Configured;\nimport org.apache.hadoop.fs.Path;\nimport org.apache.hadoop.io.Text;\nimport org.apache.hadoop.mapred.FileInputFormat;\nimport org.apache.hadoop.mapred.FileOutputFormat;\nimport org.apache.hadoop.mapred.JobClient;\nimport org.apache.hadoop.mapred.JobConf;\nimport org.apache.hadoop.mapred.Mapper;\nimport org.apache.hadoop.mapred.Reducer;\nimport org.apache.hadoop.util.Tool;\nimport org.apache.hadoop.util.ToolRunner;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.trans.TransConfiguration;\nimport org.pentaho.di.trans.TransExecutionConfiguration;\nimport org.pentaho.di.trans.TransMeta;\n\n/**\n * This is an example Hadoop Map/Reduce application. It reads the text input files, breaks each line into words and\n * counts them. 
The output is a locally sorted list of words and the count of how often they occurred.\n * \n * To run: bin/hadoop jar build/hadoop-examples.jar wordcount [-m <i>maps</i>] [-r <i>reduces</i>] <i>in-dir</i>\n * <i>out-dir</i>\n */\npublic class WebLogs extends Configured implements Tool {\n\n  private static final String input = \"./junit/weblogs/input/access.log\";\n  private static final String outputFolder = \"./junit/weblogs/output\";\n\n  static int printUsage() {\n    System.out.println( \"Weblogs [-m <maps>] [-r <reduces>] <input> <output>\" );\n    ToolRunner.printGenericCommandUsage( System.out );\n    return -1;\n  }\n\n  /**\n   * The main driver for the word count map/reduce program. Invoke this method to submit the map/reduce job.\n   * \n   * @throws IOException\n   *           When there are communication problems with the job tracker.\n   */\n  public int run( String[] args ) throws Exception {\n\n    JobConf conf = new JobConf( getConf(), WebLogs.class );\n    conf.setJobName( \"wordcount\" );\n    conf.set( \"debug\", \"true\" );\n    conf.setWorkingDirectory( new Path( \"./\" ) );\n    FileInputFormat.setInputPaths( conf, new Path( args[0] ) );\n    FileOutputFormat.setOutputPath( conf, new Path( args[1] ) );\n\n    // these are set so the job is run in the same\n    // JVM as the debugger - we are not submitting\n    // to MR Node.\n    conf.set( \"mapred.job.tracker\", \"local\" );\n    conf.set( \"fs.default.name\", \"local\" );\n\n    // The mapper, reducer and combiner classes.\n    File jar = new File( \"./dist/pentaho-big-data-plugin-TRUNK-SNAPSHOT.jar\" );\n    URLClassLoader loader = new URLClassLoader( new URL[] { jar.toURI().toURL() } );\n    conf.setMapperClass( (Class<? extends Mapper>) loader.loadClass( \"org.pentaho.hadoop.mapreduce.GenericTransMap\" ) );\n    // conf.setCombinerClass((Class<? extends Reducer>)\n    // loader.loadClass(\"org.pentaho.hadoop.mapreduce.GenericTransReduce\"));\n    conf.setReducerClass( (Class<? 
extends Reducer>) loader\n        .loadClass( \"org.pentaho.hadoop.mapreduce.GenericTransReduce\" ) );\n\n    TransExecutionConfiguration transExecConfig = new TransExecutionConfiguration();\n\n    TransMeta mapperTransMeta = new TransMeta( DefaultBowl.getInstance(), \"./samples/jobs/hadoop/weblogs-mapper.ktr\" );\n    TransConfiguration mapperTransConfig = new TransConfiguration( mapperTransMeta, transExecConfig );\n    conf.set( \"transformation-map-xml\", mapperTransConfig.getXML() );\n\n    TransMeta reducerTransMeta = new TransMeta( DefaultBowl.getInstance(), \"./samples/jobs/hadoop/weblogs-reducer.ktr\" );\n    TransConfiguration reducerTransConfig = new TransConfiguration( reducerTransMeta, transExecConfig );\n    conf.set( \"transformation-reduce-xml\", reducerTransConfig.getXML() );\n\n    // transformation data interface\n    conf.set( \"transformation-map-input-stepname\", \"Injector\" );\n    conf.set( \"transformation-map-output-stepname\", \"Output\" );\n    conf.set( \"transformation-reduce-input-stepname\", \"Injector\" );\n    conf.set( \"transformation-reduce-output-stepname\", \"Output\" );\n    conf.setOutputKeyClass( Text.class );\n    conf.setOutputValueClass( Text.class );\n\n    FileInputFormat.setInputPaths( conf, new Path( args[0] ) );\n    FileOutputFormat.setOutputPath( conf, new Path( args[1] ) );\n\n    List<String> other_args = new ArrayList<String>();\n    for ( int i = 0; i < args.length; ++i ) {\n      try {\n        if ( \"-m\".equals( args[i] ) ) {\n          conf.setNumMapTasks( Integer.parseInt( args[++i] ) );\n        } else if ( \"-r\".equals( args[i] ) ) {\n          conf.setNumReduceTasks( Integer.parseInt( args[++i] ) );\n        } else {\n          other_args.add( args[i] );\n        }\n      } catch ( NumberFormatException except ) {\n        System.out.println( \"ERROR: Integer expected instead of \" + args[i] );\n        return printUsage();\n      } catch ( ArrayIndexOutOfBoundsException except ) {\n        
System.out.println( \"ERROR: Required parameter missing from \" + args[i - 1] );\n        return printUsage();\n      }\n    }\n    // Make sure there are exactly 2 parameters left.\n    if ( other_args.size() != 2 ) {\n      System.out.println( \"ERROR: Wrong number of parameters: \" + other_args.size() + \" instead of 2.\" );\n      return printUsage();\n    }\n    FileInputFormat.setInputPaths( conf, other_args.get( 0 ) );\n    FileOutputFormat.setOutputPath( conf, new Path( other_args.get( 1 ) ) );\n\n    JobClient.runJob( conf );\n    return 0;\n  }\n\n  public static void main( String[] args ) throws Exception {\n    int res = ToolRunner.run( new Configuration(), new WebLogs(), args );\n    System.exit( res );\n  }\n}\n"
  },
  {
    "path": "legacy/src/test/resources/hadoop-configurations/.gitignore",
    "content": ""
  },
  {
    "path": "legacy/src/test/resources/master.log",
    "content": "all bootstrap actions complete and instance ready"
  },
  {
    "path": "legacy/src/test/resources/plugin.properties",
    "content": "# The Hadoop Configuration to use when communicating with a Hadoop cluster. This is used for all Hadoop client tools\n# including HDFS, Hive, HBase, and Sqoop.\n# For more configuration options specific to the Hadoop configuration chosen\n# here, see the config.properties file in that configuration's directory.\nactive.hadoop.configuration=hadoop-20\n\n# Path to the directory that contains the available Hadoop configurations\nhadoop.configurations.path=hadoop-configurations\n\n# Version of Kettle to use from the Kettle HDFS installation directory. This can be set globally here or overridden per job\n# as a User Defined property. If not set we will use the version of Kettle that is used to submit the Pentaho MapReduce job.\npmr.kettle.installation.id=\n\n# Installation path in HDFS for the Pentaho MapReduce Hadoop Distribution\n# The directory structure should follow this structure where {version} can be configured through the Pentaho MapReduce\n# User Defined properties as kettle.runtime.version\n#\n# /opt/pentaho/mapreduce/\n#  +- {version}/\n#  |   +- lib/\n#  |       ..\n#  |       +- kettle-core-{version}.jar\n#  |       +- kettle-engine-{version}.jar\n#  |       ..\n#  |   +- plugins/\n#  |       pentaho-big-data-plugin/\n#  |        .. 
(additional optional plugins)\n#  +- {another-version}/\n#  |   +- lib/\n#  |       ..\n#  |       +- kettle-core-{version}.jar\n#  |       +- kettle-engine-{version}.jar\n#  |       ..\n#  |   +- plugins/\n#  |       +- pentaho-big-data-plugin/\n#  |           ..\n#  |       +- my-custom-plugin/\n#  |           ..\npmr.kettle.dfs.install.dir=/opt/pentaho/mapreduce\n\n# Enables the use of Hadoop's Distributed Cache to store the Kettle environment required to execute Pentaho MapReduce\n# If this is disabled you must configure all TaskTracker nodes with the Pentaho for Hadoop Distribution\n# @deprecated This is deprecated and is provided as a migration path for existing installations.\npmr.use.distributed.cache=true\n\n# Pentaho MapReduce runtime archive to be preloaded into kettle.hdfs.install.dir/pmr.kettle.installation.id\npmr.libraries.archive.file=pentaho-mapreduce-libraries.zip\n\n# Additional plugins to be copied when Pentaho MapReduce's Kettle Environment does not exist on DFS. This should be a comma-separated\n# list of plugin folder names to copy.\n# e.g. pmr.kettle.additional.plugins=my-test-plugin,steps/DummyPlugin\npmr.kettle.additional.plugins=\n\npmr.create.unique.metastore.dir=true\n"
  },
  {
    "path": "legacy/src/test/resources/s3OutputMetaTest.ktr",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<step>\n  <name>S3 File Output</name>\n  <type>S3FileOutputPlugin</type>\n  <description/>\n  <distribute>Y</distribute>\n  <custom_distribution/>\n  <copies>1</copies>\n  <partitioning>\n    <method>none</method>\n    <schema_name/>\n  </partitioning>\n  <separator>&#x3b;</separator>\n  <enclosure>&#x22;</enclosure>\n  <enclosure_forced>N</enclosure_forced>\n  <enclosure_fix_disabled>N</enclosure_fix_disabled>\n  <header>Y</header>\n  <footer>N</footer>\n  <format>DOS</format>\n  <compression>None</compression>\n  <encoding/>\n  <endedLine/>\n  <fileNameInField>N</fileNameInField>\n  <fileNameField/>\n  <create_parent_folder>Y</create_parent_folder>\n  <file>\n    <name>s3&#x3a;&#x2f;&#x2f;s3&#x2f;pdi-13700t&#x2f;test</name>\n    <is_command>N</is_command>\n    <servlet_output>N</servlet_output>\n    <do_not_open_new_file_init>N</do_not_open_new_file_init>\n    <extention/>\n    <append>N</append>\n    <split>N</split>\n    <haspartno>N</haspartno>\n    <add_date>N</add_date>\n    <add_time>N</add_time>\n    <SpecifyFormat>Y</SpecifyFormat>\n    <date_time_format>-yyyy-MM-dd</date_time_format>\n    <add_to_result_filenames>Y</add_to_result_filenames>\n    <pad>N</pad>\n    <fast_dump>N</fast_dump>\n    <splitevery>0</splitevery>\n  </file>\n  <fields>\n  </fields>\n  <access_key>Encrypted 2be98afc86aa7f2e4cb79ce10bef2cf8b</access_key>\n  <secret_key>Encrypted 2be98afc86aa7f2e4cb79ce10bef2cf88</secret_key>\n  <cluster_schema/>\n  <remotesteps>\n    <input></input>\n    <output></output>\n  </remotesteps>\n  <GUI>\n    <xloc>440</xloc>\n    <yloc>72</yloc>\n    <draw>Y</draw>\n  </GUI>\n</step>"
  },
  {
    "path": "legacy/src/test/resources/test-settings.properties",
    "content": "hostname=hadoop-vm1.pentaho.com\nhdfsPort=9000\ntrackerPort=9001\nusername=username\npassword=password"
  },
  {
    "path": "legacy/src/test/resources/test-version.properties",
    "content": "version=X.Y.Z-TEST"
  },
  {
    "path": "legacy/src/test/resources/test.ktr",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<transformation>\n  <info>\n    <name>test</name>\n    <description/>\n    <extended_description/>\n    <trans_version/>\n    <trans_type>Normal</trans_type>\n    <directory>&#47;</directory>\n    <parameters>\n    </parameters>\n    <log>\n<trans-log-table><connection/>\n<schema/>\n<table/>\n<size_limit_lines/>\n<interval/>\n<timeout_days/>\n<field><id>ID_BATCH</id><enabled>Y</enabled><name>ID_BATCH</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>TRANSNAME</id><enabled>Y</enabled><name>TRANSNAME</name></field><field><id>STATUS</id><enabled>Y</enabled><name>STATUS</name></field><field><id>LINES_READ</id><enabled>Y</enabled><name>LINES_READ</name><subject/></field><field><id>LINES_WRITTEN</id><enabled>Y</enabled><name>LINES_WRITTEN</name><subject/></field><field><id>LINES_UPDATED</id><enabled>Y</enabled><name>LINES_UPDATED</name><subject/></field><field><id>LINES_INPUT</id><enabled>Y</enabled><name>LINES_INPUT</name><subject/></field><field><id>LINES_OUTPUT</id><enabled>Y</enabled><name>LINES_OUTPUT</name><subject/></field><field><id>LINES_REJECTED</id><enabled>Y</enabled><name>LINES_REJECTED</name><subject/></field><field><id>ERRORS</id><enabled>Y</enabled><name>ERRORS</name></field><field><id>STARTDATE</id><enabled>Y</enabled><name>STARTDATE</name></field><field><id>ENDDATE</id><enabled>Y</enabled><name>ENDDATE</name></field><field><id>LOGDATE</id><enabled>Y</enabled><name>LOGDATE</name></field><field><id>DEPDATE</id><enabled>Y</enabled><name>DEPDATE</name></field><field><id>REPLAYDATE</id><enabled>Y</enabled><name>REPLAYDATE</name></field><field><id>LOG_FIELD</id><enabled>Y</enabled><name>LOG_FIELD</name></field></trans-log-table>\n<perf-log-table><connection/>\n<schema/>\n<table/>\n<interval/>\n<timeout_days/>\n<field><id>ID_BATCH</id><enabled>Y</enabled><name>ID_BATCH</name></field><field><id>SEQ_NR</id><enabled>Y</enabled><name>SEQ_NR</name></field>
<field><id>LOGDATE</id><enabled>Y</enabled><name>LOGDATE</name></field><field><id>TRANSNAME</id><enabled>Y</enabled><name>TRANSNAME</name></field><field><id>STEPNAME</id><enabled>Y</enabled><name>STEPNAME</name></field><field><id>STEP_COPY</id><enabled>Y</enabled><name>STEP_COPY</name></field><field><id>LINES_READ</id><enabled>Y</enabled><name>LINES_READ</name></field><field><id>LINES_WRITTEN</id><enabled>Y</enabled><name>LINES_WRITTEN</name></field><field><id>LINES_UPDATED</id><enabled>Y</enabled><name>LINES_UPDATED</name></field><field><id>LINES_INPUT</id><enabled>Y</enabled><name>LINES_INPUT</name></field><field><id>LINES_OUTPUT</id><enabled>Y</enabled><name>LINES_OUTPUT</name></field><field><id>LINES_REJECTED</id><enabled>Y</enabled><name>LINES_REJECTED</name></field><field><id>ERRORS</id><enabled>Y</enabled><name>ERRORS</name></field><field><id>INPUT_BUFFER_ROWS</id><enabled>Y</enabled><name>INPUT_BUFFER_ROWS</name></field><field><id>OUTPUT_BUFFER_ROWS</id><enabled>Y</enabled><name>OUTPUT_BUFFER_ROWS</name></field></perf-log-table>\n<channel-log-table><connection/>\n<schema/>\n<table/>\n<timeout_days/>\n<field><id>ID_BATCH</id><enabled>Y</enabled><name>ID_BATCH</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>LOG_DATE</id><enabled>Y</enabled><name>LOG_DATE</name></field><field><id>LOGGING_OBJECT_TYPE</id><enabled>Y</enabled><name>LOGGING_OBJECT_TYPE</name></field><field><id>OBJECT_NAME</id><enabled>Y</enabled><name>OBJECT_NAME</name></field><field><id>OBJECT_COPY</id><enabled>Y</enabled><name>OBJECT_COPY</name></field><field><id>REPOSITORY_DIRECTORY</id><enabled>Y</enabled><name>REPOSITORY_DIRECTORY</name></field><field><id>FILENAME</id><enabled>Y</enabled><name>FILENAME</name></field><field><id>OBJECT_ID</id><enabled>Y</enabled><name>OBJECT_ID</name></field><field><id>OBJECT_REVISION</id><enabled>Y</enabled><name>OBJECT_REVISION</name></field><field><id>PARENT_CHANNEL_ID</id><enabled>Y</enabled><name>PARENT_
CHANNEL_ID</name></field><field><id>ROOT_CHANNEL_ID</id><enabled>Y</enabled><name>ROOT_CHANNEL_ID</name></field></channel-log-table>\n<step-log-table><connection/>\n<schema/>\n<table/>\n<timeout_days/>\n<field><id>ID_BATCH</id><enabled>Y</enabled><name>ID_BATCH</name></field><field><id>CHANNEL_ID</id><enabled>Y</enabled><name>CHANNEL_ID</name></field><field><id>LOG_DATE</id><enabled>Y</enabled><name>LOG_DATE</name></field><field><id>TRANSNAME</id><enabled>Y</enabled><name>TRANSNAME</name></field><field><id>STEPNAME</id><enabled>Y</enabled><name>STEPNAME</name></field><field><id>STEP_COPY</id><enabled>Y</enabled><name>STEP_COPY</name></field><field><id>LINES_READ</id><enabled>Y</enabled><name>LINES_READ</name></field><field><id>LINES_WRITTEN</id><enabled>Y</enabled><name>LINES_WRITTEN</name></field><field><id>LINES_UPDATED</id><enabled>Y</enabled><name>LINES_UPDATED</name></field><field><id>LINES_INPUT</id><enabled>Y</enabled><name>LINES_INPUT</name></field><field><id>LINES_OUTPUT</id><enabled>Y</enabled><name>LINES_OUTPUT</name></field><field><id>LINES_REJECTED</id><enabled>Y</enabled><name>LINES_REJECTED</name></field><field><id>ERRORS</id><enabled>Y</enabled><name>ERRORS</name></field><field><id>LOG_FIELD</id><enabled>N</enabled><name>LOG_FIELD</name></field></step-log-table>\n    </log>\n    <maxdate>\n      <connection/>\n      <table/>\n      <field/>\n      <offset>0.0</offset>\n      <maxdiff>0.0</maxdiff>\n    </maxdate>\n    <size_rowset>10000</size_rowset>\n    <sleep_time_empty>50</sleep_time_empty>\n    <sleep_time_full>50</sleep_time_full>\n    <unique_connections>N</unique_connections>\n    <feedback_shown>Y</feedback_shown>\n    <feedback_size>50000</feedback_size>\n    <using_thread_priorities>Y</using_thread_priorities>\n    <shared_objects_file/>\n    <capture_step_performance>N</capture_step_performance>\n    <step_performance_capturing_delay>1000</step_performance_capturing_delay>\n    
<step_performance_capturing_size_limit>100</step_performance_capturing_size_limit>\n    <dependencies>\n    </dependencies>\n    <partitionschemas>\n    </partitionschemas>\n    <slaveservers>\n    </slaveservers>\n    <clusterschemas>\n    </clusterschemas>\n  <modified_user>-</modified_user>\n  <modified_date>2010&#47;07&#47;15 10:12:26.133</modified_date>\n  </info>\n  <notepads>\n  </notepads>\n  <connection>\n    <name>Postgres</name>\n    <server>localhost</server>\n    <type>POSTGRESQL</type>\n    <access>Native</access>\n    <database>test</database>\n    <port>5432</port>\n    <username>postgres</username>\n    <password>Encrypted 2be98afc86aa7f2e4bb16bd64d980aac9</password>\n    <servername/>\n    <data_tablespace/>\n    <index_tablespace/>\n    <attributes>\n      <attribute><code>FORCE_IDENTIFIERS_TO_LOWERCASE</code><attribute>N</attribute></attribute>\n      <attribute><code>FORCE_IDENTIFIERS_TO_UPPERCASE</code><attribute>N</attribute></attribute>\n      <attribute><code>IS_CLUSTERED</code><attribute>N</attribute></attribute>\n      <attribute><code>PORT_NUMBER</code><attribute>5432</attribute></attribute>\n      <attribute><code>QUOTE_ALL_FIELDS</code><attribute>N</attribute></attribute>\n      <attribute><code>SUPPORTS_BOOLEAN_DATA_TYPE</code><attribute>N</attribute></attribute>\n      <attribute><code>USE_POOLING</code><attribute>N</attribute></attribute>\n    </attributes>\n  </connection>\n  <order>\n  <hop> <from>Generate World</from><to>Add constants</to><enabled>Y</enabled> </hop>  <hop> <from>Generate Hello</from><to>Add constants</to><enabled>Y</enabled> </hop>  <hop> <from>Add constants</from><to>Output</to><enabled>Y</enabled> </hop>  </order>\n  <step>\n    <name>Generate Hello</name>\n    <type>RowGenerator</type>\n    <description/>\n    <distribute>Y</distribute>\n    <copies>1</copies>\n         <partitioning>\n           <method>none</method>\n           <schema_name/>\n           </partitioning>\n    <fields>\n      <field>\n        
<name>key</name>\n        <type>String</type>\n        <format/>\n        <currency/>\n        <decimal/>\n        <group/>\n        <nullif>Hello</nullif>\n        <length>-1</length>\n        <precision>-1</precision>\n      </field>\n    </fields>\n    <limit>3</limit>\n     <cluster_schema/>\n <remotesteps>   <input>   </input>   <output>   </output> </remotesteps>    <GUI>\n      <xloc>308</xloc>\n      <yloc>210</yloc>\n      <draw>Y</draw>\n      </GUI>\n    </step>\n\n  <step>\n    <name>Output</name>\n    <type>Dummy</type>\n    <description/>\n    <distribute>Y</distribute>\n    <copies>1</copies>\n         <partitioning>\n           <method>none</method>\n           <schema_name/>\n           </partitioning>\n     <cluster_schema/>\n <remotesteps>   <input>   </input>   <output>   </output> </remotesteps>    <GUI>\n      <xloc>783</xloc>\n      <yloc>211</yloc>\n      <draw>Y</draw>\n      </GUI>\n    </step>\n\n  <step>\n    <name>Generate World</name>\n    <type>RowGenerator</type>\n    <description/>\n    <distribute>Y</distribute>\n    <copies>1</copies>\n         <partitioning>\n           <method>none</method>\n           <schema_name/>\n           </partitioning>\n    <fields>\n      <field>\n        <name>key</name>\n        <type>String</type>\n        <format/>\n        <currency/>\n        <decimal/>\n        <group/>\n        <nullif>world</nullif>\n        <length>-1</length>\n        <precision>-1</precision>\n      </field>\n    </fields>\n    <limit>2</limit>\n     <cluster_schema/>\n <remotesteps>   <input>   </input>   <output>   </output> </remotesteps>    <GUI>\n      <xloc>319</xloc>\n      <yloc>348</yloc>\n      <draw>Y</draw>\n      </GUI>\n    </step>\n\n  <step>\n    <name>Add constants</name>\n    <type>Constant</type>\n    <description/>\n    <distribute>Y</distribute>\n    <copies>1</copies>\n         <partitioning>\n           <method>none</method>\n           <schema_name/>\n           </partitioning>\n    <fields>\n      
<field>\n        <name>value</name>\n        <type>Integer</type>\n        <format/>\n        <currency/>\n        <decimal/>\n        <group/>\n        <nullif>1</nullif>\n        <length>-1</length>\n        <precision>-1</precision>\n      </field>\n    </fields>\n     <cluster_schema/>\n <remotesteps>   <input>   </input>   <output>   </output> </remotesteps>    <GUI>\n      <xloc>514</xloc>\n      <yloc>209</yloc>\n      <draw>Y</draw>\n      </GUI>\n    </step>\n\n  <step_error_handling>\n  </step_error_handling>\n   <slave-step-copy-partition-distribution>\n</slave-step-copy-partition-distribution>\n   <slave_transformation>N</slave_transformation>\n</transformation>\n"
  },
  {
    "path": "legacy-amazon/assemblies/plugin/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <artifactId>legacy-amazon-assemblies</artifactId>\n    <groupId>pentaho</groupId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pdi-legacy-amazon-plugin</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Legacy Amazon Plugin Distribution</name>\n\n  <properties>\n    <resources.directory>${project.basedir}/src/main/resources</resources.directory>\n    <assembly.dir>${project.build.directory}/assembly</assembly.dir>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pdi-legacy-amazon-core</artifactId>\n      <version>${project.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-ec2</artifactId>\n      <version>${aws-java-sdk.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "legacy-amazon/assemblies/plugin/src/assembly/assembly.xml",
    "content": "<assembly xmlns=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3\"\n          xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n          xsi:schemaLocation=\"http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.3 http://maven.apache.org/xsd/assembly-1.1.3.xsd\">\n  <id>zip</id>\n  <formats>\n    <format>zip</format>\n  </formats>\n\n  <baseDirectory></baseDirectory>\n\n  <fileSets>\n    <fileSet>\n      <directory>${resources.directory}</directory>\n      <outputDirectory>.</outputDirectory>\n      <filtered>true</filtered>\n    </fileSet>\n\n    <!-- the staging dir -->\n    <fileSet>\n      <directory>${assembly.dir}</directory>\n      <outputDirectory>.</outputDirectory>\n    </fileSet>\n  </fileSets>\n\n  <dependencySets>\n    <dependencySet>\n      <outputDirectory>.</outputDirectory>\n      <includes>\n        <include>pentaho:pdi-legacy-amazon-core:jar</include>\n      </includes>\n      <useProjectArtifact>false</useProjectArtifact>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <outputDirectory>.</outputDirectory>\n      <useTransitiveDependencies>false</useTransitiveDependencies>\n      <useProjectArtifact>false</useProjectArtifact>\n      <includes>\n        <include>pentaho:pdi-legacy-amazon-core:jar</include>\n      </includes>\n      <excludes>\n        <exclude>com.amazonaws:aws-java-sdk-ec2:jar</exclude>\n      </excludes>\n    </dependencySet>\n    <dependencySet>\n      <scope>runtime</scope>\n      <useProjectArtifact>false</useProjectArtifact>\n      <outputDirectory>lib</outputDirectory>\n      <excludes>\n        <exclude>pentaho:pdi-legacy-amazon-core:*</exclude>\n        <exclude>*:commons-lang:*</exclude>\n        <exclude>*:log4j:*</exclude>\n        <exclude>*:slf4j-api:*</exclude>\n      </excludes>\n    </dependencySet>\n  </dependencySets>\n</assembly>"
  },
  {
    "path": "legacy-amazon/assemblies/plugin/src/main/resources/version.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<version branch='TRUNK'>${project.version}</version>"
  },
  {
    "path": "legacy-amazon/assemblies/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-legacy-amazon</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>legacy-amazon-assemblies</artifactId>\n  <packaging>pom</packaging>\n\n  <name>PDI Legacy Amazon Plugin Assemblies</name>\n\n  <modules>\n    <module>plugin</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "legacy-amazon/core/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-legacy-amazon</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <artifactId>pdi-legacy-amazon-core</artifactId>\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n  <properties>\n    <common.version>3.3.0-v20070426</common.version>\n    <jets3t.version>0.9.4</jets3t.version>\n    <hadoop.version>0.20.2</hadoop.version>\n    <hbase.version>0.90.3</hbase.version>\n    <xml-apis.version>2.0.2</xml-apis.version>\n    <commands.version>3.3.0-I20070605-0010</commands.version>\n    <publish-sonar-phase>site</publish-sonar-phase>\n    <high-scale-lib.version>1.1.2</high-scale-lib.version>\n    <jline.version>0.9.94</jline.version>\n    <easymock.version>3.0</easymock.version>\n    <aws-java-sdk.version>1.12.791</aws-java-sdk.version>\n  </properties>\n\n  <!-- VERIFY THESE IMPORTS THAT WERE IN THE BUILD SECTION WHEN THE PLUGIN WAS OSGI. 
ARE THEY NEEDED?\n            <Import-Package>\n              org.apache.avro.*;resolution:=optional,org.slf4j.*,org.apache.logging.log4j.*\n            </Import-Package>\n            <Export-Package>\n              org.pentaho.di.core.namedcluster,\n              org.pentaho.di.core.namedcluster.model\n            </Export-Package>\n  -->\n\n  <build>\n    <resources>\n      <resource>\n        <filtering>true</filtering>\n        <directory>src/main/resources</directory>\n        <includes>\n          <include>**/*.properties</include>\n        </includes>\n      </resource>\n      <resource>\n        <filtering>false</filtering>\n        <directory>src/main/resources</directory>\n        <excludes>\n          <exclude>**/*.properties</exclude>\n        </excludes>\n      </resource>\n    </resources>\n    <testResources>\n      <testResource>\n        <filtering>true</filtering>\n        <directory>src/test/resources</directory>\n        <includes>\n          <include>**/*.properties</include>\n          <include>**/*.log</include>\n        </includes>\n      </testResource>\n    </testResources>\n    <plugins>\n      <plugin>\n        <artifactId>maven-surefire-plugin</artifactId>\n        <version>${maven-surefire-plugin.version}</version>\n        <configuration>\n          <testFailureIgnore>true</testFailureIgnore>\n        </configuration>\n      </plugin>\n    </plugins>\n  </build>\n\n  <dependencies>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-big-data-legacy-core</artifactId>\n      <version>${project.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>com.fasterxml.jackson.core</groupId>\n      <artifactId>jackson-databind</artifactId>\n    </dependency>\n    
<dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-iam</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>jmespath-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-core</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.fasterxml.jackson.dataformat</groupId>\n          <artifactId>jackson-dataformat-cbor</artifactId>\n        </exclusion>\n        <exclusion>\n          <groupId>software.amazon.ion</groupId>\n          <artifactId>ion-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-emr</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>jmespath-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-pricing</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>jmespath-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-ec2</artifactId>\n      <version>${aws-java-sdk.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>jmespath-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.amazonaws</groupId>\n      <artifactId>aws-java-sdk-s3</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>aws-java-sdk-kms</artifactId>\n        
</exclusion>\n        <exclusion>\n          <groupId>com.amazonaws</groupId>\n          <artifactId>jmespath-java</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>joda-time</groupId>\n      <artifactId>joda-time</artifactId>\n      <version>${dependency.joda-time.revision}</version>\n      <exclusions>\n        <exclusion>\n          <artifactId>*</artifactId>\n          <groupId>*</groupId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.core</groupId>\n      <artifactId>commands</artifactId>\n      <version>${commands.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.eclipse.equinox</groupId>\n      <artifactId>common</artifactId>\n      <version>${common.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>com.github.stephenc.high-scale-lib</groupId>\n      <artifactId>high-scale-lib</artifactId>\n      <version>${high-scale-lib.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.httpcomponents</groupId>\n      <artifactId>httpclient</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.httpcomponents</groupId>\n      <artifactId>httpcore</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>net.java.dev.jets3t</groupId>\n      <artifactId>jets3t</artifactId>\n      <version>${jets3t.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>jline</groupId>\n      <artifactId>jline</artifactId>\n      <version>${jline.version}</version>\n    </dependency>\n    <dependency>\n      <groupId>org.slf4j</groupId>\n      <artifactId>slf4j-api</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n     
 <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-ui-swt</artifactId>\n      <version>${pdi.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>com.google.guava</groupId>\n      <artifactId>guava</artifactId>\n      <version>${guava.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>commons-configuration</groupId>\n      <artifactId>commons-configuration</artifactId>\n      <version>${dependency.commons-configuration.revision}</version>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <version>${dependency.junit.revision}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.mockito</groupId>\n      <artifactId>mockito-core</artifactId>\n      <version>${mockito.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-engine</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.hadoop</groupId>\n      <artifactId>hadoop-core</artifactId>\n      <version>${hadoop.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.jboss.shrinkwrap</groupId>\n      <artifactId>shrinkwrap-impl-base</artifactId>\n      <version>1.0.0-alpha-12</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.easymock</groupId>\n      <artifactId>easymock</artifactId>\n      <version>${easymock.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.hbase</groupId>\n      
<artifactId>hbase</artifactId>\n      <version>${hbase.version}</version>\n      <scope>test</scope>\n      <exclusions>\n        <exclusion>\n          <artifactId>libthrift</artifactId>\n          <groupId>libthrift</groupId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>com.h2database</groupId>\n      <artifactId>h2</artifactId>\n      <version>${h2.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>xml-apis</groupId>\n      <artifactId>xml-apis</artifactId>\n      <version>${xml-apis.version}</version>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api-core</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.pentaho</groupId>\n      <artifactId>shim-api</artifactId>\n      <version>${pentaho-hadoop-shims.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho-kettle</groupId>\n      <artifactId>kettle-core</artifactId>\n      <version>${pdi.version}</version>\n      <classifier>tests</classifier>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-platform-api</artifactId>\n      <version>${platform.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-platform-core</artifactId>\n      <version>${platform.version}</version>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>pentaho</groupId>\n      <artifactId>pentaho-s3-vfs</artifactId>\n      <version>${project.version}</version>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n</project>\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/AbstractAmazonJobEntry.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon;\n\nimport org.pentaho.di.job.entry.JobEntryBase;\nimport org.pentaho.di.job.entry.JobEntryInterface;\n\n/**\n * created by: rfellows date: 5/24/12\n */\npublic abstract class AbstractAmazonJobEntry extends JobEntryBase implements Cloneable, JobEntryInterface {\n\n  protected String hadoopJobName;\n  protected String hadoopJobFlowId;\n  protected String accessKey = \"\";\n  protected String secretKey = \"\";\n  protected String sessionToken = \"\";\n  protected String region;\n  protected String ec2Role;\n  protected String emrRole;\n  protected String masterInstanceType;\n  protected String slaveInstanceType;\n  protected String numInstances = \"2\";\n  protected String emrRelease;\n  protected String stagingDir = \"\";\n  protected String cmdLineArgs;\n  protected String ec2SubnetId = \"\";\n  protected boolean alive;\n  protected boolean blocking;\n  protected boolean runOnNewCluster = true;\n  protected String loggingInterval = \"60\"; // 60 seconds default\n\n  public String getHadoopJobName() {\n    return hadoopJobName;\n  }\n\n  public void setHadoopJobName( String hadoopJobName ) {\n    this.hadoopJobName = hadoopJobName;\n  }\n\n  public String getHadoopJobFlowId() {\n    return hadoopJobFlowId;\n  }\n\n  public void setHadoopJobFlowId( String hadoopJobFlowId ) {\n    this.hadoopJobFlowId = hadoopJobFlowId;\n  }\n\n  public void setAccessKey( String accessKey ) {\n    this.accessKey = accessKey;\n  }\n\n  public String getAccessKey() {\n    return accessKey;\n  }\n\n  public void setSecretKey( String secretKey ) {\n  
  this.secretKey = secretKey;\n  }\n\n  public String getSecretKey() {\n    return secretKey;\n  }\n\n  public void setSessionToken( String sessionToken ) {\n    this.sessionToken = sessionToken;\n  }\n\n  public String getSessionToken() {\n    return sessionToken;\n  }\n\n  public String getStagingDir() {\n    return stagingDir;\n  }\n\n  public void setStagingDir( String stagingDir ) {\n    this.stagingDir = stagingDir;\n  }\n\n  public String getRegion() {\n    return region;\n  }\n\n  public void setRegion( String region ) {\n    this.region = region;\n  }\n\n  public String getEc2Role() {\n    return ec2Role;\n  }\n\n  public void setEc2Role( String ec2Role ) {\n    this.ec2Role = ec2Role;\n  }\n\n  public String getEmrRole() {\n    return emrRole;\n  }\n\n  public void setEmrRole( String emrRole ) {\n    this.emrRole = emrRole;\n  }\n\n  public String getMasterInstanceType() {\n    return masterInstanceType;\n  }\n\n  public void setMasterInstanceType( String masterInstanceType ) {\n    this.masterInstanceType = masterInstanceType;\n  }\n\n  public String getSlaveInstanceType() {\n    return slaveInstanceType;\n  }\n\n  public void setSlaveInstanceType( String slaveInstanceType ) {\n    this.slaveInstanceType = slaveInstanceType;\n  }\n\n  public String getNumInstances() {\n    return numInstances;\n  }\n\n  public void setNumInstances( String numInstances ) {\n    this.numInstances = numInstances;\n  }\n\n  public String getEmrRelease() {\n    return emrRelease;\n  }\n\n  public void setEmrRelease( String emrRelease ) {\n    this.emrRelease = emrRelease;\n  }\n\n  public String getCmdLineArgs() {\n    return cmdLineArgs;\n  }\n\n  public void setCmdLineArgs( String cmdLineArgs ) {\n    this.cmdLineArgs = cmdLineArgs;\n  }\n\n  public String getEc2SubnetId() {\n    return ec2SubnetId;\n  }\n\n  public void setEc2SubnetId( String ec2SubnetId ) {\n    this.ec2SubnetId = ec2SubnetId;\n  }\n\n  public boolean getAlive() {\n    return alive;\n  }\n\n  public void 
setAlive( boolean isAlive ) {\n    this.alive = isAlive;\n  }\n\n  public boolean getBlocking() {\n    return blocking;\n  }\n\n  public void setBlocking( boolean blocking ) {\n    this.blocking = blocking;\n  }\n\n  public boolean isRunOnNewCluster() {\n    return runOnNewCluster;\n  }\n\n  public void setRunOnNewCluster( boolean runOnNewCluster ) {\n    this.runOnNewCluster = runOnNewCluster;\n  }\n\n  public String getLoggingInterval() {\n    return loggingInterval;\n  }\n\n  public void setLoggingInterval( String loggingInterval ) {\n    this.loggingInterval = loggingInterval;\n  }\n\n  public String getAWSSecretKey() {\n    return environmentSubstitute( secretKey );\n  }\n\n  public String getAWSAccessKeyId() {\n    return environmentSubstitute( accessKey );\n  }\n\n  public String getAWSSessionToken() {\n    return environmentSubstitute( sessionToken );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/AbstractAmazonJobEntryDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon;\n\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.job.entry.JobEntryDialogInterface;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.di.ui.core.database.dialog.tags.ExtTextbox;\nimport org.pentaho.di.ui.job.entry.JobEntryDialog;\nimport org.pentaho.di.ui.spoon.XulSpoonSettingsManager;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.XulRunner;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.ui.xul.binding.DefaultBindingFactory;\nimport org.pentaho.ui.xul.containers.XulDialog;\nimport org.pentaho.ui.xul.swt.SwtXulLoader;\nimport org.pentaho.ui.xul.swt.SwtXulRunner;\n\nimport java.util.Collections;\nimport java.util.Enumeration;\nimport java.util.ResourceBundle;\n\n/**\n * Base functionality for a XUL-based Amazon job entry dialog\n */\npublic abstract class AbstractAmazonJobEntryDialog<E extends AbstractAmazonJobEntry> extends JobEntryDialog\n  implements JobEntryDialogInterface {\n\n\n  protected ResourceBundle bundle = new ResourceBundle() {\n    @Override\n    protected Object handleGetObject( String key ) {\n      return BaseMessages.getString( getMessagesClass(), key );\n    }\n\n    @Override\n    public Enumeration<String> getKeys() {\n      // getKeys method not used\n      return Collections.emptyEnumeration();\n    }\n  };\n\n  private 
AbstractAmazonJobExecutorController controller;\n\n  @SuppressWarnings( \"unchecked\" )\n  protected AbstractAmazonJobEntryDialog( Shell parent, JobEntryInterface jobEntry, Repository rep, JobMeta jobMeta )\n    throws XulException {\n    super( parent, jobEntry, rep, jobMeta );\n    init( (E) jobEntry );\n  }\n\n  /**\n   * @return the name of the class to use to look up localized messages\n   */\n  protected abstract Class<?> getMessagesClass();\n\n  /**\n   * @return the file name for the XUL document to load for this dialog\n   */\n  protected abstract String getXulFile();\n\n  /**\n   * Create the controller for this dialog\n   *\n   * @param container      XUL DOM container loaded from the file path returned by {@link #getXulFile()}\n   * @param jobEntry       Job entry this dialog supports\n   * @param bindingFactory Binding factory to create bindings with\n   * @return Controller capable of handling requests for this dialog\n   */\n  protected abstract AbstractAmazonJobExecutorController createController( XulDomContainer container,\n                                                                           AbstractAmazonJobEntry jobEntry,\n                                                                           BindingFactory bindingFactory );\n\n  /**\n   * Initialize this dialog for the job entry instance provided.\n   *\n   * @param jobEntry The job entry this dialog supports.\n   */\n  protected void init( AbstractAmazonJobEntry jobEntry ) throws XulException {\n    SwtXulLoader swtXulLoader = new SwtXulLoader();\n    // Register the settings manager so dialog position and size is restored\n    swtXulLoader.setSettingsManager( XulSpoonSettingsManager.getInstance() );\n    swtXulLoader.registerClassLoader( getClass().getClassLoader() );\n    // Register Kettle's variable text box so we can reference it from XUL\n    swtXulLoader.register( \"VARIABLETEXTBOX\", ExtTextbox.class.getName() );\n    swtXulLoader.setOuterContext( shell );\n\n    // Load the 
XUL document with the dialog defined in it\n    XulDomContainer container = swtXulLoader.loadXul( getXulFile(), bundle );\n\n    // Create the controller with a default binding factory for the document we just loaded\n    BindingFactory bf = new DefaultBindingFactory();\n    bf.setDocument( container.getDocumentRoot() );\n    controller = createController( container, jobEntry, bf );\n    container.addEventHandler( controller );\n    setDialogSize();\n\n    // Load up the SWT-XUL runtime and initialize it with our container\n    final XulRunner runner = new SwtXulRunner();\n    runner.addContainer( container );\n    runner.initialize();\n  }\n\n  @Override\n  public JobEntryInterface open() {\n    return controller.open();\n  }\n\n  private void setDialogSize() {\n    XulSpoonSettingsManager settingsManager = XulSpoonSettingsManager.getInstance();\n    XulDialog dialog = controller.getDialog();\n\n    if ( Const.isWindows() ) {\n      settingsManager.storeSetting( controller.getDialogElementId() + \".Width\", String.valueOf( dialog.getWidth() ) );\n      settingsManager.storeSetting( controller.getDialogElementId() + \".Height\", String.valueOf( dialog.getHeight() ) );\n    } else {\n      settingsManager.storeSetting( controller.getDialogElementId() + \".Width\",\n        String.valueOf( dialog.getAttributeValue( \"linuxWidth\" ) ) );\n      settingsManager.storeSetting( controller.getDialogElementId() + \".Height\",\n        String.valueOf( dialog.getAttributeValue( \"linuxHeight\" ) ) );\n    }\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/AbstractAmazonJobExecutor.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.apache.commons.vfs2.auth.StaticUserAuthenticator;\nimport org.apache.commons.vfs2.impl.DefaultFileSystemConfigBuilder;\nimport org.apache.logging.log4j.Level;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.core.Appender;\nimport org.pentaho.amazon.client.ClientFactoriesManager;\nimport org.pentaho.amazon.client.ClientType;\nimport org.pentaho.amazon.client.api.Ec2Client;\nimport org.pentaho.amazon.client.api.EmrClient;\nimport org.pentaho.amazon.client.api.S3Client;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.Result;\nimport org.pentaho.di.core.ResultFile;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.logging.LogLevel;\nimport org.pentaho.di.core.logging.log4j.Log4jKettleLayout;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.platform.api.util.LogUtil;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.io.OutputStreamWriter;\nimport java.nio.charset.StandardCharsets;\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.EnumMap;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\n/**\n * Created by Aliaksandr_Zhuk on 1/31/2018.\n */\npublic abstract 
class AbstractAmazonJobExecutor extends AbstractAmazonJobEntry {\n\n  private static Class<?> PKG = AbstractAmazonJobExecutor.class;\n\n  private Appender appender = null;\n  private S3Client s3Client;\n  protected EmrClient emrClient;\n  protected String key;\n  protected int numInsts = 2;\n  FileObject file;\n  /**\n   * Maps Kettle LogLevels to Log4j Levels\n   */\n  public static final Map<LogLevel, Level> LOG_LEVEL_MAP;\n\n  static {\n    EnumMap<LogLevel, Level> map = new EnumMap<>( LogLevel.class );\n    map.put( LogLevel.BASIC, Level.INFO );\n    map.put( LogLevel.MINIMAL, Level.INFO );\n    map.put( LogLevel.DEBUG, Level.DEBUG );\n    map.put( LogLevel.ERROR, Level.ERROR );\n    map.put( LogLevel.DETAILED, Level.INFO );\n    map.put( LogLevel.ROWLEVEL, Level.DEBUG );\n    map.put( LogLevel.NOTHING, Level.OFF );\n    LOG_LEVEL_MAP = Collections.unmodifiableMap( map );\n  }\n\n  private Level getLog4jLevel( LogLevel level ) {\n    Level log4jLevel = LOG_LEVEL_MAP.get( level );\n    return log4jLevel != null ? 
log4jLevel : Level.INFO;\n  }\n\n  public void setupLogFile() {\n    String logFileName = \"pdi-\" + this.getName();\n    try {\n      file = KettleVFS.getInstance( parentJobMeta.getBowl() )\n        .createTempFile( logFileName, \".log\", System.getProperty( \"java.io.tmpdir\" ) );\n      appender =  LogUtil.makeAppender( logFileName,\n              new OutputStreamWriter( KettleVFS.getInstance( parentJobMeta.getBowl() ).getOutputStream( file, true ),\n                StandardCharsets.UTF_8 ), new Log4jKettleLayout( StandardCharsets.UTF_8, true ) );\n      LogUtil.addAppender( appender, LogManager.getLogger( \"org.pentaho.di.job.Job\" ), getLog4jLevel( parentJob.getLogLevel() ) );\n    } catch ( Exception e ) {\n      logError( BaseMessages.getString( PKG,\n        \"AbstractAmazonJobExecutor.FailedToOpenLogFile\", logFileName, e.toString() ) );\n      logError( Const.getStackTracker( e ) );\n    }\n  }\n\n  public String getStagingBucketName() throws FileSystemException, KettleException {\n\n    String bucketName = \"\";\n\n    String pathToStagingDir = getS3FileObjectPath();\n    bucketName = pathToStagingDir.substring( 1, pathToStagingDir.length() ).split( \"/\" )[ 0 ];\n\n    return bucketName;\n  }\n\n  @VisibleForTesting\n  protected String getS3FileObjectPath() throws FileSystemException, KettleFileException {\n    FileSystemOptions opts = new FileSystemOptions();\n    DefaultFileSystemConfigBuilder.getInstance().setUserAuthenticator( opts,\n      new StaticUserAuthenticator( null, getAWSAccessKeyId(), getAWSSecretKey() ) );\n    FileObject stagingDirFileObject = KettleVFS.getInstance( parentJobMeta.getBowl() )\n      .getFileObject( stagingDir, getVariables(), opts );\n\n    return stagingDirFileObject.getName().getPath();\n  }\n\n  @VisibleForTesting\n  protected String getKeyFromS3StagingDir() throws KettleFileException, FileSystemException {\n\n    String pathToStagingDir = getS3FileObjectPath();\n    StringBuilder sb = new StringBuilder( 
pathToStagingDir );\n\n    sb.replace( 0, 1, \"\" );\n    if ( sb.indexOf( \"/\" ) == -1 ) {\n      return null;\n    }\n    sb.replace( 0, sb.indexOf( \"/\" ) + 1, \"\" );\n\n    if ( sb.length() > 0 ) {\n      return sb.toString();\n    } else {\n      return null;\n    }\n  }\n\n  protected void setS3BucketKey( FileObject stagingFile ) throws KettleFileException, FileSystemException {\n\n    String keyFromStagingDir = getKeyFromS3StagingDir();\n\n    if ( keyFromStagingDir == null ) {\n      keyFromStagingDir = \"\";\n    }\n\n    StringBuilder sb = new StringBuilder( keyFromStagingDir );\n    if ( sb.length() > 0 ) {\n      sb.append( \"/\" );\n    }\n    sb.append( stagingFile.getName().getBaseName() );\n    key = sb.toString();\n  }\n\n  public String getStagingS3BucketUrl( String stagingBucketName ) {\n    return \"s3://\" + stagingBucketName;\n  }\n\n  public String getStagingS3FileUrl( String stagingBucketName ) {\n    return \"s3://\" + stagingBucketName + \"/\" + key;\n  }\n\n  public String buildFilename( String filename ) {\n    filename = environmentSubstitute( filename );\n    return filename;\n  }\n\n  public abstract File createStagingFile() throws IOException, KettleException;\n\n  public abstract String getStepBootstrapActions();\n\n  public abstract String getMainClass() throws Exception;\n\n  public abstract String getStepType();\n\n  /**\n   * Retrieves a list of available VPC subnets from AWS for the configured region and credentials.\n   * This method can be called from UI controllers to populate subnet dropdown lists.\n   * \n   * @param accessKey AWS access key ID\n   * @param secretKey AWS secret access key\n   * @param sessionToken AWS session token (can be null or empty)\n   * @param region AWS region (e.g., \"us-east-1\")\n   * @return List of SubnetInfo objects containing subnet details, or empty list if error occurs\n   */\n  public static List<Ec2Client.SubnetInfo> getAvailableSubnets( String accessKey, String secretKey, \n          
                                                       String sessionToken, String region ) {\n    try {\n      ClientFactoriesManager manager = ClientFactoriesManager.getInstance();\n      Ec2Client ec2Client = manager.createClient( accessKey, secretKey, sessionToken, region, ClientType.EC2 );\n      return ec2Client.getAvailableSubnets();\n    } catch ( Exception e ) {\n      // Return empty list if there's an error\n      return new ArrayList<>();\n    }\n  }\n\n  private void runNewJobFlow( String stagingS3FileUrl, String stagingS3BucketUrl ) throws Exception {\n    emrClient\n      .runJobFlow( stagingS3FileUrl, stagingS3BucketUrl, getStepType(), getMainClass(), getStepBootstrapActions(),\n        this );\n  }\n\n  private void addStepToExistingJobFlow( String stagingS3FileUrl, String stagingS3BucketUrl ) throws Exception {\n    emrClient.addStepToExistingJobFlow( stagingS3FileUrl, stagingS3BucketUrl, getStepType(), getMainClass(), this );\n  }\n\n  private void logError( String stagingBucketName, String stepId ) {\n    logError( s3Client.readStepLogsFromS3( stagingBucketName, hadoopJobFlowId, stepId ) );\n  }\n\n  private void initAmazonClients() {\n    ClientFactoriesManager manager = ClientFactoriesManager.getInstance();\n    s3Client = manager\n      .createClient( getAWSAccessKeyId(), getAWSSecretKey(), getSessionToken(), region, ClientType.S3 );\n    emrClient = manager\n      .createClient( getAWSAccessKeyId(), getAWSSecretKey(), getSessionToken(), region, ClientType.EMR );\n  }\n\n  @Override\n  public Result execute( Result result, int arg1 ) throws KettleException {\n\n    setupLogFile(  );\n\n    try {\n      initAmazonClients();\n\n      String stagingBucketName = getStagingBucketName();\n      String stagingS3BucketUrl = getStagingS3BucketUrl( stagingBucketName );\n\n      s3Client.createBucketIfNotExists( stagingBucketName );\n\n      File tmpFile = createStagingFile();\n\n      // delete old jar if needed\n      try {\n        
s3Client.deleteObjectFromBucket( stagingBucketName, key );\n      } catch ( Exception ex ) {\n        logError( Const.getStackTracker( ex ) );\n      }\n\n      // put jar in s3 staging bucket\n      s3Client.putObjectInBucket( stagingBucketName, key, tmpFile );\n      String stagingS3FileUrl = getStagingS3FileUrl( stagingBucketName );\n\n      if ( runOnNewCluster ) {\n        // Determine the instances for Hadoop cluster.\n        String numInstancesS = environmentSubstitute( numInstances );\n        try {\n          numInsts = Integer.parseInt( numInstancesS );\n        } catch ( NumberFormatException e ) {\n          logError( BaseMessages\n            .getString( PKG, \"AbstractAmazonJobExecutor.InstanceNumber.Error\", numInstancesS ) );\n        }\n        runNewJobFlow( stagingS3FileUrl, stagingS3BucketUrl );\n        hadoopJobFlowId = emrClient.getHadoopJobFlowId();\n      } else {\n        addStepToExistingJobFlow( stagingS3FileUrl, stagingS3BucketUrl );\n      }\n      // Set a logging interval.\n      String loggingIntervalS = environmentSubstitute( loggingInterval );\n      int logIntv = 10;\n      try {\n        logIntv = Integer.parseInt( loggingIntervalS );\n      } catch ( NumberFormatException ex ) {\n        logError( BaseMessages.getString( PKG,\n          \"AbstractAmazonJobExecutor.LoggingInterval.Error\", loggingIntervalS ) );\n      }\n      // monitor and log if intended.\n      if ( blocking ) {\n        try {\n          if ( log.isBasic() ) {\n            while ( emrClient.isRunning() ) {\n\n              if ( isJobStoppedByUser() ) {\n                setResultError( result );\n                break;\n              }\n\n              if ( emrClient.getCurrentClusterState() == null || emrClient.getCurrentClusterState().isEmpty() ) {\n                break;\n              }\n              logBasic( hadoopJobName\n                + \" \" + BaseMessages\n                .getString( PKG, \"AbstractAmazonJobExecutor.JobFlowExecutionStatus\", 
hadoopJobFlowId )\n                + emrClient.getCurrentClusterState() + \" \" );\n\n              logBasic( hadoopJobName\n                + \" \" + BaseMessages\n                .getString( PKG, \"AbstractAmazonJobExecutor.JobFlowStepStatus\", emrClient.getStepId() )\n                + emrClient.getCurrentStepState() + \" \" );\n\n              try {\n                Thread.sleep( logIntv * 1000 );\n              } catch ( InterruptedException ie ) {\n                logError( Const.getStackTracker( ie ) );\n              }\n            }\n\n            if ( emrClient.isClusterTerminated() && emrClient.isStepNotSuccess() ) {\n              setResultError( result );\n              logError( hadoopJobName\n                + \" \" + BaseMessages\n                .getString( PKG, \"AbstractAmazonJobExecutor.JobFlowExecutionStatus\", hadoopJobFlowId )\n                + emrClient.getCurrentClusterState() );\n            }\n\n            if ( emrClient.isStepNotSuccess() ) {\n              setResultError( result );\n              logBasic( hadoopJobName\n                + \" \" + BaseMessages\n                .getString( PKG, \"AbstractAmazonJobExecutor.JobFlowStepStatus\", emrClient.getStepId() )\n                + emrClient.getCurrentStepState() + \" \" );\n\n              if ( emrClient.isStepFailed() ) {\n                logError( emrClient.getJobFlowLogUri(), emrClient.getStepId() );\n              }\n            }\n          }\n        } catch ( Exception e ) {\n          logError( e.getMessage(), e );\n        }\n      }\n    } catch ( Throwable t ) {\n      t.printStackTrace();\n      setResultError( result );\n      logError( t.getMessage(), t );\n    }\n\n    if ( appender != null ) {\n      LogUtil.removeAppender(appender, LogManager.getLogger());\n      ResultFile resultFile =\n        new ResultFile( ResultFile.FILE_TYPE_LOG, file, parentJob.getJobname(), getName() );\n      result.getResultFiles().put( resultFile.getFile().toString(), resultFile );\n    
}\n    return result;\n  }\n\n  private boolean isJobStoppedByUser() {\n    if ( getParentJob().isInterrupted() || getParentJob().isStopped() ) {\n      return emrClient.stopSteps();\n    }\n    return false;\n  }\n\n  private void setResultError( Result result ) {\n    result.setStopped( true );\n    result.setNrErrors( 1 );\n    result.setResult( false );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/AbstractAmazonJobExecutorController.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\npackage org.pentaho.amazon;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport com.google.common.base.Strings;\nimport org.apache.commons.lang.exception.ExceptionUtils;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.apache.commons.vfs2.auth.StaticUserAuthenticator;\nimport org.apache.commons.vfs2.impl.DefaultFileSystemConfigBuilder;\nimport org.eclipse.swt.SWT;\nimport org.eclipse.swt.widgets.Composite;\nimport org.eclipse.swt.widgets.MessageBox;\nimport org.eclipse.swt.widgets.Shell;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.amazon.client.ClientFactoriesManager;\nimport org.pentaho.amazon.client.ClientType;\nimport org.pentaho.amazon.client.api.AimClient;\nimport org.pentaho.amazon.client.api.Ec2Client;\nimport org.pentaho.amazon.client.api.PricingClient;\nimport org.pentaho.amazon.client.api.S3Client;\nimport org.pentaho.amazon.s3.S3VfsFileChooserHelper;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleFileException;\nimport org.pentaho.di.core.plugins.JobEntryPluginType;\nimport org.pentaho.di.core.plugins.PluginInterface;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.variables.Variables;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport 
org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.ui.core.PropsUI;\nimport org.pentaho.di.ui.core.database.dialog.tags.ExtTextbox;\nimport org.pentaho.di.ui.core.gui.WindowProperty;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.di.ui.util.HelpUtils;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.binding.Binding;\nimport org.pentaho.ui.xul.binding.BindingConvertor;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.ui.xul.components.XulButton;\nimport org.pentaho.ui.xul.components.XulMenuList;\nimport org.pentaho.ui.xul.components.XulRadio;\nimport org.pentaho.ui.xul.components.XulRadioGroup;\nimport org.pentaho.ui.xul.components.XulTextbox;\nimport org.pentaho.ui.xul.containers.XulDeck;\nimport org.pentaho.ui.xul.containers.XulDialog;\nimport org.pentaho.ui.xul.impl.AbstractXulEventHandler;\nimport org.pentaho.ui.xul.swt.tags.SwtDialog;\nimport org.pentaho.ui.xul.util.AbstractModelList;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\nimport java.io.IOException;\nimport java.lang.reflect.InvocationTargetException;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\nimport java.util.stream.Collectors;\n\npublic abstract class AbstractAmazonJobExecutorController extends AbstractXulEventHandler {\n\n  private static final Class<?> PKG = AbstractAmazonJobExecutorController.class;\n\n  /* property change names */\n  public static final String JOB_ENTRY_NAME = \"jobEntryName\";\n  public static final String HADOOP_JOB_NAME = \"hadoopJobName\";\n  public static final String HADOOP_JOB_FLOW_ID = \"hadoopJobFlowId\";\n  public static final String JAR_URL = \"jarUrl\";\n  public static final String ACCESS_KEY = \"accessKey\";\n  public static final String SECRET_KEY = \"secretKey\";\n  public static final String SESSION_TOKEN = \"sessionToken\";\n  public static final String STAGING_DIR = 
\"stagingDir\";\n  public static final String STAGING_DIR_FILE = \"stagingDirFile\";\n  public static final String NUM_INSTANCES = \"numInstances\";\n  public static final String REGION = \"region\";\n  public static final String MASTER_INSTANCE_TYPE = \"masterInstanceType\";\n  public static final String SLAVE_INSTANCE_TYPE = \"slaveInstanceType\";\n  public static final String EC2_ROLE = \"ec2Role\";\n  public static final String EMR_ROLE = \"emrRole\";\n  public static final String EC2_SUBNET_ID = \"ec2SubnetId\";\n  public static final String CMD_LINE_ARGS = \"commandLineArgs\";\n  public static final String BLOCKING = \"blocking\";\n  public static final String RUN_ON_NEW_CLUSTER = \"runOnNewCluster\";\n  public static final String LOGGING_INTERVAL = \"loggingInterval\";\n  public static final String ALIVE = \"alive\";\n\n  /* XUL Element id's */\n  public static final String XUL_JOBENTRY_NAME = \"jobentry-name\";\n  public static final String XUL_JOBENTRY_HADOOPJOB_NAME = \"jobentry-hadoopjob-name\";\n  public static final String XUL_ACCESS_KEY = \"access-key\";\n  public static final String XUL_SECRET_KEY = \"secret-key\";\n  public static final String XUL_SESSION_TOKEN = \"session-token\";\n  public static final String XUL_EMR_SETTINGS = \"emr-settings\";\n  public static final String XUL_REGION = \"region\";\n  public static final String XUL_EC2_ROLE = \"ec2-role\";\n  public static final String XUL_EMR_ROLE = \"emr-role\";\n  public static final String XUL_EC2_SUBNET_ID = \"ec2-subnet-id\";\n  public static final String XUL_MASTER_INSTANCE_TYPE = \"master-instance-type\";\n  public static final String XUL_SLAVE_INSTANCE_TYPE = \"slave-instance-type\";\n  public static final String XUL_EMR_RELEASE = \"emr-release\";\n  public static final String XUL_JOBENTRY_HADOOPJOB_FLOW_ID = \"jobentry-hadoopjob-flow-id\";\n  public static final String XUL_S3_STAGING_DIRECTORY = \"s3-staging-directory\";\n  public static final String XUL_COMMAND_LINE_ARGUMENTS = 
\"command-line-arguments\";\n  public static final String XUL_NUM_INSTANCES = \"num-instances\";\n  public static final String XUL_BLOCKING = \"blocking\";\n  public static final String XUL_LOGGING_INTERVAL1 = \"logging-interval\";\n  public static final String XUL_ALIVE = \"alive\";\n  public static final String XUL_AMAZON_EMR_JOB_ENTRY_DIALOG = \"amazon-emr-job-entry-dialog\";\n  public static final String XUL_AMAZON_EMR_ERROR_DIALOG = \"amazon-emr-error-dialog\";\n  public static final String XUL_AMAZON_EMR_ERROR_MESSAGE = \"amazon-emr-error-message\";\n  public static final String XUL_CLUSTER_TAB = \"cluster-tab\";\n  public static final String XUL_NEW_CLUSTER_DECK = \"new-cluster\";\n  public static final String XUL_EXISTING_CLUSTER_DECK = \"existing-cluster\";\n  public static final String XUL_CLUSTER_MODE = \"cluster-mode\";\n\n  private static final String EC2_DEFAULT_ROLE = \"EMR_EC2_DefaultRole\";\n  private static final String EMR_DEFAULT_ROLE = \"EMR_DefaultRole\";\n\n  private static final String DISABLED_FLAG = \"disabled\";\n  private static final String BOOLEAN_TO_STR_CONVERSION_ERROR = \"Boolean to String conversion is not supported\";\n\n  protected static final String[] XUL_EMR_MENU_ID_ARRAY\n    = {XUL_EC2_ROLE, XUL_EMR_ROLE, XUL_MASTER_INSTANCE_TYPE, XUL_SLAVE_INSTANCE_TYPE, XUL_EMR_RELEASE,\n    XUL_EC2_SUBNET_ID};\n\n  protected String jobEntryName;\n  protected String hadoopJobName;\n  protected String hadoopJobFlowId;\n  protected String accessKey = \"\";\n  protected String secretKey = \"\";\n  protected String sessionToken = \"\";\n\n  protected String stagingDir = \"\";\n  protected FileObject stagingDirFile = null;\n  protected String jarUrl = \"\";\n  protected boolean alive = false;\n\n  protected String numInstances = \"2\";\n  protected String masterInstanceType;\n  protected String slaveInstanceType;\n\n  protected String region;\n  protected String emrRelease;\n\n  protected String ec2Role;\n  protected String emrRole;\n  
protected String ec2SubnetId;\n\n  protected String commandLineArgs;\n  protected boolean blocking;\n  protected boolean runOnNewCluster;\n  protected String loggingInterval = \"60\"; // 60 seconds\n\n  protected VfsFileChooserDialog fileChooserDialog;\n  protected S3VfsFileChooserHelper helper;\n\n  // Generically typed fields\n  protected AbstractAmazonJobEntry jobEntry; // AbstractJobEntry<BlockableJobConfig>\n\n  // common fields\n  protected XulDomContainer container;\n  protected BindingFactory bindingFactory;\n  protected List<Binding> bindings;\n\n  private AbstractModelList<String> masterInstanceTypes;\n  private AbstractModelList<String> slaveInstanceTypes;\n  private AbstractModelList<String> regions;\n  private AbstractModelList<String> ec2Roles;\n  private AbstractModelList<String> emrRoles;\n  private AbstractModelList<String> releases;\n  private AbstractModelList<String> ec2Subnets;\n\n  protected boolean suppressEventHandling = false;\n\n  public AbstractAmazonJobExecutorController( XulDomContainer container, AbstractAmazonJobEntry jobEntry,\n                                              BindingFactory bindingFactory ) {\n\n    this.jobEntry = jobEntry;\n    this.container = container;\n    this.bindingFactory = bindingFactory;\n\n    regions = new AbstractModelList<>();\n    ec2Roles = new AbstractModelList<>();\n    emrRoles = new AbstractModelList<>();\n    masterInstanceTypes = new AbstractModelList<>();\n    slaveInstanceTypes = new AbstractModelList<>();\n    releases = new AbstractModelList<>();\n    ec2Subnets = new AbstractModelList<>();\n    bindings = new ArrayList<>();\n  }\n\n  public AbstractAmazonJobExecutorController() {\n  }\n\n  protected void initializeEmrSettingsGroupMenuFields() {\n    populateRegions();\n    populateEc2Roles();\n    populateEmrRoles();\n    populateMasterInstanceTypes();\n    populateSlaveInstanceTypes();\n    populateReleases();\n  }\n\n  protected void initializeTextFields() {\n\n    XulTextbox numInstances 
= ( XulTextbox ) container.getDocumentRoot().getElementById( XUL_NUM_INSTANCES );\n    numInstances.setValue( getNumInstances() );\n    XulTextbox loggingInterval\n      = ( XulTextbox ) container.getDocumentRoot().getElementById( XUL_LOGGING_INTERVAL1 );\n    loggingInterval.setValue( getLoggingInterval() );\n\n    ExtTextbox tempBox = ( ExtTextbox ) container.getDocumentRoot().getElementById( XUL_ACCESS_KEY );\n    tempBox.setVariableSpace( getVariableSpace() );\n    tempBox = ( ExtTextbox ) container.getDocumentRoot().getElementById( XUL_SECRET_KEY );\n    tempBox.setVariableSpace( getVariableSpace() );\n    tempBox = ( ExtTextbox ) container.getDocumentRoot().getElementById( XUL_SESSION_TOKEN );\n    tempBox.setVariableSpace( getVariableSpace() );\n    tempBox = ( ExtTextbox ) container.getDocumentRoot().getElementById( XUL_JOBENTRY_HADOOPJOB_NAME );\n    tempBox.setVariableSpace( getVariableSpace() );\n    tempBox = ( ExtTextbox ) container.getDocumentRoot().getElementById( XUL_JOBENTRY_HADOOPJOB_FLOW_ID );\n    tempBox.setVariableSpace( getVariableSpace() );\n    tempBox = ( ExtTextbox ) container.getDocumentRoot().getElementById( XUL_S3_STAGING_DIRECTORY );\n    tempBox.setVariableSpace( getVariableSpace() );\n    tempBox = ( ExtTextbox ) container.getDocumentRoot().getElementById( XUL_COMMAND_LINE_ARGUMENTS );\n    tempBox.setVariableSpace( getVariableSpace() );\n    tempBox = ( ExtTextbox ) container.getDocumentRoot().getElementById( XUL_NUM_INSTANCES );\n    tempBox.setVariableSpace( getVariableSpace() );\n    tempBox = ( ExtTextbox ) container.getDocumentRoot().getElementById( XUL_LOGGING_INTERVAL1 );\n    tempBox.setVariableSpace( getVariableSpace() );\n  }\n\n  protected void createBindings() {\n\n    bindingFactory.setBindingType( Binding.Type.BI_DIRECTIONAL );\n\n    bindings.add( bindingFactory.createBinding( regions, \"children\", XUL_REGION, \"elements\" ) );\n    bindingFactory.createBinding( XUL_REGION, \"selectedIndex\", this, 
\"selectedRegion\",\n      new BindingConvertor<Integer, String>() {\n        public String sourceToTarget( final Integer index ) {\n          if ( index == -1 ) {\n            return null;\n          }\n          return regions.get( index );\n        }\n\n        public Integer targetToSource( final String str ) {\n          return regions.indexOf( str );\n        }\n      } );\n\n    bindings.add( bindingFactory.createBinding( ec2Roles, \"children\", XUL_EC2_ROLE, \"elements\" ) );\n    bindingFactory.createBinding( XUL_EC2_ROLE, \"selectedIndex\", this, \"selectedEc2Role\",\n      new BindingConvertor<Integer, String>() {\n        public String sourceToTarget( final Integer index ) {\n          if ( index == -1 ) {\n            return null;\n          }\n          return ec2Roles.get( index );\n        }\n\n        public Integer targetToSource( final String role ) {\n          return ec2Roles.indexOf( role );\n        }\n      } );\n\n    bindings.add( bindingFactory.createBinding( emrRoles, \"children\", XUL_EMR_ROLE, \"elements\" ) );\n    bindingFactory.createBinding( XUL_EMR_ROLE, \"selectedIndex\", this, \"selectedEmrRole\",\n      new BindingConvertor<Integer, String>() {\n        public String sourceToTarget( final Integer index ) {\n          if ( index == -1 ) {\n            return null;\n          }\n          return emrRoles.get( index );\n        }\n\n        public Integer targetToSource( final String role ) {\n          return emrRoles.indexOf( role );\n        }\n      } );\n\n    bindings\n      .add( bindingFactory.createBinding( masterInstanceTypes, \"children\", XUL_MASTER_INSTANCE_TYPE, \"elements\" ) );\n    bindingFactory.createBinding( XUL_MASTER_INSTANCE_TYPE, \"selectedIndex\", this, \"selectedMasterInstanceType\",\n      new BindingConvertor<Integer, String>() {\n        public String sourceToTarget( final Integer index ) {\n          if ( index == -1 ) {\n            return null;\n          }\n          return masterInstanceTypes.get( 
index );\n        }\n\n        public Integer targetToSource( final String str ) {\n          return masterInstanceTypes.indexOf( str );\n        }\n      } );\n\n    bindings.add( bindingFactory.createBinding( slaveInstanceTypes, \"children\", XUL_SLAVE_INSTANCE_TYPE, \"elements\" ) );\n    bindingFactory.createBinding( XUL_SLAVE_INSTANCE_TYPE, \"selectedIndex\", this, \"selectedSlaveInstanceType\",\n      new BindingConvertor<Integer, String>() {\n        public String sourceToTarget( final Integer index ) {\n          if ( index == -1 ) {\n            return null;\n          }\n          return slaveInstanceTypes.get( index );\n        }\n\n        public Integer targetToSource( final String str ) {\n          return slaveInstanceTypes.indexOf( str );\n        }\n      } );\n\n    bindings.add( bindingFactory.createBinding( releases, \"children\", XUL_EMR_RELEASE, \"elements\" ) );\n    bindingFactory.createBinding( XUL_EMR_RELEASE, \"selectedIndex\", this, \"emrRelease\",\n      new BindingConvertor<Integer, String>() {\n        public String sourceToTarget( final Integer index ) {\n          if ( index == -1 ) {\n            return null;\n          }\n          return releases.get( index );\n        }\n\n        public Integer targetToSource( final String str ) {\n          return releases.indexOf( str );\n        }\n      } );\n\n    bindings.add( bindingFactory.createBinding( ec2Subnets, \"children\", XUL_EC2_SUBNET_ID, \"elements\" ) );\n    bindingFactory.createBinding( XUL_EC2_SUBNET_ID, \"selectedIndex\", this, \"selectedEc2SubnetId\",\n      new BindingConvertor<Integer, String>() {\n        public String sourceToTarget( final Integer index ) {\n          if ( index == -1 ) {\n            return null;\n          }\n          return ec2Subnets.get( index );\n        }\n\n        public Integer targetToSource( final String str ) {\n          return ec2Subnets.indexOf( str );\n        }\n      } );\n\n    bindingFactory.createBinding( XUL_JOBENTRY_NAME, 
\"value\", this, JOB_ENTRY_NAME );\n    bindingFactory.createBinding( XUL_JOBENTRY_HADOOPJOB_NAME, \"value\", this, HADOOP_JOB_NAME );\n    bindingFactory.createBinding( XUL_JOBENTRY_HADOOPJOB_FLOW_ID, \"value\", this, HADOOP_JOB_FLOW_ID );\n    bindingFactory.createBinding( XUL_ACCESS_KEY, \"value\", this, ACCESS_KEY );\n    bindingFactory.createBinding( XUL_SECRET_KEY, \"value\", this, SECRET_KEY );\n    bindingFactory.createBinding( XUL_SESSION_TOKEN, \"value\", this, SESSION_TOKEN );\n    bindingFactory.createBinding( XUL_S3_STAGING_DIRECTORY, \"value\", this, STAGING_DIR );\n    bindingFactory.createBinding( XUL_COMMAND_LINE_ARGUMENTS, \"value\", this, CMD_LINE_ARGS );\n    bindingFactory.createBinding( XUL_NUM_INSTANCES, \"value\", this, NUM_INSTANCES );\n    bindingFactory.createBinding( XUL_ALIVE, \"selected\", this, ALIVE );\n    bindingFactory.createBinding( XUL_BLOCKING, \"selected\", this, BLOCKING );\n    bindingFactory.createBinding( XUL_LOGGING_INTERVAL1, \"value\", this, LOGGING_INTERVAL );\n\n    bindingFactory.setBindingType( Binding.Type.ONE_WAY );\n    bindingFactory\n      .createBinding( XUL_ACCESS_KEY, \"value\", XUL_EMR_SETTINGS, DISABLED_FLAG, secretKeyIsEmpty( container ) );\n    bindingFactory\n      .createBinding( XUL_SECRET_KEY, \"value\", XUL_EMR_SETTINGS, DISABLED_FLAG, accessKeyIsEmpty( container ) );\n    bindingFactory\n      .createBinding( XUL_SESSION_TOKEN, \"value\", XUL_EMR_SETTINGS, DISABLED_FLAG, sessionTokenIsEmpty( container ) );\n  }\n\n  private static void disableAwsConnection( XulDomContainer container ) {\n    XulButton connectButton = ( XulButton ) container.getDocumentRoot().getElementById( XUL_EMR_SETTINGS );\n    connectButton.setDisabled( disableConnectButton( container ) );\n  }\n\n  public void updateClusterState() {\n    XulRadioGroup clusterModes\n      = ( XulRadioGroup ) getXulDomContainer().getDocumentRoot().getElementById( XUL_CLUSTER_MODE );\n    XulRadio newClusterMode = ( XulRadio ) 
clusterModes.getFirstChild();\n\n    XulDeck clusterModeTab = ( XulDeck ) getXulDomContainer().getDocumentRoot().getElementById( XUL_CLUSTER_TAB );\n    disableAwsConnection( getXulDomContainer() );\n\n    if ( newClusterMode.isSelected() ) {\n      this.runOnNewCluster = true;\n      clusterModeTab.setSelectedIndex( 0 );\n    } else {\n      fixFocusLostOnTab();\n      clusterModeTab.setSelectedIndex( 1 );\n      this.runOnNewCluster = false;\n    }\n  }\n\n  //need for mac os to avoid getting NullPointerException after switching to Existing tab\n  private void fixFocusLostOnTab() {\n    XulTextbox jobEntryName = ( XulTextbox ) getXulDomContainer().getDocumentRoot().getElementById( XUL_JOBENTRY_NAME );\n    jobEntryName.setFocus();\n  }\n\n  private static String getTextBoxValueById( XulDomContainer container, String xulTextBoxName ) {\n    ExtTextbox textbox = ( ExtTextbox ) container.getDocumentRoot().getElementById( xulTextBoxName );\n    return textbox.getValue();\n  }\n\n  private static boolean disableConnectButton( XulDomContainer container ) {\n    String secretKeyValue = getTextBoxValueById( container, XUL_SECRET_KEY );\n    String accessKeyValue = getTextBoxValueById( container, XUL_ACCESS_KEY );\n    XulRadio existingClusterMode = ( XulRadio ) container.getDocumentRoot().getElementById( XUL_EXISTING_CLUSTER_DECK );\n\n    if ( existingClusterMode.isSelected() ) {\n      return true;\n    }\n\n    if ( Strings.isNullOrEmpty( accessKeyValue ) || Strings.isNullOrEmpty( secretKeyValue ) ) {\n      return true;\n    }\n\n    return false;\n  }\n\n  private static BindingConvertor<String, Boolean> accessKeyIsEmpty( XulDomContainer container ) {\n    return new BindingConvertor<String, Boolean>() {\n      @Override\n      public Boolean sourceToTarget( String value ) {\n        return disableConnectButton( container );\n      }\n\n      @Override\n      public String targetToSource( Boolean value ) {\n        throw new AbstractMethodError( 
BOOLEAN_TO_STR_CONVERSION_ERROR );\n      }\n    };\n  }\n\n  private static BindingConvertor<String, Boolean> secretKeyIsEmpty( XulDomContainer container ) {\n    return new BindingConvertor<String, Boolean>() {\n      @Override\n      public Boolean sourceToTarget( String value ) {\n        return disableConnectButton( container );\n      }\n\n      @Override\n      public String targetToSource( Boolean value ) {\n        throw new AbstractMethodError( BOOLEAN_TO_STR_CONVERSION_ERROR );\n      }\n    };\n  }\n\n  private static BindingConvertor<String, Boolean> sessionTokenIsEmpty( XulDomContainer container ) {\n    return new BindingConvertor<String, Boolean>() {\n      @Override\n      public Boolean sourceToTarget( String value ) {\n        return disableConnectButton( container );\n      }\n\n      @Override\n      public String targetToSource( Boolean value ) {\n        throw new AbstractMethodError( BOOLEAN_TO_STR_CONVERSION_ERROR );\n      }\n    };\n  }\n\n  protected AbstractModelList<String> populateEc2Roles() {\n    String ec2Role = getJobEntry().getEc2Role();\n    if ( ec2Role != null ) {\n      ec2Roles.add( ec2Role );\n    }\n    return ec2Roles;\n  }\n\n  protected AbstractModelList<String> populateEmrRoles() {\n    String emrRole = getJobEntry().getEmrRole();\n    if ( emrRole != null ) {\n      emrRoles.add( emrRole );\n    }\n    return emrRoles;\n  }\n\n  protected AbstractModelList<String> populateMasterInstanceTypes() {\n    String masterInstanceType = getJobEntry().getMasterInstanceType();\n    if ( masterInstanceType != null ) {\n      masterInstanceTypes.add( masterInstanceType );\n    }\n    return masterInstanceTypes;\n  }\n\n  protected AbstractModelList<String> populateSlaveInstanceTypes() {\n    String slaveInstanceType = getJobEntry().getSlaveInstanceType();\n    if ( slaveInstanceType != null ) {\n      slaveInstanceTypes.add( slaveInstanceType );\n    }\n    return slaveInstanceTypes;\n  }\n\n  protected void 
setRolesFromAmazonAccount( AimClient amiClient ) throws Exception {\n    setEc2RolesFromAmazonAccount( amiClient );\n    setEmrRolesFromAmazonAccount( amiClient );\n  }\n\n  @VisibleForTesting\n  protected void setEc2RolesFromAmazonAccount( AimClient amiClient ) {\n    AbstractModelList<String> ec2List;\n    ec2List = amiClient.getEc2RolesFromAmazonAccount();\n\n    if ( ec2List.isEmpty() ) {\n      ec2List.add( EC2_DEFAULT_ROLE );\n    }\n\n    ec2Roles.clear();\n    ec2Roles.addAll( ec2List );\n  }\n\n  @VisibleForTesting\n  protected void setEmrRolesFromAmazonAccount( AimClient amiClient ) {\n    AbstractModelList<String> emrList;\n    emrList = amiClient.getEmrRolesFromAmazonAccount();\n\n    if ( emrList.isEmpty() ) {\n      emrList.add( EMR_DEFAULT_ROLE );\n    }\n\n    emrRoles.clear();\n    emrRoles.addAll( emrList );\n  }\n\n  protected List<String> populateInstanceTypesForSelectedRegion( PricingClient pricingClient ) throws Exception {\n\n    List<String> instanceTypes = null;\n\n    try {\n      instanceTypes = pricingClient.populateInstanceTypesForSelectedRegion();\n    } catch ( IOException e ) {\n      e.printStackTrace();\n      showErrorDialog(\n        BaseMessages.getString( PKG, \"AbstractAmazonJobExecutorController.JobEntry.Instance.error.title\" ),\n        e.getMessage() );\n    }\n\n    masterInstanceTypes.clear();\n    slaveInstanceTypes.clear();\n\n    if ( instanceTypes != null ) {\n      masterInstanceTypes.addAll( instanceTypes );\n      slaveInstanceTypes.addAll( instanceTypes );\n    }\n\n    return instanceTypes;\n  }\n\n  protected AbstractModelList<String> populateRegions() {\n    regions.clear();\n    regions\n      = Arrays.stream( AmazonRegion.values() ).map( v -> v.getHumanReadableRegion() )\n      .collect( Collectors.toCollection( AbstractModelList<String>::new ) );\n\n    String region = getJobEntry().getRegion();\n\n    if ( region == null && regions.size() > 0 ) {\n      getJobEntry().setRegion( regions.get( 0 ) );\n    
}\n\n    return regions;\n  }\n\n  protected AbstractModelList<String> populateReleases() {\n    releases.clear();\n    releases\n      = Arrays.stream( AmazonEmrReleases.values() ).map( v -> v.getEmrRelease() )\n      .collect( Collectors.toCollection( AbstractModelList<String>::new ) );\n\n    String emrRelease = getJobEntry().getEmrRelease();\n\n    if ( emrRelease != null && !releases.contains( emrRelease ) ) {\n      releases.add( 0, emrRelease );\n    }\n\n    if ( emrRelease == null && releases.size() > 0 ) {\n      getJobEntry().setEmrRelease( releases.get( 0 ) );\n    }\n    return releases;\n  }\n\n  public AbstractModelList<String> getReleases() {\n    return releases;\n  }\n\n  private XulMenuList<String> getXulMenu( String elementMenuId ) {\n    return ( XulMenuList<String> ) getXulDomContainer().getDocumentRoot().getElementById( elementMenuId );\n  }\n\n  private void setXulMenuDisabled( String elementMenuId, boolean isDisable ) {\n    XulMenuList<String> xulMenu = getXulMenu( elementMenuId );\n    xulMenu.setDisabled( isDisable );\n  }\n\n  protected void setXulMenusDisabled( boolean isDisable ) {\n    Arrays.stream( XUL_EMR_MENU_ID_ARRAY ).forEach( ( e ) -> setXulMenuDisabled( e, isDisable ) );\n  }\n\n  protected void setSelectedItemForEachMenu() {\n    getXulMenu( XUL_EC2_ROLE ).setSelectedItem( getJobEntry().getEc2Role() );\n    getXulMenu( XUL_EMR_ROLE ).setSelectedItem( getJobEntry().getEmrRole() );\n    getXulMenu( XUL_MASTER_INSTANCE_TYPE ).setSelectedItem( getJobEntry().getMasterInstanceType() );\n    getXulMenu( XUL_SLAVE_INSTANCE_TYPE ).setSelectedItem( getJobEntry().getSlaveInstanceType() );\n  }\n\n  private List<String> initRolesAndTypes( String accessKey, String secretKey, String sessionToken ) throws Exception {\n\n    ClientFactoriesManager manager = ClientFactoriesManager.getInstance();\n\n    AimClient aimClient = manager\n      .createClient( accessKey, secretKey, sessionToken, getJobEntry().getRegion(), ClientType.AIM );\n\n    
setRolesFromAmazonAccount( aimClient );\n\n    PricingClient pricingClient = manager\n      .createClient( accessKey, secretKey, sessionToken, getJobEntry().getRegion(), ClientType.PRICING );\n\n    return populateInstanceTypesForSelectedRegion( pricingClient );\n  }\n\n  public void getEmrSettings() {\n\n    ExtTextbox accessKeyBox = ( ExtTextbox ) getXulDomContainer().getDocumentRoot().getElementById( XUL_ACCESS_KEY );\n\n    ExtTextbox secretKeyBox = ( ExtTextbox ) getXulDomContainer().getDocumentRoot().getElementById( XUL_SECRET_KEY );\n\n    ExtTextbox sessionTokenBox = ( ExtTextbox ) getXulDomContainer().getDocumentRoot().getElementById( XUL_SESSION_TOKEN );\n\n    XulButton connectButton = ( XulButton ) getXulDomContainer().getDocumentRoot().getElementById( XUL_EMR_SETTINGS );\n\n    connectButton.setDisabled( true );\n\n    String ec2Role = getJobEntry().getEc2Role();\n    String emrRole = getJobEntry().getEmrRole();\n    String masterSelectedInstanceType = getJobEntry().getMasterInstanceType();\n    String slaveSelectedInstanceType = getJobEntry().getSlaveInstanceType();\n    String errorMessage;\n\n    try {\n\n      List<String> instanceTypes = initRolesAndTypes( accessKeyBox.getValue(), secretKeyBox.getValue(), sessionTokenBox.getValue() );\n\n      if ( ec2Role != null && ec2Roles.contains( ec2Role ) ) {\n        this.ec2Role = ec2Role;\n      }\n\n      if ( emrRole != null && emrRoles.contains( emrRole ) ) {\n        this.emrRole = emrRole;\n      }\n\n      if ( masterSelectedInstanceType != null && instanceTypes.contains( masterSelectedInstanceType ) ) {\n        this.masterInstanceType = masterSelectedInstanceType;\n      }\n\n      if ( slaveSelectedInstanceType != null && instanceTypes.contains( slaveSelectedInstanceType ) ) {\n        this.slaveInstanceType = slaveSelectedInstanceType;\n      }\n\n      setXulMenusDisabled( false );\n\n      XulTextbox numInstances = ( ExtTextbox ) container.getDocumentRoot().getElementById( XUL_NUM_INSTANCES 
);\n      numInstances.setDisabled( false );\n\n      setSelectedItemForEachMenu();\n\n      // Load subnets after successful connection\n      loadSubnetsInternal( accessKeyBox.getValue(), secretKeyBox.getValue(), sessionTokenBox.getValue() );\n\n    } catch ( Exception e ) {\n      e.printStackTrace();\n      errorMessage = e.getMessage() == null ? ExceptionUtils.getStackTrace( e ) : e.getMessage();\n      showErrorDialog(\n        BaseMessages.getString( PKG, \"AbstractAmazonJobExecutorController.JobEntry.Connection.error.title\" ),\n        errorMessage );\n    } finally {\n      connectButton.setDisabled( false );\n    }\n  }\n\n  /**\n   * Internal method to load subnets. Called automatically after successful\n   * connection.\n   */\n  private void loadSubnetsInternal( String accessKey, String secretKey, String sessionToken ) {\n    try {\n      String region = getJobEntry().getRegion();\n\n      if ( StringUtil.isEmpty( accessKey ) || StringUtil.isEmpty( secretKey ) || StringUtil.isEmpty( region ) ) {\n        return; // Silently return if credentials are incomplete\n      }\n\n      ClientFactoriesManager manager = ClientFactoriesManager.getInstance();\n      Ec2Client ec2Client = manager.createClient( accessKey, secretKey, sessionToken, region, ClientType.EC2 );\n\n      List<Ec2Client.SubnetInfo> subnets = ec2Client.getAvailableSubnets();\n\n      ec2Subnets.clear();\n      if ( subnets != null && !subnets.isEmpty() ) {\n        for ( Ec2Client.SubnetInfo subnet : subnets ) {\n          ec2Subnets.add( subnet.getDisplayString() );\n        }\n\n        // If there's a previously selected subnet, try to select it\n        String currentSubnet = getJobEntry().getEc2SubnetId();\n        if ( currentSubnet != null && !currentSubnet.isEmpty() ) {\n          // Try to find matching subnet by ID\n          for ( Ec2Client.SubnetInfo subnet : subnets ) {\n            if ( currentSubnet.equals( subnet.getSubnetId() )\n              || ec2Subnets.contains( 
currentSubnet ) ) {\n              this.ec2SubnetId = currentSubnet;\n              break;\n            }\n          }\n        }\n      }\n\n    } catch ( Exception e ) {\n      // Log error but don't show dialog to avoid interrupting the connection flow\n      e.printStackTrace();\n    }\n  }\n\n  public AbstractModelList<String> getEc2Roles() {\n    return ec2Roles;\n  }\n\n  public AbstractModelList<String> getEmrRoles() {\n    return emrRoles;\n  }\n\n  public AbstractModelList<String> getMasterInstanceTypes() {\n    return masterInstanceTypes;\n  }\n\n  public AbstractModelList<String> getSlaveInstanceTypes() {\n    return slaveInstanceTypes;\n  }\n\n  public AbstractModelList<String> getRegions() {\n    return regions;\n  }\n\n  public AbstractModelList<String> getEc2Subnets() {\n    return ec2Subnets;\n  }\n\n  public void setBindings( List<Binding> bindings ) {\n    this.bindings = bindings;\n  }\n\n  public void accept() {\n    syncModel( Spoon.getInstance().getExecutionBowl() );\n\n    String validationErrors = buildValidationErrorMessages();\n\n    if ( !StringUtil.isEmpty( validationErrors ) ) {\n      openErrorDialog( BaseMessages.getString( PKG, \"Dialog.Error\" ), validationErrors );\n      return;\n    }\n\n    configureJobEntry();\n\n    cancel();\n  }\n\n  protected void syncModel( Bowl bowl ) {\n    XulMenuList<String> tempMenu\n      = ( XulMenuList<String> ) getXulDomContainer().getDocumentRoot().getElementById( XUL_REGION );\n    this.region = tempMenu.getValue();\n    tempMenu = ( XulMenuList<String> ) getXulDomContainer().getDocumentRoot().getElementById( XUL_EC2_ROLE );\n    this.ec2Role = tempMenu.getValue();\n    tempMenu = ( XulMenuList<String> ) getXulDomContainer().getDocumentRoot().getElementById( XUL_EMR_ROLE );\n    this.emrRole = tempMenu.getValue();\n    tempMenu = ( XulMenuList<String> ) getXulDomContainer().getDocumentRoot().getElementById( XUL_MASTER_INSTANCE_TYPE );\n    this.masterInstanceType = tempMenu.getValue();\n    
tempMenu = ( XulMenuList<String> ) getXulDomContainer().getDocumentRoot().getElementById( XUL_SLAVE_INSTANCE_TYPE );\n    this.slaveInstanceType = tempMenu.getValue();\n    tempMenu = ( XulMenuList<String> ) getXulDomContainer().getDocumentRoot().getElementById( XUL_EMR_RELEASE );\n    this.emrRelease = tempMenu.getValue();\n    ExtTextbox tempBox\n      = ( ExtTextbox ) getXulDomContainer().getDocumentRoot().getElementById( XUL_JOBENTRY_HADOOPJOB_NAME );\n    this.hadoopJobName = ( ( Text ) tempBox.getTextControl() ).getText();\n    tempBox = ( ExtTextbox ) getXulDomContainer().getDocumentRoot().getElementById( XUL_ACCESS_KEY );\n    this.accessKey = ( ( Text ) tempBox.getTextControl() ).getText();\n    tempBox = ( ExtTextbox ) getXulDomContainer().getDocumentRoot().getElementById( XUL_SECRET_KEY );\n    this.secretKey = ( ( Text ) tempBox.getTextControl() ).getText();\n    tempBox = ( ExtTextbox ) getXulDomContainer().getDocumentRoot().getElementById( XUL_SESSION_TOKEN );\n    this.sessionToken = ( ( Text ) tempBox.getTextControl() ).getText();\n    tempBox = ( ExtTextbox ) getXulDomContainer().getDocumentRoot().getElementById( XUL_JOBENTRY_HADOOPJOB_FLOW_ID );\n    this.hadoopJobFlowId = ( ( Text ) tempBox.getTextControl() ).getText();\n    tempBox = ( ExtTextbox ) getXulDomContainer().getDocumentRoot().getElementById( XUL_S3_STAGING_DIRECTORY );\n    this.stagingDir = ( ( Text ) tempBox.getTextControl() ).getText();\n    try {\n      this.stagingDirFile = resolveFile( bowl, this.stagingDir );\n    } catch ( Exception e ) {\n      this.stagingDirFile = null;\n    }\n\n    tempBox = ( ExtTextbox ) getXulDomContainer().getDocumentRoot().getElementById( XUL_COMMAND_LINE_ARGUMENTS );\n    this.commandLineArgs = ( ( Text ) tempBox.getTextControl() ).getText();\n    tempBox = ( ExtTextbox ) getXulDomContainer().getDocumentRoot().getElementById( XUL_NUM_INSTANCES );\n    this.numInstances = ( ( Text ) tempBox.getTextControl() ).getText();\n    tempBox = ( ExtTextbox ) 
getXulDomContainer().getDocumentRoot().getElementById( XUL_LOGGING_INTERVAL1 );\n    this.loggingInterval = ( ( Text ) tempBox.getTextControl() ).getText();\n  }\n\n  public List<String> getValidationWarnings() {\n    List<String> warnings = new ArrayList<String>();\n\n    if ( StringUtil.isEmpty( getJobEntryName() ) ) {\n      warnings.add( BaseMessages.getString( PKG, \"AbstractAmazonJobExecutorController.JobEntryName.Error\" ) );\n    }\n    if ( StringUtil.isEmpty( getAccessKey() ) ) {\n      warnings.add( BaseMessages.getString( PKG, \"AbstractAmazonJobExecutorController.AccessKey.Error\" ) );\n    }\n    if ( StringUtil.isEmpty( getSecretKey() ) ) {\n      warnings.add( BaseMessages.getString( PKG, \"AbstractAmazonJobExecutorController.SecretKey.Error\" ) );\n    }\n    if ( StringUtil.isEmpty( getRegion() ) ) {\n      warnings.add( BaseMessages.getString( PKG, \"AbstractAmazonJobExecutorController.Region.Error\" ) );\n    }\n\n    warnings.addAll( collectClusterWarnings() );\n\n    if ( StringUtil.isEmpty( getHadoopJobName() ) ) {\n      warnings.add( BaseMessages.getString( PKG, \"AbstractAmazonJobExecutorController.JobFlowName.Error\" ) );\n    }\n\n    String s3Protocol = S3Client.SCHEME + \"://\";\n    String sdir = getVariableSpace().environmentSubstitute( stagingDir );\n    if ( StringUtil.isEmpty( getStagingDir() ) ) {\n      warnings.add( BaseMessages.getString( PKG, \"AbstractAmazonJobExecutorController.StagingDir.Error\" ) );\n    } else if ( !sdir.startsWith( s3Protocol ) ) {\n      warnings.add( BaseMessages.getString( PKG, \"AbstractAmazonJobExecutorController.StagingDir.Error\" ) );\n    }\n\n    return warnings;\n  }\n\n  private List<String> collectClusterWarnings() {\n\n    List<String> newClusterWarnings = new ArrayList<>();\n    List<String> existingClusterWarnings = new ArrayList<>();\n\n    if ( StringUtil.isEmpty( getEc2Role() ) ) {\n      newClusterWarnings.add( BaseMessages.getString( PKG, 
\"AbstractAmazonJobExecutorController.Ec2Role.Error\" ) );\n    }\n    if ( StringUtil.isEmpty( getEmrRole() ) ) {\n      newClusterWarnings.add( BaseMessages.getString( PKG, \"AbstractAmazonJobExecutorController.EmrRole.Error\" ) );\n    }\n    if ( StringUtil.isEmpty( getMasterInstanceType() ) ) {\n      newClusterWarnings\n        .add( BaseMessages.getString( PKG, \"AbstractAmazonJobExecutorController.MasterInstanceType.Error\" ) );\n    }\n    if ( StringUtil.isEmpty( getSlaveInstanceType() ) ) {\n      newClusterWarnings\n        .add( BaseMessages.getString( PKG, \"AbstractAmazonJobExecutorController.SlaveInstanceType.Error\" ) );\n    }\n    if ( StringUtil.isEmpty( getEmrRelease() ) ) {\n      newClusterWarnings.add( BaseMessages.getString( PKG, \"AbstractAmazonJobExecutorController.EmrRelease.Error\" ) );\n    }\n\n    if ( StringUtil.isEmpty( getHadoopJobFlowId() ) ) {\n      existingClusterWarnings\n        .add( BaseMessages.getString( PKG, \"AbstractAmazonJobExecutorController.JobFlowId.Error\" ) );\n    }\n\n    if ( isRunOnNewCluster() ) {\n      return newClusterWarnings;\n    }\n\n    return existingClusterWarnings;\n  }\n\n  protected String buildValidationErrorMessages() {\n    StringBuilder sb = new StringBuilder();\n    List<String> warnings = getValidationWarnings();\n\n    if ( !warnings.isEmpty() ) {\n      for ( String warning : warnings ) {\n        sb.append( warning ).append( \"\\n\" );\n      }\n    }\n\n    return sb.toString();\n  }\n\n  protected void configureJobEntry() {\n    // common/simple\n    getJobEntry().setName( getJobEntryName() );\n    getJobEntry().setHadoopJobName( getHadoopJobName() );\n    getJobEntry().setHadoopJobFlowId( getHadoopJobFlowId() );\n    getJobEntry().setAccessKey( getAccessKey() );\n    getJobEntry().setSecretKey( getSecretKey() );\n    getJobEntry().setSessionToken( getSessionToken() );\n    getJobEntry().setStagingDir( getStagingDir() );\n    getJobEntry().setNumInstances( getNumInstances() );\n    
getJobEntry().setMasterInstanceType( getMasterInstanceType() );\n    getJobEntry().setSlaveInstanceType( getSlaveInstanceType() );\n    getJobEntry().setRegion( getRegion() );\n    getJobEntry().setEmrRelease( getEmrRelease() );\n    getJobEntry().setEc2Role( getEc2Role() );\n    getJobEntry().setEmrRole( getEmrRole() );\n    getJobEntry().setEc2SubnetId( extractSubnetId( getEc2SubnetId() ) );\n    getJobEntry().setCmdLineArgs( getCommandLineArgs() );\n    getJobEntry().setAlive( isAlive() );\n    getJobEntry().setRunOnNewCluster( isRunOnNewCluster() );\n    getJobEntry().setBlocking( getBlocking() );\n    getJobEntry().setLoggingInterval( getLoggingInterval() );\n\n    getJobEntry().setChanged();\n  }\n\n  public String getSelectedEc2Role() {\n    return this.ec2Role;\n  }\n\n  public String getSelectedEmrRole() {\n    return this.emrRole;\n  }\n\n  public String getSelectedMasterInstanceType() {\n    return this.masterInstanceType;\n  }\n\n  public String getSelectedSlaveInstanceType() {\n    return this.slaveInstanceType;\n  }\n\n  public void setSelectedEc2Role( String selectedEc2Role ) {\n    if ( !suppressEventHandling ) {\n\n      if ( this.ec2Role == null && ec2Roles.contains( EC2_DEFAULT_ROLE ) ) {\n        selectedEc2Role = EC2_DEFAULT_ROLE;\n      }\n\n      suppressEventHandling = true;\n      try {\n        firePropertyChange( \"selectedEc2Role\", this.ec2Role, selectedEc2Role );\n        this.ec2Role = selectedEc2Role;\n      } finally {\n        suppressEventHandling = false;\n      }\n    }\n  }\n\n  public void setSelectedEmrRole( String selectedEmrRole ) {\n    if ( !suppressEventHandling ) {\n\n      if ( this.emrRole == null && emrRoles.contains( EMR_DEFAULT_ROLE ) ) {\n        selectedEmrRole = EMR_DEFAULT_ROLE;\n      }\n\n      suppressEventHandling = true;\n      try {\n        firePropertyChange( \"selectedEmrRole\", this.emrRole, selectedEmrRole );\n        this.emrRole = selectedEmrRole;\n      } finally {\n        suppressEventHandling = 
false;\n      }\n    }\n  }\n\n  public void setSelectedMasterInstanceType( String selectedMasterInstanceType ) {\n\n    if ( !suppressEventHandling ) {\n\n      suppressEventHandling = true;\n      try {\n        firePropertyChange( \"selectedMasterInstanceType\", this.masterInstanceType, selectedMasterInstanceType );\n        this.masterInstanceType = selectedMasterInstanceType;\n      } finally {\n        suppressEventHandling = false;\n      }\n    }\n  }\n\n  public void setSelectedSlaveInstanceType( String selectedSlaveInstanceType ) {\n\n    if ( !suppressEventHandling ) {\n\n      suppressEventHandling = true;\n      try {\n        firePropertyChange( \"selectedSlaveInstanceType\", this.slaveInstanceType, selectedSlaveInstanceType );\n        this.slaveInstanceType = selectedSlaveInstanceType;\n      } finally {\n        suppressEventHandling = false;\n      }\n    }\n  }\n\n  public String getSelectedEc2SubnetId() {\n    return this.ec2SubnetId;\n  }\n\n  public void setSelectedEc2SubnetId( String selectedEc2SubnetId ) {\n    if ( !suppressEventHandling ) {\n      suppressEventHandling = true;\n      try {\n        firePropertyChange( \"selectedEc2SubnetId\", this.ec2SubnetId, selectedEc2SubnetId );\n        this.ec2SubnetId = selectedEc2SubnetId;\n      } finally {\n        suppressEventHandling = false;\n      }\n    }\n  }\n\n  public String getSelectedRegion() {\n    return getRegion();\n  }\n\n  public void setSelectedRegion( String selectedRegion ) {\n\n    if ( !suppressEventHandling ) {\n\n      if ( !selectedRegion.equals( this.region ) ) {\n        setXulMenusDisabled( true );\n\n        ec2Roles.clear();\n        emrRoles.clear();\n        masterInstanceTypes.clear();\n        slaveInstanceTypes.clear();\n      }\n\n      this.region = selectedRegion;\n\n      getJobEntry().setRegion( this.region );\n\n      suppressEventHandling = true;\n      try {\n        firePropertyChange( \"selectedRegion\", null, this.region );\n      } finally {\n       
 suppressEventHandling = false;\n      }\n    }\n  }\n\n  public String getEc2SubnetId() {\n    return this.ec2SubnetId;\n  }\n\n  public void setEc2SubnetId( String ec2SubnetId ) {\n    String previousValue = this.ec2SubnetId;\n    this.ec2SubnetId = ec2SubnetId;\n    firePropertyChange( \"ec2SubnetId\", previousValue, ec2SubnetId );\n  }\n\n  /**\n   * Extract subnet ID from the display string format: \"name (subnet-xxxxx) -\n   * AZ: xx - CIDR: xx\" If the string is already a subnet ID or doesn't match\n   * the format, return as-is.\n   */\n  private String extractSubnetId( String displayString ) {\n    if ( displayString == null || displayString.isEmpty() ) {\n      return \"\";\n    }\n\n    // If it's already a subnet-id format, return as-is\n    if ( displayString.startsWith( \"subnet-\" ) && !displayString.contains( \"(\" ) ) {\n      return displayString;\n    }\n\n    // Try to extract from display format: \"name (subnet-xxxxx) - AZ: xx - CIDR: xx\"\n    int startIdx = displayString.indexOf( \"(subnet-\" );\n    if ( startIdx != -1 ) {\n      int endIdx = displayString.indexOf( \")\", startIdx );\n      if ( endIdx != -1 ) {\n        return displayString.substring( startIdx + 1, endIdx );\n      }\n    }\n\n    // If no pattern matched, return original (might be a variable)\n    return displayString;\n  }\n\n  protected void initializeEc2RoleSelection() {\n    @SuppressWarnings(\"unchecked\")\n    XulMenuList<String> ec2RoleMenu = getXulMenu( XUL_EC2_ROLE );\n    String selectedEc2Role = getJobEntry().getEc2Role();\n    ec2RoleMenu.setSelectedItem( selectedEc2Role );\n  }\n\n  protected void initializeEmrRoleSelection() {\n    @SuppressWarnings(\"unchecked\")\n    XulMenuList<String> emrRoleMenu = getXulMenu( XUL_EMR_ROLE );\n    String selectedEmrRole = getJobEntry().getEmrRole();\n    emrRoleMenu.setSelectedItem( selectedEmrRole );\n  }\n\n  protected void initializeMasterInstanceSelection() {\n    @SuppressWarnings(\"unchecked\")\n    
XulMenuList<String> namedClusterMenu = getXulMenu( XUL_MASTER_INSTANCE_TYPE );\n    String selectedMasterInstanceType = getJobEntry().getMasterInstanceType();\n    namedClusterMenu.setSelectedItem( selectedMasterInstanceType );\n  }\n\n  protected void initializeSlaveInstanceSelection() {\n    @SuppressWarnings(\"unchecked\")\n    XulMenuList<String> namedClusterMenu = getXulMenu( XUL_SLAVE_INSTANCE_TYPE );\n    String selectedSlaveInstanceType = getJobEntry().getSlaveInstanceType();\n    namedClusterMenu.setSelectedItem( selectedSlaveInstanceType );\n  }\n\n  protected void initializeRegionSelection() {\n    @SuppressWarnings(\"unchecked\")\n    XulMenuList<String> namedClusterMenu = getXulMenu( XUL_REGION );\n    String selectedRegion = getJobEntry().getRegion();\n    namedClusterMenu.setSelectedItem( selectedRegion );\n  }\n\n  protected void initializeReleaseSelection() {\n    @SuppressWarnings(\"unchecked\")\n    XulMenuList<String> namedClusterMenu = getXulMenu( XUL_EMR_RELEASE );\n    String selectedRelease = getJobEntry().getEmrRelease();\n    namedClusterMenu.setSelectedItem( selectedRelease );\n  }\n\n  protected void initializeEc2SubnetSelection() {\n    String savedSubnetId = getJobEntry().getEc2SubnetId();\n    if ( savedSubnetId != null && !savedSubnetId.isEmpty() ) {\n      this.ec2SubnetId = savedSubnetId;\n    }\n  }\n\n  protected void initializeClusterSelection() {\n    XulRadio newCluster = ( XulRadio ) container.getDocumentRoot().getElementById( XUL_NEW_CLUSTER_DECK );\n    XulRadio existingCluster = ( XulRadio ) container.getDocumentRoot().getElementById( XUL_EXISTING_CLUSTER_DECK );\n\n    newCluster.setSelected( this.runOnNewCluster );\n    existingCluster.setSelected( !this.runOnNewCluster );\n\n    updateClusterState();\n  }\n\n  protected void beforeInit() {\n\n    suppressEventHandling = true;\n\n    if ( getJobEntry() != null ) {\n      setName( getJobEntry().getName() );\n      setJobEntryName( getJobEntry().getName() );\n      
setHadoopJobName( getJobEntry().getHadoopJobName() );\n      setHadoopJobFlowId( getJobEntry().getHadoopJobFlowId() );\n      setAccessKey( getJobEntry().getAccessKey() );\n      setSecretKey( getJobEntry().getSecretKey() );\n      setSessionToken( getJobEntry().getSessionToken() );\n      setStagingDir( getJobEntry().getStagingDir() );\n      setNumInstances( getJobEntry().getNumInstances() );\n      setMasterInstanceType( getJobEntry().getMasterInstanceType() );\n      setSlaveInstanceType( getJobEntry().getSlaveInstanceType() );\n      setRegion( getJobEntry().getRegion() );\n      setEmrRelease( getJobEntry().getEmrRelease() );\n      setEc2Role( getJobEntry().getEc2Role() );\n      setEmrRole( getJobEntry().getEmrRole() );\n      setCommandLineArgs( getJobEntry().getCmdLineArgs() );\n      setRunOnNewCluster( getJobEntry().isRunOnNewCluster() );\n      setBlocking( getJobEntry().getBlocking() );\n      setAlive( getJobEntry().getAlive() );\n      setLoggingInterval( getJobEntry().getLoggingInterval() );\n    }\n  }\n\n  protected void afterInit() {\n\n    suppressEventHandling = false;\n\n    initializeRegionSelection();\n    initializeEc2RoleSelection();\n    initializeEmrRoleSelection();\n    initializeMasterInstanceSelection();\n    initializeSlaveInstanceSelection();\n    initializeReleaseSelection();\n    initializeEc2SubnetSelection();\n    initializeClusterSelection();\n  }\n\n  protected abstract String getDialogElementId();\n\n  /**\n   * Look up the dialog reference from the document.\n   *\n   * @return The dialog element referred to by {@link #getDialogElementId()}\n   */\n  protected SwtDialog getDialog() {\n    return ( SwtDialog ) getXulDomContainer().getDocumentRoot().getElementById( getDialogElementId() );\n  }\n\n  /**\n   * @return the shell for the currently visible dialog. 
This will be used to\n   * display additional dialogs/popups.\n   */\n  protected Shell getShell() {\n    return getDialog().getShell();\n  }\n\n  public JobEntryInterface open() {\n    XulDialog dialog = getDialog();\n    dialog.show();\n    return getJobEntry();\n  }\n\n  /**\n   * Show an error dialog with the title and message provided.\n   * Uses the existing XUL error dialog mechanism to avoid SWT focus management issues.\n   *\n   * @param title   Dialog window title\n   * @param message Dialog message\n   */\n  protected void showErrorDialog( String title, String message ) {\n    // Use the existing XUL error dialog instead of creating new SWT MessageBox\n    // This avoids focus management issues with disposed widgets\n    Shell shell = getShell();\n    if ( shell != null && !shell.isDisposed() ) {\n      shell.getDisplay().asyncExec( () -> {\n        if ( !shell.isDisposed() ) {\n          openErrorDialog( title, message );\n        }\n      } );\n    }\n  }\n\n  public void init() throws XulException, InvocationTargetException {\n\n    beforeInit();\n\n    try {\n      for ( Binding binding : bindings ) {\n        binding.fireSourceChanged();\n      }\n    } finally {\n      afterInit();\n    }\n  }\n\n  public void cancel() {\n    XulDialog xulDialog\n      = ( XulDialog ) getXulDomContainer().getDocumentRoot().getElementById( XUL_AMAZON_EMR_JOB_ENTRY_DIALOG );\n    Shell shell = ( Shell ) xulDialog.getRootObject();\n    if ( !shell.isDisposed() ) {\n      WindowProperty winprop = new WindowProperty( shell );\n      PropsUI.getInstance().setScreen( winprop );\n      ( ( Composite ) xulDialog.getManagedObject() ).dispose();\n      shell.dispose();\n    }\n  }\n\n  public void openErrorDialog( String title, String message ) {\n    XulDialog errorDialog\n      = ( XulDialog ) getXulDomContainer().getDocumentRoot().getElementById( XUL_AMAZON_EMR_ERROR_DIALOG );\n    errorDialog.setTitle( title );\n\n    XulTextbox errorMessage\n      = ( XulTextbox ) 
getXulDomContainer().getDocumentRoot().getElementById( XUL_AMAZON_EMR_ERROR_MESSAGE );\n    errorMessage.setValue( message );\n\n    errorDialog.show();\n  }\n\n  public void closeErrorDialog() {\n    XulDialog errorDialog\n      = ( XulDialog ) getXulDomContainer().getDocumentRoot().getElementById( XUL_AMAZON_EMR_ERROR_DIALOG );\n    errorDialog.hide();\n  }\n\n  protected VfsFileChooserDialog getFileChooserDialog() throws KettleFileException {\n    if ( this.fileChooserDialog == null ) {\n      FileObject initialFile = null;\n      Spoon spoon = Spoon.getInstance();\n      FileObject defaultInitialFile = KettleVFS.getInstance( spoon.getExecutionBowl() ).getFileObject( \"file:///c:/\" );\n\n      VfsFileChooserDialog fileChooserDialog\n        = Spoon.getInstance().getVfsFileChooserDialog( defaultInitialFile, initialFile );\n      this.fileChooserDialog = fileChooserDialog;\n    }\n    return this.fileChooserDialog;\n  }\n\n  protected FileSystemOptions getFileSystemOptions() throws FileSystemException {\n    FileSystemOptions opts = new FileSystemOptions();\n\n    if ( !Const.isEmpty( getAccessKey() ) || !Const.isEmpty( getSecretKey() ) ) {\n      // create a FileSystemOptions with user & password\n      StaticUserAuthenticator userAuthenticator\n        = new StaticUserAuthenticator( null, getVariableSpace().environmentSubstitute( getAccessKey() ),\n        getVariableSpace().environmentSubstitute( getSecretKey() ) );\n\n      DefaultFileSystemConfigBuilder.getInstance().setUserAuthenticator( opts, userAuthenticator );\n    }\n    return opts;\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri ) throws KettleException,\n    FileSystemException {\n    return browse( fileFilters, fileFilterNames, fileUri, new FileSystemOptions() );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, int fileDialogMode )\n    throws KettleException, FileSystemException {\n    return 
browse( fileFilters, fileFilterNames, fileUri, new FileSystemOptions(), fileDialogMode );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, FileSystemOptions opts )\n    throws KettleException, FileSystemException {\n    return browse( fileFilters, fileFilterNames, fileUri, opts, VfsFileChooserDialog.VFS_DIALOG_OPEN_DIRECTORY );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, FileSystemOptions opts,\n                            int fileDialogMode ) throws KettleException, FileSystemException {\n    return browse( fileFilters, fileFilterNames, fileUri, opts, fileDialogMode, false );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, FileSystemOptions opts,\n                            int fileDialogMode, boolean showFileScheme ) throws KettleException, FileSystemException {\n    getFileChooserHelper().setShowFileScheme( showFileScheme );\n    return getFileChooserHelper().browse( fileFilters, fileFilterNames, fileUri, opts, fileDialogMode );\n  }\n\n  public void browseS3StagingDir() throws KettleException, FileSystemException {\n    String[] fileFilters = new String[] {\"*.*\"};\n    String[] fileFilterNames = new String[] {\"All\"};\n\n    String stagingDirText = getVariableSpace().environmentSubstitute( stagingDir );\n    FileSystemOptions opts = getFileSystemOptions();\n\n    FileObject selectedFile = browse( fileFilters, fileFilterNames, stagingDirText, opts );\n\n    if ( selectedFile != null ) {\n      setStagingDir( selectedFile.getName().getURI() );\n    }\n  }\n\n  public VariableSpace getVariableSpace() {\n    if ( Spoon.getInstance().getActiveTransformation() != null ) {\n      return Spoon.getInstance().getActiveTransformation();\n    } else if ( Spoon.getInstance().getActiveJob() != null ) {\n      return Spoon.getInstance().getActiveJob();\n    } else {\n      return new Variables();\n    }\n  }\n\n  
@Override\n  public String getName() {\n    return \"jobEntryController\";\n  }\n\n  public String getJobEntryName() {\n    return jobEntryName;\n  }\n\n  public void setJobEntryName( String jobEntryName ) {\n    String previousVal = this.jobEntryName;\n    String newVal = jobEntryName;\n\n    this.jobEntryName = jobEntryName;\n    firePropertyChange( JOB_ENTRY_NAME, previousVal, newVal );\n  }\n\n  public String getHadoopJobName() {\n    return hadoopJobName;\n  }\n\n  public void setHadoopJobName( String hadoopJobName ) {\n    String previousVal = this.hadoopJobName;\n    String newVal = hadoopJobName;\n\n    this.hadoopJobName = hadoopJobName;\n    firePropertyChange( HADOOP_JOB_NAME, previousVal, newVal );\n  }\n\n  public String getHadoopJobFlowId() {\n    return hadoopJobFlowId;\n  }\n\n  public void setHadoopJobFlowId( String hadoopJobFlowId ) {\n    String previousVal = this.hadoopJobFlowId;\n    String newVal = hadoopJobFlowId;\n\n    this.hadoopJobFlowId = hadoopJobFlowId;\n    firePropertyChange( HADOOP_JOB_FLOW_ID, previousVal, newVal );\n  }\n\n  public String getAccessKey() {\n    return accessKey;\n  }\n\n  public void setAccessKey( String accessKey ) {\n    String previousVal = this.accessKey;\n    String newVal = accessKey;\n\n    this.accessKey = accessKey;\n    firePropertyChange( ACCESS_KEY, previousVal, newVal );\n  }\n\n  public String getSecretKey() {\n    return secretKey;\n  }\n\n  public void setSecretKey( String secretKey ) {\n    String previousVal = this.secretKey;\n    String newVal = secretKey;\n\n    this.secretKey = secretKey;\n    firePropertyChange( SECRET_KEY, previousVal, newVal );\n  }\n\n  public String getSessionToken() {\n    return sessionToken;\n  }\n\n  public void setSessionToken( String sessionToken ) {\n    String previousVal = this.sessionToken;\n    String newVal = sessionToken;\n\n    this.sessionToken = sessionToken;\n    firePropertyChange( SESSION_TOKEN, previousVal, newVal );\n  }\n\n  public String getRegion() 
{\n    return region;\n  }\n\n  public void setRegion( String region ) {\n    String previousVal = this.region;\n    String newVal = region;\n\n    this.region = region;\n    firePropertyChange( REGION, previousVal, newVal );\n  }\n\n  public String getStagingDir() {\n    return stagingDir;\n  }\n\n  public void setStagingDir( String stagingDir ) {\n    String previousVal = this.stagingDir;\n    String newVal = stagingDir;\n\n    this.stagingDir = stagingDir;\n    firePropertyChange( STAGING_DIR, previousVal, newVal );\n  }\n\n  public FileObject getStagingDirFile() {\n    return stagingDirFile;\n  }\n\n  public void setStagingDirFile( FileObject stagingDirFile ) {\n    FileObject previousVal = this.stagingDirFile;\n    FileObject newVal = stagingDirFile;\n\n    this.stagingDirFile = stagingDirFile;\n    firePropertyChange( STAGING_DIR_FILE, previousVal, newVal );\n  }\n\n  public String getJarUrl() {\n    return jarUrl;\n  }\n\n  public void setJarUrl( String jarUrl ) {\n    String previousVal = this.jarUrl;\n    String newVal = jarUrl;\n\n    this.jarUrl = jarUrl;\n    firePropertyChange( JAR_URL, previousVal, newVal );\n  }\n\n  public String getNumInstances() {\n    return numInstances;\n  }\n\n  public void setNumInstances( String numInstances ) {\n    String previousVal = this.numInstances;\n    String newVal = numInstances;\n\n    this.numInstances = numInstances;\n    firePropertyChange( NUM_INSTANCES, previousVal, newVal );\n  }\n\n  public String getMasterInstanceType() {\n    return masterInstanceType;\n  }\n\n  public void setMasterInstanceType( String masterInstanceType ) {\n    String previousVal = this.masterInstanceType;\n    String newVal = masterInstanceType;\n\n    this.masterInstanceType = masterInstanceType;\n    firePropertyChange( MASTER_INSTANCE_TYPE, previousVal, newVal );\n  }\n\n  public String getSlaveInstanceType() {\n    return slaveInstanceType;\n  }\n\n  public void setSlaveInstanceType( String slaveInstanceType ) {\n    String 
previousVal = this.slaveInstanceType;\n    String newVal = slaveInstanceType;\n\n    this.slaveInstanceType = slaveInstanceType;\n    firePropertyChange( SLAVE_INSTANCE_TYPE, previousVal, newVal );\n  }\n\n  public String getEc2Role() {\n    return ec2Role;\n  }\n\n  public void setEc2Role( String ec2Role ) {\n    String previousVal = this.ec2Role;\n    String newVal = ec2Role;\n\n    this.ec2Role = ec2Role;\n    firePropertyChange( EC2_ROLE, previousVal, newVal );\n  }\n\n  public String getEmrRole() {\n    return emrRole;\n  }\n\n  public void setEmrRole( String emrRole ) {\n    String previousVal = this.emrRole;\n    String newVal = emrRole;\n\n    this.emrRole = emrRole;\n    firePropertyChange( EMR_ROLE, previousVal, newVal );\n  }\n\n  public void invertBlocking() {\n    setBlocking( !getBlocking() );\n  }\n\n  public abstract AbstractAmazonJobEntry getJobEntry();\n\n  public abstract void setJobEntry( AbstractAmazonJobEntry jobEntry );\n\n  public String getCommandLineArgs() {\n    return commandLineArgs;\n  }\n\n  public void setCommandLineArgs( String commandLineArgs ) {\n    String previousVal = this.commandLineArgs;\n    String newVal = commandLineArgs;\n\n    this.commandLineArgs = commandLineArgs;\n\n    firePropertyChange( CMD_LINE_ARGS, previousVal, newVal );\n  }\n\n  public boolean isRunOnNewCluster() {\n    return runOnNewCluster;\n  }\n\n  public void setRunOnNewCluster( boolean selected ) {\n    boolean previousVal = this.runOnNewCluster;\n    boolean newVal = selected;\n\n    this.runOnNewCluster = selected;\n\n    firePropertyChange( RUN_ON_NEW_CLUSTER, previousVal, newVal );\n  }\n\n  public boolean getBlocking() {\n    return blocking;\n  }\n\n  public void setBlocking( boolean blocking ) {\n    boolean previousVal = this.blocking;\n    boolean newVal = blocking;\n\n    this.blocking = blocking;\n    firePropertyChange( BLOCKING, previousVal, newVal );\n  }\n\n  public String getLoggingInterval() {\n    return loggingInterval;\n  }\n\n  
public void setLoggingInterval( String loggingInterval ) {\n    String previousVal = this.loggingInterval;\n    String newVal = loggingInterval;\n\n    this.loggingInterval = loggingInterval;\n    firePropertyChange( LOGGING_INTERVAL, previousVal, newVal );\n  }\n\n  public String getEmrRelease() {\n    return emrRelease;\n  }\n\n  public void setEmrRelease( String emrRelease ) {\n\n    if ( !suppressEventHandling ) {\n      this.emrRelease = emrRelease;\n      getJobEntry().setEmrRelease( this.emrRelease );\n\n      suppressEventHandling = true;\n      try {\n        firePropertyChange( \"emrRelease\", null, this.emrRelease );\n      } finally {\n        suppressEventHandling = false;\n      }\n    }\n  }\n\n  public boolean isAlive() {\n    return alive;\n  }\n\n  public void setAlive( boolean alive ) {\n    boolean previousVal = this.alive;\n    this.alive = alive;\n    firePropertyChange( ALIVE, previousVal, alive );\n  }\n\n  public void invertAlive() {\n    setAlive( !isAlive() );\n  }\n\n  public FileObject resolveFile( Bowl bowl, String fileUri ) throws FileSystemException, KettleFileException {\n    VariableSpace vs = getVariableSpace();\n    FileSystemOptions opts = new FileSystemOptions();\n    DefaultFileSystemConfigBuilder.getInstance().setUserAuthenticator( opts,\n      new StaticUserAuthenticator( null, getAccessKey(), getSecretKey() ) );\n    FileObject file = KettleVFS.getInstance( bowl ).getFileObject( fileUri, vs, opts );\n    return file;\n  }\n\n  protected S3VfsFileChooserHelper getFileChooserHelper() throws KettleFileException, FileSystemException {\n    if ( helper == null ) {\n      XulDialog xulDialog\n        = ( XulDialog ) getXulDomContainer().getDocumentRoot().getElementById( XUL_AMAZON_EMR_JOB_ENTRY_DIALOG );\n      Shell shell = ( Shell ) xulDialog.getRootObject();\n\n      helper = new S3VfsFileChooserHelper( shell, getFileChooserDialog(), getVariableSpace(), getFileSystemOptions() );\n    }\n    return helper;\n  }\n\n  public void 
help() {\n    JobEntryInterface jobEntry = getJobEntry();\n    XulDialog xulDialog\n      = ( XulDialog ) getXulDomContainer().getDocumentRoot().getElementById( XUL_AMAZON_EMR_JOB_ENTRY_DIALOG );\n    Shell shell = ( Shell ) xulDialog.getRootObject();\n    PluginInterface plugin\n      = PluginRegistry.getInstance().findPluginWithId( JobEntryPluginType.class, jobEntry.getPluginId() );\n    HelpUtils.openHelpDialog( shell, plugin );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/AmazonEmrReleases.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon;\n\n\n/**\n * Created by Aliaksandr_Zhuk on 1/20/2018.\n */\npublic enum AmazonEmrReleases {\n  EMR_770( \"emr-7.7.0\" ),\n  EMR_700( \"emr-7.0.0\" ),\n  EMR_5360( \"emr-5.36.0\" ),\n  EMR_5110( \"emr-5.11.0\" ),\n  EMR_5100( \"emr-5.10.0\" ),\n  EMR_590( \"emr-5.9.0\" ),\n  EMR_580( \"emr-5.8.0\" ),\n  EMR_570( \"emr-5.7.0\" ),\n  EMR_560( \"emr-5.6.0\" ),\n  EMR_550( \"emr-5.5.0\" ),\n  EMR_540( \"emr-5.4.0\" ),\n  EMR_531( \"emr-5.3.1\" ),\n  EMR_530( \"emr-5.3.0\" ),\n  EMR_522( \"emr-5.2.2\" ),\n  EMR_521( \"emr-5.2.1\" ),\n  EMR_520( \"emr-5.2.0\" ),\n  EMR_510( \"emr-5.1.0\" ),\n  EMR_503( \"emr-5.0.3\" ),\n  EMR_500( \"emr-5.0.0\" ),\n  EMR_492( \"emr-4.9.2\" ),\n  EMR_491( \"emr-4.9.1\" ),\n  EMR_484( \"emr-4.8.4\" ),\n  EMR_483( \"emr-4.8.3\" ),\n  EMR_482( \"emr-4.8.2\" ),\n  EMR_480( \"emr-4.8.0\" ),\n  EMR_472( \"emr-4.7.2\" ),\n  EMR_471( \"emr-4.7.1\" ),\n  EMR_470( \"emr-4.7.0\" ),\n  EMR_460( \"emr-4.6.0\" ),\n  EMR_450( \"emr-4.5.0\" ),\n  EMR_440( \"emr-4.4.0\" ),\n  EMR_430( \"emr-4.3.0\" ),\n  EMR_420( \"emr-4.2.0\" ),\n  EMR_410( \"emr-4.1.0\" ),\n  EMR_400( \"emr-4.0.0\" );\n\n  private final String emrRelease;\n\n  AmazonEmrReleases( String emrRelease ) {\n    this.emrRelease = emrRelease;\n  }\n\n  public String getEmrRelease() {\n    return emrRelease;\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/AmazonRegion.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon;\n\n\n/**\n * Created by Aliaksandr_Zhuk on 1/11/2018.\n */\npublic enum AmazonRegion {\n\n  US_EAST_1( \"us-east-1\", \"N. Virginia\", \"US East\" ),\n  US_EAST_2( \"us-east-2\", \"Ohio\", \"US East\" ),\n  US_WEST_1( \"us-west-1\", \"N. California\", \"US West\" ),\n  US_WEST_2( \"us-west-2\", \"Oregon\", \"US West\" ),\n  AP_Mumbai( \"ap-south-1\", \"Mumbai\", \"Asia Pacific\" ),\n  AP_Seoul( \"ap-northeast-2\", \"Seoul\", \"Asia Pacific\" ),\n  AP_Singapore( \"ap-southeast-1\", \"Singapore\", \"Asia Pacific\" ),\n  AP_Sydney( \"ap-southeast-2\", \"Sydney\", \"Asia Pacific\" ),\n  AP_Tokyo( \"ap-northeast-1\", \"Tokyo\", \"Asia Pacific\" ),\n  CA_CENTRAL( \"ca-central-1\", \"Central\", \"Canada\" ),\n  EU_Frankfurt( \"eu-central-1\", \"Frankfurt\", \"EU\" ),\n  EU_Ireland( \"eu-west-1\", \"Ireland\", \"EU\" ),\n  EU_London( \"eu-west-2\", \"London\", \"EU\" ),\n  EU_Paris( \"eu-west-3\", \"Paris\", \"EU\" ),\n  SA_SaoPaulo( \"sa-east-1\", \"Sao Paulo\", \"South America\" ),\n  US_GovCloud( \"us-gov-west-1\", \"US\", \"AWS GovCloud\" );\n\n  private String regionId;\n  private String city;\n  private String region;\n\n  private static final AmazonRegion DEFAULT_REGION = AmazonRegion.US_EAST_1;\n\n  AmazonRegion( String regionId, String city, String region ) {\n    this.regionId = regionId;\n    this.city = city;\n    this.region = region;\n  }\n\n  public String getHumanReadableRegion() {\n    StringBuilder sb = new StringBuilder( this.region ).append( \" (\" ).append( this.city ).append( \")\" );\n    return 
sb.toString();\n  }\n\n  public static String extractRegionFromDescription( String humanReadableRegion ) {\n    for ( AmazonRegion region : AmazonRegion.values() ) {\n      if ( region.getHumanReadableRegion().equals( humanReadableRegion ) ) {\n        return region.regionId;\n      }\n    }\n    return DEFAULT_REGION.regionId;\n  }\n}\n"
  },
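The round-trip between `getHumanReadableRegion()` and `extractRegionFromDescription()` above (label lookup with a fallback to `US_EAST_1`) can be sketched standalone. This is a trimmed-down illustration with only two regions and hypothetical demo names, not the shipped enum:

```java
// Minimal sketch of AmazonRegion's description round-trip; only two regions
// are included for illustration.
public class RegionLookupDemo {

  enum Region {
    US_EAST_1( "us-east-1", "N. Virginia", "US East" ),
    EU_PARIS( "eu-west-3", "Paris", "EU" );

    final String regionId;
    final String city;
    final String region;

    Region( String regionId, String city, String region ) {
      this.regionId = regionId;
      this.city = city;
      this.region = region;
    }

    // Mirrors getHumanReadableRegion(): "Region (City)"
    String humanReadable() {
      return region + " (" + city + ")";
    }
  }

  // Mirrors extractRegionFromDescription(): match the label, else fall back
  // to the default region's id.
  static String extract( String humanReadableRegion ) {
    for ( Region r : Region.values() ) {
      if ( r.humanReadable().equals( humanReadableRegion ) ) {
        return r.regionId;
      }
    }
    return Region.US_EAST_1.regionId; // DEFAULT_REGION fallback
  }

  public static void main( String[] args ) {
    System.out.println( extract( "EU (Paris)" ) );   // eu-west-3
    System.out.println( extract( "nowhere" ) );      // us-east-1
  }
}
```

Note that an unrecognized label silently maps to the default region rather than failing, which is what the UI relies on when the combo box holds free text.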
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/InstanceType.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon;\n\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.stream.Collectors;\n\n/**\n * Created by Aliaksandr_Zhuk on 1/26/2018.\n */\npublic class InstanceType implements Comparable<InstanceType> {\n\n  private String type;\n  private String instanceFamily;\n\n  public InstanceType( String type, String instanceFamily ) {\n    this.type = type;\n    this.instanceFamily = instanceFamily;\n  }\n\n  public String getType() {\n    return type;\n  }\n\n  public void setType( String type ) {\n    this.type = type;\n  }\n\n  public String getInstanceFamily() {\n    return instanceFamily;\n  }\n\n  public void setInstanceFamily( String instanceFamily ) {\n    this.instanceFamily = instanceFamily;\n  }\n\n  public static String createDescription( InstanceType type ) {\n    return type.getType() + \" (\" + type.getInstanceFamily() + \")\";\n  }\n\n  public static String getTypeFromDescription( String description ) {\n    return description.trim().split( \"\\\\s+\" )[ 0 ];\n  }\n\n  @Override\n  public boolean equals( Object o ) {\n    if ( this == o ) {\n      return true;\n    }\n    if ( o == null || getClass() != o.getClass() ) {\n      return false;\n    }\n\n    InstanceType instanceType = (InstanceType) o;\n\n    return type != null ? type.equals( instanceType.type ) : instanceType.type == null;\n  }\n\n  @Override\n  public int hashCode() {\n    int result = 1;\n    result = 31 * result + ( ( type == null ) ? 
0 : type.hashCode() );\n    return result;\n  }\n\n  @Override\n  public int compareTo( InstanceType instanceType ) {\n    return extractInt( type ) - extractInt( instanceType.getType() );\n  }\n\n  private int extractInt( String type ) {\n    String number = type.split( \"\\\\.\" )[ 1 ].replaceAll( \"\\\\D\", \"\" );\n    return number.isEmpty() ? 0 : Integer.parseInt( number );\n  }\n\n  public static List<String> sortInstanceTypes( List<InstanceType> instances ) {\n\n    List<InstanceType> instanceGroup;\n    List<String> instanceGroupTypes;\n    List<String> sortedInstances = new ArrayList<>();\n    Collections.sort( instances, ( o1, o2 ) -> o1.getType().compareTo( o2.getType() ) );\n\n    while ( instances.size() > 0 ) {\n      instanceGroup = instances.stream()\n        .filter( e -> instances.get( 0 ).getType().split( \"\\\\.\" )[ 0 ].equals( e.getType().split( \"\\\\.\" )[ 0 ] ) )\n        .collect(\n          Collectors.toList() );\n\n      instances.removeAll( instanceGroup );\n      Collections.sort( instanceGroup );\n\n      instanceGroupTypes = instanceGroup.stream().map( e -> e.getType() ).collect( Collectors.toList() );\n\n      sortedInstances.addAll( instanceGroupTypes );\n    }\n    return sortedInstances;\n  }\n}\n"
  },
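The ordering used by `InstanceType.compareTo` keys on the number embedded in the size suffix after the first dot, treating a suffix with no digits as 0, so within a family "m4.xlarge" sorts before "m4.2xlarge". A minimal standalone sketch of that rank function over plain type strings (the demo class name is hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of InstanceType's size ordering: within one family (the part before
// the dot), sizes sort by the number embedded in the suffix, with no number
// treated as 0.
public class InstanceSortDemo {

  // Mirrors extractInt(): digits of the size suffix, or 0 when there are none.
  static int sizeRank( String type ) {
    String number = type.split( "\\." )[ 1 ].replaceAll( "\\D", "" );
    return number.isEmpty() ? 0 : Integer.parseInt( number );
  }

  static List<String> sortFamily( List<String> types ) {
    List<String> sorted = new ArrayList<>( types );
    sorted.sort( ( a, b ) -> sizeRank( a ) - sizeRank( b ) );
    return sorted;
  }

  public static void main( String[] args ) {
    List<String> out = sortFamily( Arrays.asList( "m4.4xlarge", "m4.xlarge", "m4.2xlarge" ) );
    System.out.println( out ); // [m4.xlarge, m4.2xlarge, m4.4xlarge]
  }
}
```

As in the original, a type without a dot (no size suffix) would throw from `split(...)[1]`; the shipped code assumes well-formed "family.size" names coming from the pricing API.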
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/AbstractClientFactory.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client;\n\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic abstract class AbstractClientFactory<T> {\n\n  public abstract T createClient( String accessKey, String secretKey, String sessionToken, String region );\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/AmazonClientCredentials.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client;\n\nimport com.amazonaws.auth.AWSCredentials;\nimport com.amazonaws.auth.BasicAWSCredentials;\nimport com.amazonaws.auth.BasicSessionCredentials;\nimport com.amazonaws.regions.RegionUtils;\nimport org.pentaho.amazon.AmazonRegion;\nimport org.pentaho.di.core.util.StringUtil;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic class AmazonClientCredentials {\n\n  private AWSCredentials credentials;\n  private String region;\n\n  public AmazonClientCredentials( String accessKey, String secretKey, String sessionToken, String region ) {\n    if ( !StringUtil.isEmpty( sessionToken ) ) {\n      credentials = new BasicSessionCredentials( accessKey, secretKey, sessionToken );\n    } else {\n      credentials = new BasicAWSCredentials( accessKey, secretKey );\n    }\n    this.region = extractRegion( region );\n  }\n\n  public AWSCredentials getAWSCredentials() {\n    return credentials;\n  }\n\n  public String getRegion() {\n    return region;\n  }\n\n  private String extractRegion( String region ) {\n    return RegionUtils.getRegion( AmazonRegion.extractRegionFromDescription( region ) ).getName();\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/ClientFactoriesManager.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client;\n\nimport org.pentaho.amazon.client.impl.AimClientFactory;\nimport org.pentaho.amazon.client.impl.Ec2ClientFactory;\nimport org.pentaho.amazon.client.impl.EmrClientFactory;\nimport org.pentaho.amazon.client.impl.PricingClientFactory;\nimport org.pentaho.amazon.client.impl.S3ClientFactory;\n\nimport java.util.HashMap;\nimport java.util.Map;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic class ClientFactoriesManager {\n\n  private Map<ClientType, AbstractClientFactory<?>> clientFactoryMap;\n  private static ClientFactoriesManager instance;\n\n  private ClientFactoriesManager() {\n    clientFactoryMap = new HashMap<>();\n    clientFactoryMap.put( ClientType.S3, new S3ClientFactory() );\n    clientFactoryMap.put( ClientType.EMR, new EmrClientFactory() );\n    clientFactoryMap.put( ClientType.AIM, new AimClientFactory() );\n    clientFactoryMap.put( ClientType.PRICING, new PricingClientFactory() );\n    clientFactoryMap.put( ClientType.EC2, new Ec2ClientFactory() );\n  }\n\n  public static synchronized ClientFactoriesManager getInstance() {\n    if ( instance == null ) {\n      instance = new ClientFactoriesManager();\n    }\n    return instance;\n  }\n\n  @SuppressWarnings( \"unchecked\" )\n  public <T> T createClient( String accessKey, String secretKey, String sessionToken, String region, ClientType clientType ) {\n    AbstractClientFactory<?> clientFactory = getClientFactory( clientType );\n    T amazonClient = (T) clientFactory.createClient( accessKey, secretKey, sessionToken, region );\n    return amazonClient;\n  }\n\n  private AbstractClientFactory<?> 
getClientFactory( ClientType clientType ) {\n    if ( clientFactoryMap.containsKey( clientType ) ) {\n      return clientFactoryMap.get( clientType );\n    }\n    return null;\n  }\n}\n"
  },
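ClientFactoriesManager above is an enum-keyed registry of factories behind a lazily created singleton. The same pattern can be sketched standalone with placeholder types (the `RegistryDemo`/`Factory` names and the string "clients" are hypothetical stand-ins for the AWS-backed factories):

```java
import java.util.EnumMap;
import java.util.Map;

// Sketch of the enum-keyed factory registry used by ClientFactoriesManager,
// with placeholder factories instead of the AWS SDK client builders.
public class RegistryDemo {

  enum ClientType { S3, EMR }

  interface Factory {
    String create( String region ); // stands in for createClient(...)
  }

  private final Map<ClientType, Factory> factories = new EnumMap<>( ClientType.class );

  RegistryDemo() {
    // Each ClientType maps to exactly one factory, registered once up front.
    factories.put( ClientType.S3, region -> "s3-client@" + region );
    factories.put( ClientType.EMR, region -> "emr-client@" + region );
  }

  String createClient( ClientType type, String region ) {
    Factory f = factories.get( type );
    return f == null ? null : f.create( region );
  }

  public static void main( String[] args ) {
    RegistryDemo registry = new RegistryDemo();
    System.out.println( registry.createClient( ClientType.EMR, "us-east-1" ) ); // emr-client@us-east-1
  }
}
```

Keying the map on the enum (here with `EnumMap`) keeps lookup O(1) and makes it a compile-time error to forget which client kinds exist, which is the design choice the original class is making.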
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/ClientType.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic enum ClientType {\n  S3, EMR, AIM, PRICING, EC2\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/api/AimClient.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.api;\n\nimport org.pentaho.ui.xul.util.AbstractModelList;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic interface AimClient {\n\n  AbstractModelList<String> getEc2RolesFromAmazonAccount();\n\n  AbstractModelList<String> getEmrRolesFromAmazonAccount();\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/api/Ec2Client.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.api;\n\nimport java.util.List;\n\n/**\n * EC2 Client interface for interacting with AWS EC2 services\n */\npublic interface Ec2Client {\n\n  /**\n   * Retrieves a list of available VPC subnets in the configured region\n   * \n   * @return List of subnet information containing subnet ID, name, VPC ID, availability zone, and CIDR block\n   */\n  List<SubnetInfo> getAvailableSubnets();\n\n  /**\n   * Represents information about an AWS VPC subnet\n   */\n  class SubnetInfo {\n    private final String subnetId;\n    private final String subnetName;\n    private final String vpcId;\n    private final String availabilityZone;\n    private final String cidrBlock;\n    private final String state;\n\n    public SubnetInfo( String subnetId, String subnetName, String vpcId, \n                      String availabilityZone, String cidrBlock, String state ) {\n      this.subnetId = subnetId;\n      this.subnetName = subnetName;\n      this.vpcId = vpcId;\n      this.availabilityZone = availabilityZone;\n      this.cidrBlock = cidrBlock;\n      this.state = state;\n    }\n\n    public String getSubnetId() {\n      return subnetId;\n    }\n\n    public String getSubnetName() {\n      return subnetName != null && !subnetName.isEmpty() ? 
subnetName : subnetId;\n    }\n\n    public String getVpcId() {\n      return vpcId;\n    }\n\n    public String getAvailabilityZone() {\n      return availabilityZone;\n    }\n\n    public String getCidrBlock() {\n      return cidrBlock;\n    }\n\n    public String getState() {\n      return state;\n    }\n\n    /**\n     * Returns a display string for UI dropdown\n     * Format: \"subnet-name (subnet-id) - AZ: az-name - CIDR: cidr-block\"\n     */\n    public String getDisplayString() {\n      StringBuilder display = new StringBuilder();\n      display.append( getSubnetName() );\n      if ( subnetName != null && !subnetName.isEmpty() && !subnetName.equals( subnetId ) ) {\n        display.append( \" (\" ).append( subnetId ).append( \")\" );\n      }\n      display.append( \" - AZ: \" ).append( availabilityZone );\n      display.append( \" - CIDR: \" ).append( cidrBlock );\n      return display.toString();\n    }\n\n    @Override\n    public String toString() {\n      return getDisplayString();\n    }\n  }\n}\n"
  },
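`SubnetInfo.getDisplayString()` falls back to the subnet ID when no Name tag was set, and appends the ID in parentheses only when a distinct name exists. A compact standalone sketch of just that formatting logic, with the fields the demo does not need omitted:

```java
// Compact sketch of SubnetInfo.getDisplayString(): fall back to the subnet
// ID when the name is missing, and avoid repeating the ID when it already
// serves as the name.
public class SubnetDisplayDemo {

  static String display( String subnetId, String subnetName, String az, String cidr ) {
    String name = ( subnetName != null && !subnetName.isEmpty() ) ? subnetName : subnetId;
    StringBuilder sb = new StringBuilder( name );
    if ( subnetName != null && !subnetName.isEmpty() && !subnetName.equals( subnetId ) ) {
      sb.append( " (" ).append( subnetId ).append( ")" );
    }
    sb.append( " - AZ: " ).append( az ).append( " - CIDR: " ).append( cidr );
    return sb.toString();
  }

  public static void main( String[] args ) {
    // Named subnet: name first, ID in parentheses.
    System.out.println( display( "subnet-123", "app-private", "us-east-1a", "10.0.1.0/24" ) );
    // Unnamed subnet: the ID alone stands in for the name.
    System.out.println( display( "subnet-123", null, "us-east-1a", "10.0.1.0/24" ) );
  }
}
```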
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/api/EmrClient.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.api;\n\nimport org.pentaho.amazon.AbstractAmazonJobEntry;\n\nimport java.net.URISyntaxException;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic interface EmrClient {\n\n  void runJobFlow( String stagingS3FileUrl, String stagingS3BucketUrl, String stepType, String mainClass,\n                   String bootstrapActions,\n                   AbstractAmazonJobEntry jobEntry\n  );\n\n  String getHadoopJobFlowId();\n\n  String getStepId();\n\n  void addStepToExistingJobFlow( String stagingS3FileUrl, String stagingS3BucketUrl, String stepType, String mainClass,\n                                 AbstractAmazonJobEntry jobEntry );\n\n  String getCurrentClusterState();\n\n  String getCurrentStepState();\n\n  boolean isClusterRunning();\n\n  boolean isStepRunning();\n\n  boolean isRunning();\n\n  boolean isClusterTerminated();\n\n  boolean isStepFailed();\n\n  boolean isStepNotSuccess();\n\n  String getJobFlowLogUri() throws URISyntaxException;\n\n  boolean stopSteps();\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/api/PricingClient.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.api;\n\nimport com.amazonaws.services.pricing.model.AWSPricingException;\n\nimport java.io.IOException;\nimport java.util.List;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic interface PricingClient {\n\n  List<String> populateInstanceTypesForSelectedRegion() throws AWSPricingException, IOException;\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/api/S3Client.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.api;\n\nimport java.io.File;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic interface S3Client {\n\n  /**\n   * Copied from\n   * @class S3FileProvider.java\n   * @module s3-vfs\n   */\n  String SCHEME = \"s3\";\n\n  void createBucketIfNotExists( String stagingBucketName );\n\n  void deleteObjectFromBucket( String stagingBucketName, String key );\n\n  void putObjectInBucket( String stagingBucketName, String key, File tmpFile );\n\n  String readStepLogsFromS3( String stagingBucketName, String hadoopJobFlowId, String stepId );\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/impl/AimClientFactory.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.amazon.client.impl;\n\nimport com.amazonaws.auth.AWSStaticCredentialsProvider;\nimport com.amazonaws.services.identitymanagement.AmazonIdentityManagement;\nimport com.amazonaws.services.identitymanagement.AmazonIdentityManagementClientBuilder;\nimport org.pentaho.amazon.client.AmazonClientCredentials;\nimport org.pentaho.amazon.client.AbstractClientFactory;\nimport org.pentaho.amazon.client.api.AimClient;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic class AimClientFactory extends AbstractClientFactory<AimClient> {\n\n  @Override\n  public AimClient createClient( String accessKey, String secretKey, String sessionToken, String region ) {\n    AmazonClientCredentials clientCredentials = new AmazonClientCredentials( accessKey, secretKey, sessionToken, region );\n\n    AmazonIdentityManagement awsAimClient =\n      AmazonIdentityManagementClientBuilder.standard().withRegion( clientCredentials.getRegion() )\n        .withCredentials( new AWSStaticCredentialsProvider( clientCredentials.getAWSCredentials() ) ).build();\n\n    AimClient aimClient = new AimClientImpl( awsAimClient );\n    return aimClient;\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/impl/AimClientImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.impl;\n\nimport com.amazonaws.services.identitymanagement.AmazonIdentityManagement;\nimport com.amazonaws.services.identitymanagement.model.InstanceProfile;\nimport com.amazonaws.services.identitymanagement.model.Role;\nimport org.pentaho.amazon.client.api.AimClient;\nimport org.pentaho.ui.xul.util.AbstractModelList;\n\nimport java.util.List;\nimport java.util.stream.Collectors;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic class AimClientImpl implements AimClient {\n\n  private AmazonIdentityManagement aim;\n\n  public AimClientImpl( AmazonIdentityManagement aim ) {\n    this.aim = aim;\n  }\n\n  @Override\n  public AbstractModelList<String> getEc2RolesFromAmazonAccount() {\n    List<InstanceProfile> ec2RolesList = aim.listInstanceProfiles().getInstanceProfiles();\n    AbstractModelList<String> ec2List;\n    ec2List = ec2RolesList.stream().map( e -> e.getInstanceProfileName() )\n      .collect( Collectors.toCollection( AbstractModelList<String>::new ) );\n\n    return ec2List;\n  }\n\n  @Override\n  public AbstractModelList<String> getEmrRolesFromAmazonAccount() {\n    List<Role> emrRolesList = aim.listRoles().getRoles();\n    AbstractModelList<String> emrList;\n    emrList =\n      emrRolesList.stream().filter( e -> e.getAssumeRolePolicyDocument().contains( \"elasticmapreduce\" ) )\n        .map( e -> e.getRoleName() ).collect( Collectors.toCollection( AbstractModelList<String>::new ) );\n\n    return emrList;\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/impl/Ec2ClientFactory.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.impl;\n\nimport org.pentaho.amazon.client.AbstractClientFactory;\nimport org.pentaho.amazon.client.api.Ec2Client;\n\n/**\n * Factory for creating EC2 client instances\n */\npublic class Ec2ClientFactory extends AbstractClientFactory<Ec2Client> {\n\n  @Override\n  public Ec2Client createClient( String accessKey, String secretKey, String sessionToken, String region ) {\n    return new Ec2ClientImpl( accessKey, secretKey, sessionToken, region );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/impl/Ec2ClientImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.impl;\n\nimport com.amazonaws.auth.AWSStaticCredentialsProvider;\nimport com.amazonaws.auth.BasicAWSCredentials;\nimport com.amazonaws.auth.BasicSessionCredentials;\nimport com.amazonaws.regions.RegionUtils;\nimport com.amazonaws.services.ec2.AmazonEC2;\nimport com.amazonaws.services.ec2.AmazonEC2ClientBuilder;\nimport com.amazonaws.services.ec2.model.DescribeSubnetsRequest;\nimport com.amazonaws.services.ec2.model.DescribeSubnetsResult;\nimport com.amazonaws.services.ec2.model.Filter;\nimport com.amazonaws.services.ec2.model.Subnet;\nimport com.amazonaws.services.ec2.model.Tag;\nimport java.util.ArrayList;\nimport java.util.List;\nimport org.pentaho.amazon.AmazonRegion;\nimport org.pentaho.amazon.client.api.Ec2Client;\n\n/**\n * Implementation of EC2 Client for interacting with AWS EC2 services\n */\npublic class Ec2ClientImpl implements Ec2Client {\n\n  private AmazonEC2 ec2Client;\n\n  public Ec2ClientImpl( String awsAccessKeyId, String awsSecretKey, String awsSessionToken, String region ) {\n    AmazonEC2ClientBuilder ec2ClientBuilder = AmazonEC2ClientBuilder.standard();\n\n    // Set up credentials\n    if ( awsSessionToken != null && !awsSessionToken.trim().isEmpty() ) {\n      // Use session credentials if session token is provided\n      BasicSessionCredentials sessionCredentials = \n        new BasicSessionCredentials( awsAccessKeyId, awsSecretKey, awsSessionToken );\n      ec2ClientBuilder.withCredentials( new AWSStaticCredentialsProvider( sessionCredentials ) );\n    } else {\n      // Use basic credentials\n 
     BasicAWSCredentials awsCredentials = new BasicAWSCredentials( awsAccessKeyId, awsSecretKey );\n      ec2ClientBuilder.withCredentials( new AWSStaticCredentialsProvider( awsCredentials ) );\n    }\n\n    // Set region - convert from human-readable to AWS region code\n    if ( region != null && !region.trim().isEmpty() ) {\n      String regionCode = AmazonRegion.extractRegionFromDescription( region );\n      ec2ClientBuilder.withRegion( RegionUtils.getRegion( regionCode ).getName() );\n    }\n\n    ec2Client = ec2ClientBuilder.build();\n  }\n\n  @Override\n  public List<SubnetInfo> getAvailableSubnets() {\n    List<SubnetInfo> subnetInfoList = new ArrayList<>();\n\n    try {\n      // Create request to describe all subnets\n      DescribeSubnetsRequest request = new DescribeSubnetsRequest();\n      \n      // Optional: Filter to only show available subnets\n      Filter availableFilter = new Filter( \"state\", java.util.Arrays.asList( \"available\" ) );\n      request.withFilters( availableFilter );\n\n      DescribeSubnetsResult result = ec2Client.describeSubnets( request );\n\n      // Process each subnet\n      for ( Subnet subnet : result.getSubnets() ) {\n        String subnetId = subnet.getSubnetId();\n        String vpcId = subnet.getVpcId();\n        String availabilityZone = subnet.getAvailabilityZone();\n        String cidrBlock = subnet.getCidrBlock();\n        String state = subnet.getState();\n        \n        // Extract subnet name from tags\n        String subnetName = extractNameFromTags( subnet.getTags() );\n\n        SubnetInfo subnetInfo = new SubnetInfo( \n          subnetId, \n          subnetName, \n          vpcId, \n          availabilityZone, \n          cidrBlock, \n          state \n        );\n        \n        subnetInfoList.add( subnetInfo );\n      }\n    } catch ( Exception e ) {\n      // Log error but don't throw - return empty list instead\n      System.err.println( \"Error retrieving subnets: \" + e.getMessage() );\n      
e.printStackTrace();\n    }\n\n    return subnetInfoList;\n  }\n\n  /**\n   * Extracts the Name tag value from a list of EC2 tags\n   * \n   * @param tags List of EC2 tags\n   * @return The value of the Name tag, or empty string if not found\n   */\n  private String extractNameFromTags( List<Tag> tags ) {\n    if ( tags == null || tags.isEmpty() ) {\n      return \"\";\n    }\n\n    for ( Tag tag : tags ) {\n      if ( \"Name\".equals( tag.getKey() ) ) {\n        return tag.getValue();\n      }\n    }\n\n    return \"\";\n  }\n}\n"
  },
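The private helper `extractNameFromTags` in Ec2ClientImpl is a linear scan for the "Name" key with an empty-string fallback for null, empty, or Name-less tag lists. The same logic over plain key/value pairs (using `Map.Entry` as a stand-in for the SDK's `Tag` class; the demo name is hypothetical):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Sketch of extractNameFromTags(): linear scan for the "Name" key, returning
// "" when the list is null, empty, or has no Name tag.
public class TagNameDemo {

  static String extractName( List<Map.Entry<String, String>> tags ) {
    if ( tags == null || tags.isEmpty() ) {
      return "";
    }
    for ( Map.Entry<String, String> tag : tags ) {
      if ( "Name".equals( tag.getKey() ) ) {
        return tag.getValue();
      }
    }
    return "";
  }

  public static void main( String[] args ) {
    List<Map.Entry<String, String>> tags = Arrays.asList(
      new SimpleEntry<>( "env", "prod" ),
      new SimpleEntry<>( "Name", "app-subnet" ) );
    System.out.println( extractName( tags ) );          // app-subnet
    System.out.println( extractName( null ).isEmpty() ); // true
  }
}
```

Returning "" rather than null is what lets `SubnetInfo.getSubnetName()` fall back to the subnet ID without a null check at the call site.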
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/impl/EmrClientFactory.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.impl;\n\nimport com.amazonaws.auth.AWSStaticCredentialsProvider;\nimport com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;\nimport com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder;\nimport org.pentaho.amazon.client.AmazonClientCredentials;\nimport org.pentaho.amazon.client.AbstractClientFactory;\nimport org.pentaho.amazon.client.api.EmrClient;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic class EmrClientFactory extends AbstractClientFactory<EmrClient> {\n\n  @Override\n  public EmrClient createClient( String accessKey, String secretKey, String sessionToken, String region ) {\n    AmazonClientCredentials clientCredentials = new AmazonClientCredentials( accessKey, secretKey, sessionToken, region );\n\n    AmazonElasticMapReduce awsEmrClient =\n      AmazonElasticMapReduceClientBuilder.standard().withRegion( clientCredentials.getRegion() )\n        .withCredentials( new AWSStaticCredentialsProvider( clientCredentials.getAWSCredentials() ) )\n        .build();\n\n    EmrClient emrClient = new EmrClientImpl( awsEmrClient );\n\n    return emrClient;\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/impl/EmrClientImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\npackage org.pentaho.amazon.client.impl;\n\nimport com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;\nimport com.amazonaws.services.elasticmapreduce.model.ActionOnFailure;\nimport com.amazonaws.services.elasticmapreduce.model.AddJobFlowStepsRequest;\nimport com.amazonaws.services.elasticmapreduce.model.Application;\nimport com.amazonaws.services.elasticmapreduce.model.BootstrapActionConfig;\nimport com.amazonaws.services.elasticmapreduce.model.CancelStepsRequest;\nimport com.amazonaws.services.elasticmapreduce.model.ClusterState;\nimport com.amazonaws.services.elasticmapreduce.model.DescribeClusterRequest;\nimport com.amazonaws.services.elasticmapreduce.model.DescribeClusterResult;\nimport com.amazonaws.services.elasticmapreduce.model.DescribeStepRequest;\nimport com.amazonaws.services.elasticmapreduce.model.DescribeStepResult;\nimport com.amazonaws.services.elasticmapreduce.model.HadoopJarStepConfig;\nimport com.amazonaws.services.elasticmapreduce.model.JobFlowInstancesConfig;\nimport com.amazonaws.services.elasticmapreduce.model.ListStepsRequest;\nimport com.amazonaws.services.elasticmapreduce.model.ListStepsResult;\nimport com.amazonaws.services.elasticmapreduce.model.RunJobFlowRequest;\nimport com.amazonaws.services.elasticmapreduce.model.RunJobFlowResult;\nimport com.amazonaws.services.elasticmapreduce.model.ScriptBootstrapActionConfig;\nimport com.amazonaws.services.elasticmapreduce.model.StepConfig;\nimport com.amazonaws.services.elasticmapreduce.model.StepExecutionState;\nimport 
com.amazonaws.services.elasticmapreduce.model.StepSummary;\nimport com.amazonaws.services.elasticmapreduce.model.TerminateJobFlowsRequest;\nimport com.amazonaws.services.elasticmapreduce.util.StepFactory;\nimport com.google.common.annotations.VisibleForTesting;\nimport org.pentaho.amazon.AbstractAmazonJobEntry;\nimport org.pentaho.amazon.client.api.EmrClient;\nimport org.pentaho.di.core.Const;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.core.util.Utils;\n\nimport java.net.URI;\nimport java.net.URISyntaxException;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collection;\nimport java.util.List;\nimport java.util.StringTokenizer;\nimport java.util.stream.Collectors;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic class EmrClientImpl implements EmrClient {\n  private static final String EMR_EC2_DEFAULT_ROLE = \"EMR_EC2_DefaultRole\";\n  private static final String EMR_EFAULT_ROLE = \"EMR_DefaultRole\";\n\n  private static final String STEP_HIVE = \"hive\";\n  private static final String STEP_EMR = \"emr\";\n\n  private AmazonElasticMapReduce emrClient;\n  private String currentClusterState;\n  private String currentStepState;\n  private RunJobFlowResult runJobFlowResult;\n  private String hadoopJobFlowId;\n  private String stepId;\n  private List<StepSummary> stepSummaries = null;\n  private boolean alive;\n  private boolean runOnNewCluster = true;\n  private boolean requestClusterShutdown = false;\n  private boolean requestStepCancell = false;\n\n  public EmrClientImpl( AmazonElasticMapReduce emrClient ) {\n    this.emrClient = emrClient;\n  }\n\n  @Override\n  public void runJobFlow( String stagingS3FileUrl, String stagingS3BucketUrl, String stepType, String mainClass,\n                          String bootstrapActions,\n                          AbstractAmazonJobEntry jobEntry\n  ) {\n\n    this.setAlive( jobEntry.getAlive() );\n    this.runOnNewCluster = true; // Running on a new cluster\n\n    
RunJobFlowRequest runJobFlowRequest\n      = initEmrCluster( stagingS3FileUrl, stagingS3BucketUrl, stepType, mainClass, bootstrapActions, jobEntry );\n\n    runJobFlowResult = emrClient.runJobFlow( runJobFlowRequest );\n    hadoopJobFlowId = runJobFlowResult.getJobFlowId();\n    stepId = getCurrentlyRunningStepId();\n  }\n\n  @Override\n  public String getHadoopJobFlowId() {\n    return runJobFlowResult.getJobFlowId();\n  }\n\n  @Override\n  public String getStepId() {\n    return stepId;\n  }\n\n  @Override\n  public void addStepToExistingJobFlow( String stagingS3FileUrl, String stagingS3BucketUrl, String stepType,\n                                        String mainClass,\n                                        AbstractAmazonJobEntry jobEntry ) {\n    this.setAlive( jobEntry.getAlive() );\n    this.runOnNewCluster = false; // Running on an existing cluster\n    this.hadoopJobFlowId = jobEntry.getHadoopJobFlowId();\n\n    setStepsFromCluster();\n    List<StepConfig> steps = initSteps( stagingS3FileUrl, stepType, mainClass, jobEntry );\n    AddJobFlowStepsRequest addJobFlowStepsRequest = new AddJobFlowStepsRequest();\n    addJobFlowStepsRequest.setJobFlowId( hadoopJobFlowId );\n    addJobFlowStepsRequest.setSteps( steps );\n    emrClient.addJobFlowSteps( addJobFlowStepsRequest );\n\n    stepId = getSpecifiedRunningStep();\n  }\n\n  /**\n   * Determine whether the cluster is in a running state.\n   *\n   * @return false if the cluster state is WAITING, TERMINATED or\n   * TERMINATED_WITH_ERRORS, and true otherwise.\n   */\n  @Override\n  public boolean isClusterRunning() {\n    if ( ClusterState.WAITING.name().equalsIgnoreCase( currentClusterState ) ) {\n      return false;\n    }\n    if ( ClusterState.TERMINATED.name().equalsIgnoreCase( currentClusterState ) ) {\n      return false;\n    }\n    if ( ClusterState.TERMINATED_WITH_ERRORS.name().equalsIgnoreCase( currentClusterState ) ) {\n      return false;\n    }\n    return true;\n  }\n\n  @Override\n  public boolean isStepRunning() {\n    if ( StepExecutionState.CANCELLED.name().equalsIgnoreCase( currentStepState ) ) {\n      return false;\n    }\n    if ( StepExecutionState.INTERRUPTED.name().equalsIgnoreCase( currentStepState ) ) {\n      return false;\n    }\n    if ( StepExecutionState.COMPLETED.name().equalsIgnoreCase( currentStepState ) ) {\n      return false;\n    }\n    if ( StepExecutionState.FAILED.name().equalsIgnoreCase( currentStepState ) ) {\n      return false;\n    }\n    return true;\n  }\n\n  @Override\n  public boolean isRunning() {\n    currentStepState = getActualStepState();\n    currentClusterState = getActualClusterState();\n    boolean isClusterRunning = isClusterRunning();\n    boolean isStepRunning = isStepRunning();\n\n    // Terminate cluster only if we created it (runOnNewCluster) and it's not set to stay alive.\n    // Existing clusters are never terminated.\n    boolean shouldTerminate = runOnNewCluster && !isAlive();\n\n    if ( shouldTerminate && !requestClusterShutdown && ClusterState.WAITING.name().equalsIgnoreCase( currentClusterState ) ) {\n      if ( !isStepRunning ) {\n        terminateJobFlows();\n        return isClusterRunning();\n      }\n    }\n    return ( isClusterRunning || isStepRunning );\n  }\n\n  @Override\n  public String getCurrentClusterState() {\n    return currentClusterState;\n  }\n\n  @Override\n  public String getCurrentStepState() {\n    return currentStepState;\n  }\n\n  @Override\n  public boolean isClusterTerminated() {\n    return ClusterState.TERMINATED.name().equalsIgnoreCase( currentClusterState ) || ClusterState.TERMINATED_WITH_ERRORS\n      .name().equalsIgnoreCase( currentClusterState );\n  }\n\n  @Override\n  public boolean isStepFailed() {\n    return StepExecutionState.FAILED.name().equalsIgnoreCase( currentStepState );\n  }\n\n  @Override\n  public boolean isStepNotSuccess() {\n    currentStepState = getActualStepState();\n    if ( StepExecutionState.CANCELLED.name().equalsIgnoreCase( 
currentStepState ) ) {\n      return true;\n    }\n    if ( StepExecutionState.INTERRUPTED.name().equalsIgnoreCase( currentStepState ) ) {\n      return true;\n    }\n    if ( StepExecutionState.FAILED.name().equalsIgnoreCase( currentStepState ) ) {\n      return true;\n    }\n    return false;\n  }\n\n  private JobFlowInstancesConfig initEC2Instance( Integer numInsts, String masterInstanceType,\n                                                  String slaveInstanceType, String ec2SubnetId ) {\n    JobFlowInstancesConfig instances = new JobFlowInstancesConfig();\n    instances.setInstanceCount( numInsts );\n    instances.setMasterInstanceType( masterInstanceType );\n    instances.setSlaveInstanceType( slaveInstanceType );\n    instances.setKeepJobFlowAliveWhenNoSteps( isAlive() );\n\n    // Set EC2 subnet if provided (required for VPC-only instance types like c5.*)\n    if ( ec2SubnetId != null && !ec2SubnetId.trim().isEmpty() ) {\n      instances.setEc2SubnetId( ec2SubnetId.trim() );\n    }\n\n    return instances;\n  }\n\n  @VisibleForTesting\n  RunJobFlowRequest initEmrCluster( String stagingS3FileUrl, String stagingS3BucketUrl, String stepType,\n                                    String mainClass,\n                                    String bootstrapActions,\n                                    AbstractAmazonJobEntry jobEntry\n  ) {\n\n    RunJobFlowRequest runJobFlowRequest = new RunJobFlowRequest();\n\n    runJobFlowRequest.setName( jobEntry.getHadoopJobName() );\n    runJobFlowRequest.setReleaseLabel( jobEntry.getEmrRelease() );\n    runJobFlowRequest.setLogUri( stagingS3BucketUrl );\n\n    JobFlowInstancesConfig instances\n      = initEC2Instance( Integer.parseInt( jobEntry.getNumInstances() ), jobEntry.getMasterInstanceType(),\n      jobEntry.getSlaveInstanceType(), jobEntry.getEc2SubnetId() );\n    runJobFlowRequest.setInstances( instances );\n\n    List<StepConfig> steps = initSteps( stagingS3FileUrl, stepType, mainClass, jobEntry );\n    if ( 
steps.size() > 0 ) {\n      runJobFlowRequest.setSteps( steps );\n    }\n\n    if ( stepType.equals( STEP_HIVE ) ) {\n      List<Application> applications = initApplications();\n      if ( applications.size() > 0 ) {\n        runJobFlowRequest.setApplications( applications );\n      }\n\n      List<BootstrapActionConfig> stepBootstrapActions = initBootstrapActions( bootstrapActions );\n      if ( stepBootstrapActions != null && stepBootstrapActions.size() > 0 ) {\n        runJobFlowRequest.setBootstrapActions( stepBootstrapActions );\n      }\n    }\n\n    String ec2Role = jobEntry.getEc2Role();\n    if ( ec2Role == null || ec2Role.trim().isEmpty() ) {\n      runJobFlowRequest.setJobFlowRole( EMR_EC2_DEFAULT_ROLE );\n    } else {\n      runJobFlowRequest.setJobFlowRole( ec2Role );\n    }\n\n    String emrRole = jobEntry.getEmrRole();\n    if ( emrRole == null || emrRole.trim().isEmpty() ) {\n      runJobFlowRequest.setServiceRole( EMR_EFAULT_ROLE );\n    } else {\n      runJobFlowRequest.setServiceRole( emrRole );\n    }\n\n    runJobFlowRequest.setVisibleToAllUsers( true );\n\n    return runJobFlowRequest;\n  }\n\n  private StepConfig configureHiveStep( String stagingS3qUrl, String cmdLineArgs ) {\n\n    String[] cmdLineArgsArr;\n    if ( cmdLineArgs == null ) {\n      cmdLineArgsArr = new String[] {\"\"};\n    } else {\n      List<String> cmdArgs = Arrays.asList( cmdLineArgs.split( \"\\\\s+\" ) );\n      List<String> updatedCmdArgs = cmdArgs.stream().map( e -> replaceDoubleS3( e ) ).collect( Collectors.toList() );\n      cmdLineArgsArr = updatedCmdArgs.toArray( new String[updatedCmdArgs.size()] );\n    }\n\n    StepConfig hiveStepConfig\n      = new StepConfig( \"Hive\",\n      new StepFactory().newRunHiveScriptStep( stagingS3qUrl, cmdLineArgsArr ) );\n    \n    // Set ActionOnFailure based on cluster type:\n    // - Existing cluster: CONTINUE (never terminate)\n    // - New cluster + alive: CANCEL_AND_WAIT (keep cluster running)\n    // - New cluster + not 
alive: TERMINATE_JOB_FLOW (terminate on failure)\n    if ( !runOnNewCluster ) {\n      // For existing clusters, never terminate - just let the step fail\n      hiveStepConfig.withActionOnFailure( ActionOnFailure.CONTINUE );\n    } else if ( isAlive() ) {\n      hiveStepConfig.withActionOnFailure( ActionOnFailure.CANCEL_AND_WAIT );\n    } else {\n      hiveStepConfig.withActionOnFailure( ActionOnFailure.TERMINATE_JOB_FLOW );\n    }\n    return hiveStepConfig;\n  }\n\n  private StepConfig initHiveStep( String stagingS3qUrl, String cmdLineArgs ) {\n    StepConfig hiveStepConfig = configureHiveStep( stagingS3qUrl, cmdLineArgs );\n    return hiveStepConfig;\n  }\n\n  private static HadoopJarStepConfig configureHadoopStep( String stagingS3Jar, String mainClass,\n                                                          List<String> jarStepArgs ) {\n    HadoopJarStepConfig hadoopJarStepConfig = new HadoopJarStepConfig();\n    hadoopJarStepConfig.setJar( stagingS3Jar );\n    hadoopJarStepConfig.setMainClass( mainClass );\n    hadoopJarStepConfig.setArgs( jarStepArgs );\n\n    return hadoopJarStepConfig;\n  }\n\n  private StepConfig initHadoopStep( String jarUrl, String mainClass, List<String> jarStepArgs ) {\n    StepConfig stepConfig = new StepConfig();\n    stepConfig.setName( \"custom jar: \" + jarUrl );\n\n    stepConfig.setHadoopJarStep( configureHadoopStep( jarUrl, mainClass, jarStepArgs ) );\n    \n    // Set ActionOnFailure based on cluster type:\n    // - Existing cluster: CONTINUE (never terminate)\n    // - New cluster + alive: CANCEL_AND_WAIT (keep cluster running)\n    // - New cluster + not alive: TERMINATE_JOB_FLOW (terminate on failure)\n    if ( !runOnNewCluster ) {\n      // For existing clusters, never terminate - just let the step fail\n      stepConfig.withActionOnFailure( ActionOnFailure.CONTINUE );\n    } else if ( this.isAlive() ) {\n      stepConfig.withActionOnFailure( ActionOnFailure.CANCEL_AND_WAIT );\n    } else {\n      
stepConfig.withActionOnFailure( ActionOnFailure.TERMINATE_JOB_FLOW );\n    }\n    return stepConfig;\n  }\n\n  @VisibleForTesting\n  public static String removeLineBreaks( String multiLineFieldValue ) {\n    if ( StringUtil.isEmpty( multiLineFieldValue ) ) {\n      return multiLineFieldValue;\n    }\n    return multiLineFieldValue.replaceAll( \"\\\\s+\", \" \" ).trim();\n  }\n\n  private List<StepConfig> initSteps( String stagingS3FileUrl, String stepType,\n                                      String mainClass,\n                                      AbstractAmazonJobEntry jobEntry ) {\n    List<StepConfig> steps = new ArrayList<>();\n    StepConfig config = null;\n\n    String cmdLineArgs = removeLineBreaks( jobEntry.getCmdLineArgs() );\n\n    if ( stepType.equals( STEP_HIVE ) ) {\n      config = initHiveStep( stagingS3FileUrl, cmdLineArgs );\n    }\n\n    if ( stepType.equals( STEP_EMR ) ) {\n      List<String> jarStepArgs = parseJarStepArgs( cmdLineArgs );\n      config = initHadoopStep( stagingS3FileUrl, mainClass, jarStepArgs );\n    }\n\n    steps.add( config );\n    return steps;\n  }\n\n  private List<String> parseJarStepArgs( String cmdLineArgs ) {\n    List<String> jarStepArgs = new ArrayList<>();\n    if ( !StringUtil.isEmpty( cmdLineArgs ) ) {\n      StringTokenizer st = new StringTokenizer( cmdLineArgs, \" \" );\n      while ( st.hasMoreTokens() ) {\n        String token = st.nextToken();\n        jarStepArgs.add( replaceDoubleS3( token ) );\n      }\n    }\n    return jarStepArgs;\n  }\n\n  private static String replaceDoubleS3( String token ) {\n\n    if ( token.contains( \"s3://s3/\" ) ) {\n      token = token.replace( \"s3://s3/\", \"s3://\" );\n    }\n    return token;\n  }\n\n  private List<Application> initApplications() {\n    List<Application> applications = new ArrayList<>();\n    Application hive = new Application().withName( \"Hive\" );\n    applications.add( hive );\n    return applications;\n  }\n\n  private List<BootstrapActionConfig> 
initBootstrapActions( String bootstrapActions ) {\n    List<BootstrapActionConfig> actionConfigs = configBootstrapActions( removeLineBreaks( bootstrapActions ) );\n    return actionConfigs;\n  }\n\n  /**\n   * Configure the bootstrap actions, which are executed before Hadoop starts.\n   *\n   * @return List<BootstrapActionConfig> configuration data for the bootstrap actions\n   */\n  private static List<BootstrapActionConfig> configBootstrapActions( String bootstrapActions ) {\n\n    List<BootstrapActionConfig> bootstrapActionConfigs = new ArrayList<>();\n\n    if ( !StringUtil.isEmpty( bootstrapActions ) ) {\n\n      StringTokenizer st = new StringTokenizer( bootstrapActions, \" \" );\n      String path = \"\";\n      String name = \"\";\n      List<String> args = null;\n      int actionCount = 0;\n\n      while ( st.hasMoreTokens() ) {\n\n        // Take a key/value pair.\n        String key = st.nextToken();\n        String value = st.nextToken();\n\n        try {\n          // If an argument is enclosed in double quotes, take the string without the quotes.\n          if ( value.startsWith( \"\\\"\" ) ) {\n            while ( !value.endsWith( \"\\\"\" ) ) {\n              if ( st.hasMoreTokens() ) {\n                value += \" \" + st.nextToken();\n              } else {\n                throw new RuntimeException( \"Argument does not end with a double quote: \" + key + \" \" + value );\n              }\n            }\n            value = value.substring( 1, value.length() - 1 );\n          }\n          if ( key.equals( \"--bootstrap-action\" ) ) {\n            if ( !Const.isEmpty( path ) ) {\n              actionCount++;\n              if ( name.equals( \"\" ) ) {\n                name = \"Bootstrap Action \" + actionCount;\n              }\n              // Enter data for one bootstrap action.\n              BootstrapActionConfig bootstrapActionConfig = configureBootstrapAction( path, name, args );\n              bootstrapActionConfigs.add( bootstrapActionConfig );\n   
           name = \"\";\n              args = null;\n            }\n            if ( value.startsWith( \"s3://\" ) ) {\n              value = replaceDoubleS3( value );\n              path = value;\n            } else { // The value for a bootstrap action does not start with \"s3://\".\n              throw new RuntimeException( \"s3:// path expected for bootstrap action: \" + key + \" \" + value );\n            }\n          }\n        } catch ( RuntimeException e ) {\n          e.printStackTrace();\n          return null;\n        }\n        if ( key.equals( \"--bootstrap-name\" ) ) {\n          name = value;\n        }\n        if ( key.equals( \"--args\" ) ) {\n          args = configArgs( value, \",\" );\n        }\n      }\n\n      if ( !Utils.isEmpty( path ) ) {\n        actionCount++;\n        if ( name.equals( \"\" ) ) {\n          name = \"Bootstrap Action \" + actionCount;\n        }\n        // Enter data for the last bootstrap action.\n        BootstrapActionConfig bootstrapActionConfig = configureBootstrapAction( path, name, args );\n        bootstrapActionConfigs.add( bootstrapActionConfig );\n      }\n    }\n\n    return bootstrapActionConfigs;\n  }\n\n  /**\n   * Configure a bootstrap action object, given its name, path and arguments.\n   *\n   * @param path - path for the bootstrap action program in S3\n   * @param name - name of the bootstrap action\n   * @param args - arguments for the bootstrap action\n   * @return configuration data object for one bootstrap action\n   */\n  private static BootstrapActionConfig configureBootstrapAction( String path, String name, List<String> args ) {\n\n    ScriptBootstrapActionConfig scriptBootstrapActionConfig = new ScriptBootstrapActionConfig();\n    BootstrapActionConfig bootstrapActionConfig = new BootstrapActionConfig();\n    scriptBootstrapActionConfig.setPath( path );\n    scriptBootstrapActionConfig.setArgs( args );\n    bootstrapActionConfig.setName( name );\n    
bootstrapActionConfig.setScriptBootstrapAction( scriptBootstrapActionConfig );\n\n    return bootstrapActionConfig;\n  }\n\n  /**\n   * Given a string of unparsed arguments and a separator, split the string\n   * and return a list of arguments.\n   *\n   * @param args      - unparsed arguments\n   * @param separator - separates one argument from another.\n   * @return A list of arguments\n   */\n  private static List<String> configArgs( String args, String separator ) {\n\n    List<String> argList = new ArrayList<>();\n    if ( !StringUtil.isEmpty( args ) ) {\n      StringTokenizer st = new StringTokenizer( args, separator );\n      while ( st.hasMoreTokens() ) {\n        String token = st.nextToken();\n        argList.add( token );\n      }\n    }\n    return argList;\n  }\n\n  /**\n   * Configure a bootstrap action object, given its name, path and arguments.\n   *\n   * @param path - path for the bootstrap action program in S3\n   * @param name - name of the bootstrap action\n   * @param args - arguments for the bootstrap action\n   * @return configuration data object for one bootstrap action\n   */\n  private static BootstrapActionConfig createBootstrapAction( String path, String name, List<String> args ) {\n\n    ScriptBootstrapActionConfig scriptBootstrapActionConfig = new ScriptBootstrapActionConfig();\n    BootstrapActionConfig bootstrapActionConfig = new BootstrapActionConfig();\n    if ( !path.isEmpty() ) {\n      scriptBootstrapActionConfig.setPath( path );\n      scriptBootstrapActionConfig.setArgs( args );\n    }\n    bootstrapActionConfig.setName( name );\n    bootstrapActionConfig.setScriptBootstrapAction( scriptBootstrapActionConfig );\n\n    return bootstrapActionConfig;\n  }\n\n  @VisibleForTesting\n  protected List<StepSummary> getSteps() {\n\n    ListStepsRequest listStepsRequest = new ListStepsRequest();\n    listStepsRequest.setClusterId( hadoopJobFlowId );\n    ListStepsResult listStepsResult = emrClient.listSteps( listStepsRequest );\n 
   List<StepSummary> stepSummaries = listStepsResult.getSteps();\n\n    if ( stepSummaries.isEmpty() ) {\n      return null;\n    }\n    return stepSummaries;\n  }\n\n  @VisibleForTesting\n  protected void setStepsFromCluster() {\n    stepSummaries = getSteps();\n  }\n\n  private String getCurrentlyRunningStepId() {\n    return getSteps().get( 0 ).getId();\n  }\n\n  private String getSpecifiedRunningStep() {\n\n    List<StepSummary> currentSteps = getSteps();\n\n    if ( stepSummaries != null && !stepSummaries.isEmpty() ) {\n      currentSteps.removeAll( stepSummaries );\n    }\n\n    if ( currentSteps.isEmpty() ) {\n      return null;\n    }\n    return currentSteps.get( 0 ).getId();\n  }\n\n  @VisibleForTesting\n  protected void terminateJobFlows() {\n    if ( !requestClusterShutdown ) {\n      TerminateJobFlowsRequest terminateJobFlowsRequest = new TerminateJobFlowsRequest();\n      terminateJobFlowsRequest.withJobFlowIds( hadoopJobFlowId );\n      emrClient.terminateJobFlows( terminateJobFlowsRequest );\n      currentClusterState = getActualClusterState();\n      requestClusterShutdown = true;\n    }\n  }\n\n  @VisibleForTesting\n  protected void cancelStepExecution() {\n    if ( !requestStepCancell ) {\n      CancelStepsRequest cancelStepsRequest = new CancelStepsRequest();\n      cancelStepsRequest.setClusterId( hadoopJobFlowId );\n      Collection<String> stepIds = new ArrayList<>();\n      stepIds.add( stepId );\n      cancelStepsRequest.setStepIds( stepIds );\n      emrClient.cancelSteps( cancelStepsRequest );\n      requestStepCancell = true;\n    }\n  }\n\n  @Override\n  public boolean stopSteps() {\n    if ( isAlive() ) {\n      cancelStepExecution();\n      return true;\n    } else if ( runOnNewCluster ) {\n      // Terminate cluster only if we created it (runOnNewCluster)\n      terminateJobFlows();\n    } else {\n      // For existing clusters, just cancel the step execution - never terminate\n      cancelStepExecution();\n    }\n    return false;\n  
}\n\n  private String getActualClusterState() {\n    String clusterState = null;\n    DescribeClusterRequest describeClusterRequest = new DescribeClusterRequest();\n    describeClusterRequest.setClusterId( hadoopJobFlowId );\n    DescribeClusterResult describeClusterResult = emrClient.describeCluster( describeClusterRequest );\n\n    if ( describeClusterResult != null ) {\n      clusterState = describeClusterResult.getCluster().getStatus().getState();\n    }\n    return clusterState;\n  }\n\n  private String getActualStepState() {\n    String stepState = null;\n    DescribeStepRequest describeStepRequest = new DescribeStepRequest();\n    describeStepRequest.setClusterId( hadoopJobFlowId );\n    describeStepRequest.setStepId( stepId );\n    DescribeStepResult describeStepResult = emrClient.describeStep( describeStepRequest );\n\n    if ( describeStepResult != null ) {\n      stepState = describeStepResult.getStep().getStatus().getState();\n    }\n    return stepState;\n  }\n\n  @Override\n  public String getJobFlowLogUri() throws URISyntaxException {\n    DescribeClusterRequest clusterRequest = new DescribeClusterRequest();\n    clusterRequest.setClusterId( hadoopJobFlowId );\n\n    DescribeClusterResult clusterResult = emrClient.describeCluster( clusterRequest );\n    String clusterLogUri = clusterResult.getCluster().getLogUri();\n    String clusterLogBucket = new URI( clusterLogUri ).getHost();\n    return clusterLogBucket;\n  }\n\n  @VisibleForTesting\n  protected boolean isAlive() {\n    return alive;\n  }\n\n  @VisibleForTesting\n  protected void setAlive( boolean alive ) {\n    this.alive = alive;\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/impl/PricingClientFactory.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.impl;\n\nimport com.amazonaws.auth.AWSStaticCredentialsProvider;\nimport com.amazonaws.services.pricing.AWSPricing;\nimport com.amazonaws.services.pricing.AWSPricingAsyncClientBuilder;\nimport com.amazonaws.services.s3.model.Region;\nimport org.pentaho.amazon.client.AmazonClientCredentials;\nimport org.pentaho.amazon.client.AbstractClientFactory;\nimport org.pentaho.amazon.client.api.PricingClient;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic class PricingClientFactory extends AbstractClientFactory<PricingClient> {\n\n  @Override\n  public PricingClient createClient( String accessKey, String secretKey, String sessionToken, String region ) {\n    AmazonClientCredentials clientCredentials = new AmazonClientCredentials( accessKey, secretKey, sessionToken, region );\n\n    AWSPricing awsPricingClient =\n      AWSPricingAsyncClientBuilder.standard().withRegion( Region.US_Standard.toAWSRegion().getName() )\n        .withCredentials( new AWSStaticCredentialsProvider( clientCredentials.getAWSCredentials() ) ).build();\n\n    PricingClient pricingClient = new PricingClientImpl( awsPricingClient, region );\n\n    return pricingClient;\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/impl/PricingClientImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.impl;\n\nimport com.amazonaws.services.pricing.AWSPricing;\nimport com.amazonaws.services.pricing.model.AWSPricingException;\nimport com.amazonaws.services.pricing.model.Filter;\nimport com.amazonaws.services.pricing.model.GetProductsRequest;\nimport com.amazonaws.services.pricing.model.GetProductsResult;\nimport com.fasterxml.jackson.databind.ObjectMapper;\nimport com.google.common.annotations.VisibleForTesting;\nimport org.pentaho.amazon.InstanceType;\nimport org.pentaho.amazon.client.api.PricingClient;\n\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.Collection;\nimport java.util.List;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic class PricingClientImpl implements PricingClient {\n\n  private AWSPricing pricing;\n  private String humanReadableRegion;\n  private Collection<Filter> filters = new ArrayList<>();\n  private List<String> instanceTypes;\n\n  private static final String FIELD_TYPE = \"TERM_MATCH\";\n\n  public PricingClientImpl( AWSPricing pricing, String humanReadableRegion ) {\n    this.pricing = pricing;\n    this.humanReadableRegion = humanReadableRegion;\n  }\n\n  private static Filter createProductFilter( String fieldType, String fieldName, String fieldValue ) {\n    Filter fieldFilter = new Filter();\n    fieldFilter.setType( fieldType );\n    fieldFilter.setField( fieldName );\n    fieldFilter.setValue( fieldValue );\n\n    return fieldFilter;\n  }\n\n  private void addFiltersToProductRequest() {\n    filters.add( createProductFilter( FIELD_TYPE, 
\"softwareType\", \"EMR\" ) );\n    filters.add( createProductFilter( FIELD_TYPE, \"location\", humanReadableRegion ) );\n  }\n\n  private GetProductsRequest initProductsRequest() {\n    GetProductsRequest productsRequest = new GetProductsRequest();\n    addFiltersToProductRequest();\n    productsRequest.setServiceCode( \"ElasticMapReduce\" );\n    productsRequest.setFilters( filters );\n\n    return productsRequest;\n  }\n\n  @VisibleForTesting\n  protected List<String> getProductDescriptions() {\n    GetProductsRequest productsRequest = initProductsRequest();\n    GetProductsResult productsResult = pricing.getProducts( productsRequest );\n    List<String> productDescriptions = productsResult.getPriceList();\n\n    return productDescriptions;\n  }\n\n  @Override\n  public List<String> populateInstanceTypesForSelectedRegion() throws AWSPricingException, IOException {\n\n    List<String> productDescriptions = getProductDescriptions();\n\n    if ( productDescriptions == null || productDescriptions.size() == 0 ) {\n      return instanceTypes;\n    }\n\n    List<InstanceType> tmpInstanceTypes = new ArrayList<>();\n\n    ObjectMapper mapper = new ObjectMapper();\n    String instanceTypeName;\n    String instanceFamily;\n\n    for ( String description : productDescriptions ) {\n      instanceTypeName =\n        mapper.readTree( description ).path( \"product\" ).get( \"attributes\" ).get( \"instanceType\" ).asText();\n      instanceFamily =\n        mapper.readTree( description ).path( \"product\" ).get( \"attributes\" ).get( \"instanceFamily\" ).asText();\n      tmpInstanceTypes.add( new InstanceType( instanceTypeName, instanceFamily ) );\n    }\n\n    instanceTypes = InstanceType.sortInstanceTypes( tmpInstanceTypes );\n\n    return instanceTypes;\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/impl/S3ClientFactory.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.impl;\n\nimport com.amazonaws.auth.AWSStaticCredentialsProvider;\nimport com.amazonaws.services.s3.AmazonS3;\nimport com.amazonaws.services.s3.AmazonS3ClientBuilder;\nimport org.pentaho.amazon.client.AmazonClientCredentials;\nimport org.pentaho.amazon.client.AbstractClientFactory;\nimport org.pentaho.amazon.client.api.S3Client;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic class S3ClientFactory extends AbstractClientFactory<S3Client> {\n\n  @Override\n  public S3Client createClient( String accessKey, String secretKey, String sessionToken, String region ) {\n    AmazonClientCredentials clientCredentials = new AmazonClientCredentials( accessKey, secretKey, sessionToken, region );\n\n    AmazonS3 awsS3Client =\n      AmazonS3ClientBuilder.standard().withRegion( clientCredentials.getRegion() )\n        .withCredentials( new AWSStaticCredentialsProvider( clientCredentials.getAWSCredentials() ) ).build();\n\n    S3Client s3Client = new S3ClientImpl( awsS3Client );\n\n    return s3Client;\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/client/impl/S3ClientImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.impl;\n\nimport com.amazonaws.services.s3.AmazonS3;\nimport com.amazonaws.services.s3.model.PutObjectRequest;\nimport com.amazonaws.services.s3.model.S3Object;\nimport com.amazonaws.services.s3.model.S3ObjectInputStream;\nimport org.pentaho.amazon.client.api.S3Client;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.util.Scanner;\nimport java.util.zip.GZIPInputStream;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/5/2018.\n */\npublic class S3ClientImpl implements S3Client {\n\n  private AmazonS3 s3Client;\n\n  public S3ClientImpl( AmazonS3 s3Client ) {\n    this.s3Client = s3Client;\n  }\n\n  @Override\n  public void createBucketIfNotExists( String stagingBucketName ) {\n    if ( !s3Client.doesBucketExistV2( stagingBucketName ) ) {\n      s3Client.createBucket( stagingBucketName );\n    }\n  }\n\n  @Override\n  public void deleteObjectFromBucket( String stagingBucketName, String key ) {\n    s3Client.deleteObject( stagingBucketName, key );\n  }\n\n  @Override\n  public void putObjectInBucket( String stagingBucketName, String key, File tmpFile ) {\n    s3Client.putObject( new PutObjectRequest( stagingBucketName, key, tmpFile ) );\n  }\n\n  @Override\n  public String readStepLogsFromS3( String stagingBucketName, String hadoopJobFlowId, String stepId ) {\n\n    String lineSeparator = System.getProperty( \"line.separator\" );\n    String[] logArchives = { \"/controller.gz\", \"/stdout.gz\", \"/syslog.gz\", \"/stderr.gz\" };\n    StringBuilder logContents = new StringBuilder();\n    String logFromS3File = 
\"\";\n    String pathToStepLogs = \"\";\n\n    for ( String gzLogFile : logArchives ) {\n      logFromS3File = readLogFromS3( stagingBucketName, hadoopJobFlowId + \"/steps/\" + stepId + gzLogFile );\n      if ( logFromS3File != null && !logFromS3File.isEmpty() ) {\n        logContents.append( logFromS3File + lineSeparator );\n      }\n    }\n    if ( logContents.length() == 0 ) {\n      pathToStepLogs = \"s3://\" + stagingBucketName + \"/\" + hadoopJobFlowId + \"/steps/\" + stepId;\n      logContents.append( \"Step \" + stepId + \" failed. See logs here: \" + pathToStepLogs + lineSeparator );\n    }\n    return logContents.toString();\n  }\n\n  protected String readLogFromS3( String stagingBucketName, String key ) {\n\n    Scanner logScanner = null;\n    S3ObjectInputStream s3ObjectInputStream = null;\n    GZIPInputStream gzipInputStream = null;\n    String lineSeparator = System.getProperty( \"line.separator\" );\n    StringBuilder logContents = new StringBuilder();\n    S3Object outObject;\n\n    try {\n      if ( s3Client.doesObjectExist( stagingBucketName, key ) ) {\n\n        outObject = s3Client.getObject( stagingBucketName, key );\n        s3ObjectInputStream = outObject.getObjectContent();\n        gzipInputStream = new GZIPInputStream( s3ObjectInputStream );\n\n        logScanner = new Scanner( gzipInputStream );\n        while ( logScanner.hasNextLine() ) {\n          logContents.append( logScanner.nextLine() + lineSeparator );\n        }\n      }\n    } catch ( IOException e ) {\n      e.printStackTrace();\n    } finally {\n      try {\n        if ( logScanner != null ) {\n          logScanner.close();\n        }\n        if ( s3ObjectInputStream != null ) {\n          s3ObjectInputStream.close();\n        }\n        if ( gzipInputStream != null ) {\n          gzipInputStream.close();\n        }\n      } catch ( IOException e ) {\n        //do nothing\n      }\n    }\n    return logContents.toString();\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/emr/job/AmazonElasticMapReduceJobExecutor.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.emr.job;\n\nimport java.io.File;\nimport java.io.FileInputStream;\nimport java.io.FileOutputStream;\nimport java.io.IOException;\nimport java.lang.reflect.Method;\nimport java.lang.reflect.Modifier;\nimport java.net.MalformedURLException;\nimport java.net.URISyntaxException;\nimport java.net.URL;\nimport java.net.URLClassLoader;\nimport java.util.ArrayList;\nimport java.util.Iterator;\nimport java.util.List;\nimport java.util.jar.JarEntry;\nimport java.util.jar.JarFile;\nimport java.util.jar.JarInputStream;\nimport java.util.jar.Manifest;\n\nimport org.apache.commons.io.IOUtils;\nimport org.apache.commons.vfs2.FileObject;\nimport org.pentaho.amazon.AbstractAmazonJobExecutor;\nimport org.pentaho.di.cluster.SlaveServer;\nimport org.pentaho.di.core.annotations.JobEntry;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.hadoop.shim.api.core.ShimIdentifierInterface;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.pentaho.platform.engine.core.system.PentahoSystem;\nimport org.w3c.dom.Node;\n\n@JobEntry( id = \"EMRJobExecutorPlugin\", image = \"EMR.svg\", name = \"EMRJobExecutorPlugin.Name\",\n  description = 
\"EMRJobExecutorPlugin.Description\",\n  categoryDescription = \"i18n:org.pentaho.di.job:JobCategory.Category.BigData\",\n  i18nPackageName = \"org.pentaho.amazon.emr.job\" )\npublic class AmazonElasticMapReduceJobExecutor extends AbstractAmazonJobExecutor {\n\n  private static Class<?> PKG = AmazonElasticMapReduceJobExecutor.class;\n  private static final String STEP_EMR = \"emr\";\n  private static final String SESSION_TOKEN_TAG = \"session_token\";\n  private static final String EMR_SHIM_VENDOR = \"EMR\";\n  private URL localFileUrl;\n\n  protected String jarUrl = \"\";\n\n  public String getJarUrl() {\n    return jarUrl;\n  }\n\n  public void setJarUrl( String jarUrl ) {\n    this.jarUrl = jarUrl;\n  }\n\n  public AmazonElasticMapReduceJobExecutor() {\n  }\n\n  public String getMainClass( URL localJarUrl ) throws Exception {\n    ClassLoader parentClassloader = this.getClass().getClassLoader();\n\n    URLClassLoader newClassLoader = new URLClassLoader( new URL[] { localJarUrl }, parentClassloader );\n\n    final Class<?> mainClass = getMainClassFromManifest( localJarUrl, newClassLoader );\n    if ( mainClass != null ) {\n      return mainClass.getName();\n    } else {\n      List<Class<?>> classesWithMains =\n          getClassesInJarWithMain( localJarUrl.toExternalForm(), parentClassloader );\n      if ( !classesWithMains.isEmpty() ) {\n        return classesWithMains.get( 0 ).getName();\n      }\n    }\n    throw new RuntimeException( \"Could not find main class in: \" + localJarUrl.toExternalForm() );\n  }\n\n  public boolean isAlive() {\n    return alive;\n  }\n\n  @Override\n  public File createStagingFile() throws IOException, KettleException {\n    // pull down .jar file from VFS\n    FileObject jarFile = KettleVFS.getInstance( parentJobMeta.getBowl() ).getFileObject( buildFilename( jarUrl ) );\n    File tmpFile = File.createTempFile( \"customEMR\", \"jar\" );\n    tmpFile.deleteOnExit();\n    FileOutputStream tmpFileOut = new FileOutputStream( tmpFile 
);\n    IOUtils.copy( jarFile.getContent().getInputStream(), tmpFileOut );\n    localFileUrl = tmpFile.toURI().toURL();\n    setS3BucketKey( jarFile );\n    return tmpFile;\n  }\n\n  @Override\n  public String getStepBootstrapActions() {\n    return null;\n  }\n\n  @Override\n  public String getMainClass() throws Exception {\n    return getMainClass( localFileUrl );\n  }\n\n  public String getStepType() {\n    return STEP_EMR;\n  }\n\n  @Override\n  public void loadXML( Node entrynode, List<DatabaseMeta> databases, List<SlaveServer> slaveServers,\n                       Repository rep, IMetaStore metaStore )\n    throws KettleXMLException {\n    super.loadXML( entrynode, databases, slaveServers );\n    hadoopJobName = XMLHandler.getTagValue( entrynode, \"hadoop_job_name\" );\n    hadoopJobFlowId = XMLHandler.getTagValue( entrynode, \"hadoop_job_flow_id\" );\n    jarUrl = XMLHandler.getTagValue( entrynode, \"jar_url\" );\n    accessKey = Encr.decryptPasswordOptionallyEncrypted( XMLHandler.getTagValue( entrynode, \"access_key\" ) );\n    secretKey = Encr.decryptPasswordOptionallyEncrypted( XMLHandler.getTagValue( entrynode, \"secret_key\" ) );\n    sessionToken = Encr.decryptPasswordOptionallyEncrypted( XMLHandler.getTagValue( entrynode, SESSION_TOKEN_TAG ) );\n    stagingDir = XMLHandler.getTagValue( entrynode, \"staging_dir\" );\n    region = XMLHandler.getTagValue( entrynode, \"region\" );\n    ec2Role = XMLHandler.getTagValue( entrynode, \"ec2_role\" );\n    emrRole = XMLHandler.getTagValue( entrynode, \"emr_role\" );\n    masterInstanceType = XMLHandler.getTagValue( entrynode, \"master_instance_type\" );\n    slaveInstanceType = XMLHandler.getTagValue( entrynode, \"slave_instance_type\" );\n    numInstances = XMLHandler.getTagValue( entrynode, \"num_instances\" );\n    emrRelease = XMLHandler.getTagValue( entrynode, \"emr_release\" );\n    cmdLineArgs = XMLHandler.getTagValue( entrynode, \"command_line_args\" );\n    ec2SubnetId = XMLHandler.getTagValue( 
entrynode, \"ec2_subnet_id\" );\n    alive = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( entrynode, \"alive\" ) );\n    runOnNewCluster = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( entrynode, \"runOnNewCluster\" ) );\n    blocking = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( entrynode, \"blocking\" ) );\n    loggingInterval = XMLHandler.getTagValue( entrynode, \"logging_interval\" );\n  }\n\n  @Override\n  public String getXML() {\n    StringBuffer retval = new StringBuffer( 1024 );\n    retval.append( super.getXML() );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"hadoop_job_name\", hadoopJobName ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"hadoop_job_flow_id\", hadoopJobFlowId ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"jar_url\", jarUrl ) );\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"access_key\", Encr.encryptPasswordIfNotUsingVariables( accessKey ) ) );\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"secret_key\", Encr.encryptPasswordIfNotUsingVariables( secretKey ) ) );\n    retval.append( \"      \" )\n            .append( XMLHandler.addTagValue( SESSION_TOKEN_TAG, Encr.encryptPasswordIfNotUsingVariables( sessionToken ) ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"region\", region ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"ec2_role\", ec2Role ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"emr_role\", emrRole ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"master_instance_type\", masterInstanceType ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"slave_instance_type\", slaveInstanceType ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"emr_release\", emrRelease ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"num_instances\", numInstances ) 
);\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"staging_dir\", stagingDir ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"command_line_args\", cmdLineArgs ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"ec2_subnet_id\", ec2SubnetId ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"alive\", alive ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"runOnNewCluster\", runOnNewCluster ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"blocking\", blocking ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"logging_interval\", loggingInterval ) );\n\n    return retval.toString();\n  }\n\n  @Override\n  public void loadRep( Repository rep, IMetaStore metaStore, ObjectId id_jobentry, List<DatabaseMeta> databases,\n                       List<SlaveServer> slaveServers ) throws KettleException {\n    if ( rep != null ) {\n      super.loadRep( rep, metaStore, id_jobentry, databases, slaveServers );\n\n      setHadoopJobName( rep.getJobEntryAttributeString( id_jobentry, \"hadoop_job_name\" ) );\n      setHadoopJobFlowId( rep.getJobEntryAttributeString( id_jobentry, \"hadoop_job_flow_id\" ) );\n      setJarUrl( rep.getJobEntryAttributeString( id_jobentry, \"jar_url\" ) );\n      setAccessKey( Encr\n        .decryptPasswordOptionallyEncrypted( rep.getJobEntryAttributeString( id_jobentry, \"access_key\" ) ) );\n      setSecretKey( Encr\n        .decryptPasswordOptionallyEncrypted( rep.getJobEntryAttributeString( id_jobentry, \"secret_key\" ) ) );\n      setSessionToken( Encr\n              .decryptPasswordOptionallyEncrypted( rep.getJobEntryAttributeString( id_jobentry, SESSION_TOKEN_TAG ) ) );\n      setStagingDir( rep.getJobEntryAttributeString( id_jobentry, \"staging_dir\" ) );\n      setRegion( rep.getJobEntryAttributeString( id_jobentry, \"region\" ) );\n      setEc2Role( rep.getJobEntryAttributeString( id_jobentry, 
\"ec2_role\" ) );\n      setEmrRole( rep.getJobEntryAttributeString( id_jobentry, \"emr_role\" ) );\n      setMasterInstanceType( rep.getJobEntryAttributeString( id_jobentry, \"master_instance_type\" ) );\n      setSlaveInstanceType( rep.getJobEntryAttributeString( id_jobentry, \"slave_instance_type\" ) );\n      setEmrRelease( rep.getJobEntryAttributeString( id_jobentry, \"emr_release\" ) );\n      setNumInstances( rep.getJobEntryAttributeString( id_jobentry, \"num_instances\" ) );\n      setCmdLineArgs( rep.getJobEntryAttributeString( id_jobentry, \"command_line_args\" ) );\n      setEc2SubnetId( rep.getJobEntryAttributeString( id_jobentry, \"ec2_subnet_id\" ) );\n      setAlive( rep.getJobEntryAttributeBoolean( id_jobentry, \"alive\" ) );\n      setRunOnNewCluster( rep.getJobEntryAttributeBoolean( id_jobentry, \"runOnNewCluster\" ) );\n      setBlocking( rep.getJobEntryAttributeBoolean( id_jobentry, \"blocking\" ) );\n      setLoggingInterval( rep.getJobEntryAttributeString( id_jobentry, \"logging_interval\" ) );\n\n    } else {\n      throw new KettleException( BaseMessages.getString( PKG,\n        \"AmazonElasticMapReduceJobExecutor.LoadFromRepository.Error\" ) );\n    }\n  }\n\n  @Override\n  public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_job ) throws KettleException {\n    if ( rep != null ) {\n      super.saveRep( rep, metaStore, id_job );\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"hadoop_job_name\", hadoopJobName );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"hadoop_job_flow_id\", hadoopJobFlowId );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"jar_url\", jarUrl );\n      rep.saveJobEntryAttribute( id_job, getObjectId(),\n        \"secret_key\", Encr.encryptPasswordIfNotUsingVariables( secretKey ) );\n      rep.saveJobEntryAttribute( id_job, getObjectId(),\n        \"access_key\", Encr.encryptPasswordIfNotUsingVariables( accessKey ) );\n      rep.saveJobEntryAttribute( id_job, 
getObjectId(),\n              SESSION_TOKEN_TAG, Encr.encryptPasswordIfNotUsingVariables( sessionToken ) );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"staging_dir\", stagingDir );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"region\", region );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"ec2_role\", ec2Role );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"emr_role\", emrRole );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"master_instance_type\", masterInstanceType );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"slave_instance_type\", slaveInstanceType );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"emr_release\", emrRelease );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"num_instances\", numInstances );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"command_line_args\", cmdLineArgs );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"ec2_subnet_id\", ec2SubnetId );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"alive\", alive );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"runOnNewCluster\", runOnNewCluster );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"blocking\", blocking );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"logging_interval\", loggingInterval );\n\n    } else {\n      throw new KettleException(\n        BaseMessages.getString( PKG, \"AmazonElasticMapReduceJobExecutor.SaveToRepository.Error\" ) );\n    }\n  }\n\n  public String buildFilename( String filename ) {\n    filename = environmentSubstitute( filename );\n    return filename;\n  }\n\n  @Override\n  public boolean evaluates() {\n    return true;\n  }\n\n  @Override\n  public boolean isUnconditional() {\n    return true;\n  }\n\n  @Override\n  public String getDialogClassName() {\n    String className = getClass().getCanonicalName();\n    className = className.replaceFirst( \"\\\\.job\\\\.\", \".ui.\" );\n    
className += \"Dialog\";\n    return className;\n  }\n\n  private Class<?> getMainClassFromManifest( URL jarUrl, ClassLoader parentClassLoader )\n          throws IOException, ClassNotFoundException {\n    JarFile jarFile = getJarFile( jarUrl, parentClassLoader );\n    try {\n      Manifest manifest = jarFile.getManifest();\n      String className = manifest == null ? null : manifest.getMainAttributes().getValue( \"Main-Class\" );\n      return loadClassByName( className, jarUrl, parentClassLoader );\n    } finally {\n      jarFile.close();\n    }\n  }\n\n  private JarFile getJarFile( final URL jarUrl, final ClassLoader parentClassLoader ) throws IOException {\n    if ( jarUrl == null || parentClassLoader == null ) {\n      throw new NullPointerException();\n    }\n    JarFile jarFile;\n    try {\n      jarFile = new JarFile( new File( jarUrl.toURI() ) );\n    } catch ( URISyntaxException ex ) {\n      throw new IOException( \"Error locating jar: \" + jarUrl );\n    } catch ( IOException ex ) {\n      throw new IOException( \"Error opening job jar: \" + jarUrl, ex );\n    }\n    return jarFile;\n  }\n\n  private Class<?> loadClassByName( final String className, final URL jarUrl, final ClassLoader parentClassLoader )\n          throws ClassNotFoundException {\n    if ( className != null ) {\n      URLClassLoader cl = new URLClassLoader( new URL[] { jarUrl }, parentClassLoader );\n      Class<?> clazz = cl.loadClass( className.replaceAll( \"/\", \".\" ) );\n      try {\n        cl.close();\n      } catch ( IOException e ) {\n        // ignore: failure to close the temporary classloader is not fatal\n      }\n      return clazz;\n    } else {\n      return null;\n    }\n  }\n\n  public List<Class<?>> getClassesInJarWithMain( String jarUrl, ClassLoader parentClassloader )\n          throws MalformedURLException {\n    ArrayList<Class<?>> mainClasses = new ArrayList<Class<?>>();\n    List<Class<?>> allClasses = getClassesInJar( jarUrl, parentClassloader );\n    for ( Class<?> clazz : allClasses ) {\n      try {\n        Method mainMethod = 
clazz.getMethod( \"main\", new Class[] { String[].class } );\n        if ( Modifier.isStatic( mainMethod.getModifiers() ) ) {\n          mainClasses.add( clazz );\n        }\n      } catch ( Throwable ignored ) {\n        // Ignore classes without main() methods\n      }\n    }\n    return mainClasses;\n  }\n\n  public List<Class<?>> getClassesInJar( String jarUrl, ClassLoader parentClassloader )\n          throws MalformedURLException {\n    ArrayList<Class<?>> classes = new ArrayList<Class<?>>();\n    URL url = new URL( jarUrl );\n    URL[] urls = new URL[] { url };\n    try ( URLClassLoader loader = new URLClassLoader( urls, parentClassloader );\n          JarInputStream jarFile = new JarInputStream( new FileInputStream( new File( url.toURI() ) ) ) ) {\n      while ( true ) {\n        JarEntry jarEntry = jarFile.getNextJarEntry();\n        if ( jarEntry == null ) {\n          break;\n        }\n        if ( jarEntry.getName().endsWith( \".class\" ) ) {\n          String className = jarEntry.getName().substring( 0, jarEntry.getName().indexOf( \".class\" ) ).replaceAll( \"/\", \"\\\\.\" );\n          classes.add( loader.loadClass( className ) );\n        }\n      }\n    } catch ( IOException | ClassNotFoundException | URISyntaxException e ) {\n      // skip jar entries that cannot be read or classes that cannot be loaded\n    }\n    return classes;\n  }\n\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/emr/ui/AmazonElasticMapReduceJobExecutorController.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.emr.ui;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.amazon.AbstractAmazonJobEntry;\nimport org.pentaho.amazon.AbstractAmazonJobExecutorController;\nimport org.pentaho.amazon.emr.job.AmazonElasticMapReduceJobExecutor;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.core.database.dialog.tags.ExtTextbox;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.binding.Binding;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\nimport java.lang.reflect.InvocationTargetException;\n\npublic class AmazonElasticMapReduceJobExecutorController extends AbstractAmazonJobExecutorController {\n\n  private static final Class<?> PKG = AmazonElasticMapReduceJobExecutor.class;\n\n  // Define string names for the attributes.\n  public static final String JAR_URL = \"jarUrl\";\n\n  public static final String XUL_JAR_URL = \"jar-url\";\n\n  private AmazonElasticMapReduceJobExecutor jobEntry;\n  private String jarUrl = \"\";\n\n  public AmazonElasticMapReduceJobExecutorController( XulDomContainer container, AbstractAmazonJobEntry jobEntry,\n                                                      BindingFactory bindingFactory ) 
{\n\n    super( container, jobEntry, bindingFactory );\n    this.jobEntry = (AmazonElasticMapReduceJobExecutor) jobEntry;\n    initializeEmrSettingsGroupMenuFields();\n  }\n\n  @Override\n  protected void initializeTextFields() {\n    super.initializeTextFields();\n    ExtTextbox tempBox = (ExtTextbox) container.getDocumentRoot().getElementById( XUL_JAR_URL );\n    tempBox.setVariableSpace( getVariableSpace() );\n  }\n\n  protected void createBindings() {\n\n    super.createBindings();\n    bindingFactory.setBindingType( Binding.Type.BI_DIRECTIONAL );\n    bindingFactory.createBinding( XUL_JAR_URL, \"value\", this, JAR_URL );\n    initializeTextFields();\n  }\n\n  @Override\n  public String getDialogElementId() {\n    return \"amazon-emr-job-entry-dialog\";\n  }\n\n  @Override\n  protected void syncModel( Bowl bowl ) {\n    super.syncModel( bowl );\n    ExtTextbox tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( XUL_JAR_URL );\n    this.jarUrl = ( (Text) tempBox.getTextControl() ).getText();\n  }\n\n  public String getJarUrl() {\n    return jarUrl;\n  }\n\n  public void setJarUrl( String jarUrl ) {\n    String previousVal = this.jarUrl;\n    String newVal = jarUrl;\n\n    this.jarUrl = jarUrl;\n    firePropertyChange( AmazonElasticMapReduceJobExecutorController.JAR_URL, previousVal, newVal );\n\n  }\n\n  @Override\n  protected void configureJobEntry() {\n    super.configureJobEntry();\n    jobEntry.setJarUrl( jarUrl );\n  }\n\n  @Override\n  protected String buildValidationErrorMessages() {\n    String validationErrors = super.buildValidationErrorMessages();\n    if ( StringUtil.isEmpty( jarUrl ) ) {\n      validationErrors +=\n        BaseMessages.getString( PKG, \"AmazonElasticMapReduceJobExecutor.JarURL.Error\" ) + \"\\n\";\n    }\n    return validationErrors;\n  }\n\n  /**\n   * Initialize the dialog by loading model data, creating bindings and firing initial sync\n   *\n   * @throws XulException\n   * @throws 
InvocationTargetException\n   */\n  public void init() throws XulException, InvocationTargetException {\n\n    createBindings();\n\n    super.init();\n    if ( jobEntry != null ) {\n      setJarUrl( jobEntry.getJarUrl() );\n    }\n  }\n\n  public void browseJar() throws KettleException, FileSystemException {\n    String[] fileFilters = new String[] { \"*.jar;*.zip\" };\n    String[] fileFilterNames = new String[] { \"Java Archives (jar)\" };\n\n    FileSystemOptions opts = getFileSystemOptions();\n\n    FileObject selectedFile =\n      browse( fileFilters, fileFilterNames, getVariableSpace().environmentSubstitute( jarUrl ), opts,\n      VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE, true );\n\n    if ( selectedFile != null ) {\n      setJarUrl( selectedFile.getName().getURI() );\n    }\n  }\n\n  @Override\n  public AbstractAmazonJobEntry getJobEntry() {\n    return this.jobEntry;\n  }\n\n  @Override\n  public void setJobEntry( AbstractAmazonJobEntry jobEntry ) {\n    this.jobEntry = (AmazonElasticMapReduceJobExecutor) jobEntry;\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/emr/ui/AmazonElasticMapReduceJobExecutorDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.emr.ui;\n\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.amazon.AbstractAmazonJobEntry;\nimport org.pentaho.amazon.AbstractAmazonJobEntryDialog;\nimport org.pentaho.amazon.AbstractAmazonJobExecutorController;\nimport org.pentaho.amazon.emr.job.AmazonElasticMapReduceJobExecutor;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.binding.BindingFactory;\n\nimport java.lang.reflect.InvocationTargetException;\n\n@PluginDialog( id = \"EMRJobExecutorPlugin\", image = \"EMR.svg\", pluginType = PluginDialog.PluginType.JOBENTRY,\n        documentationUrl = \"pdi-job-entries-reference-overview/amazon-emr-job-executor\" )\npublic class AmazonElasticMapReduceJobExecutorDialog extends AbstractAmazonJobEntryDialog {\n\n  public AmazonElasticMapReduceJobExecutorDialog( Shell parent, JobEntryInterface jobEntry, Repository rep,\n                                                  JobMeta jobMeta )\n    throws XulException, InvocationTargetException {\n    super( parent, jobEntry, rep, jobMeta );\n  }\n\n  @Override\n  protected String getXulFile() {\n    return \"org/pentaho/amazon/emr/ui/AmazonElasticMapReduceJobExecutorDialog.xul\";\n  }\n\n  @Override\n  protected Class<?> getMessagesClass() {\n    return AmazonElasticMapReduceJobExecutor.class;\n  }\n\n  @Override\n  protected 
AbstractAmazonJobExecutorController createController( XulDomContainer container, AbstractAmazonJobEntry\n    jobEntry,\n                                                                  BindingFactory bindingFactory ) {\n    return new AmazonElasticMapReduceJobExecutorController( container, jobEntry, bindingFactory );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/hive/job/AmazonHiveJobExecutor.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.hive.job;\n\nimport java.io.File;\nimport java.io.FileOutputStream;\nimport java.io.IOException;\nimport java.util.List;\n\nimport org.apache.commons.io.IOUtils;\nimport org.apache.commons.vfs2.FileObject;\nimport org.pentaho.amazon.AbstractAmazonJobExecutor;\nimport org.pentaho.amazon.client.api.S3Client;\nimport org.pentaho.di.cluster.SlaveServer;\nimport org.pentaho.di.core.annotations.JobEntry;\nimport org.pentaho.di.core.database.DatabaseMeta;\nimport org.pentaho.di.core.encryption.Encr;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.exception.KettleXMLException;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.core.xml.XMLHandler;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.repository.ObjectId;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.metastore.api.IMetaStore;\nimport org.w3c.dom.Node;\n\n/**\n * AmazonHiveJobExecutor: a job entry plug-in class that submits a Hive job to the AWS Elastic MapReduce service from\n * Pentaho Data Integration (Kettle).\n */\n@JobEntry( id = \"HiveJobExecutorPlugin\", image = \"AWS-HIVE.svg\", name = \"HiveJobExecutorPlugin.Name\",\n  description = \"HiveJobExecutorPlugin.Description\",\n  categoryDescription = \"i18n:org.pentaho.di.job:JobCategory.Category.BigData\",\n  i18nPackageName = \"org.pentaho.amazon.hive.job\" )\npublic class AmazonHiveJobExecutor extends AbstractAmazonJobExecutor {\n\n  private static Class<?> PKG = AmazonHiveJobExecutor.class;\n  private static final String STEP_HIVE = 
\"hive\";\n  private static final String SESSION_TOKEN_TAG = \"session_token\";\n\n  protected String qUrl = \"\";\n  protected String bootstrapActions = \"\";\n\n  public AmazonHiveJobExecutor() {\n  }\n\n  public String getQUrl() {\n    return qUrl;\n  }\n\n  public void setQUrl( String qUrl ) {\n    this.qUrl = qUrl;\n  }\n\n  public String getBootstrapActions() {\n    return bootstrapActions;\n  }\n\n  public void setBootstrapActions( String bootstrapActions ) {\n    this.bootstrapActions = bootstrapActions;\n  }\n\n  public boolean isAlive() {\n    return alive;\n  }\n\n  @Override\n  public File createStagingFile() throws IOException, KettleException {\n    // pull down .q file from VFS\n    FileObject qFile = KettleVFS.getInstance( parentJobMeta.getBowl() ).getFileObject( buildFilename( qUrl ) );\n    File tmpFile = File.createTempFile( \"customEMR\", \"q\" );\n    tmpFile.deleteOnExit();\n    // try-with-resources closes the staging file stream once the copy completes\n    try ( FileOutputStream tmpFileOut = new FileOutputStream( tmpFile ) ) {\n      IOUtils.copy( qFile.getContent().getInputStream(), tmpFileOut );\n    }\n    //localFileUrl = tmpFile.toURI().toURL();\n    setS3BucketKey( qFile );\n    return tmpFile;\n  }\n\n  @Override\n  public String getStepBootstrapActions() {\n    return bootstrapActions;\n  }\n\n  @Override\n  public String getMainClass() throws Exception {\n    return null;\n  }\n\n  public String getStepType() {\n    return STEP_HIVE;\n  }\n\n  /**\n   * Load attributes\n   */\n  @Override\n  public void loadXML( Node entrynode, List<DatabaseMeta> databases, List<SlaveServer> slaveServers,\n                       Repository rep, IMetaStore metaStore )\n    throws KettleXMLException {\n    super.loadXML( entrynode, databases, slaveServers );\n    hadoopJobName = XMLHandler.getTagValue( entrynode, \"hadoop_job_name\" );\n    hadoopJobFlowId = XMLHandler.getTagValue( entrynode, \"hadoop_job_flow_id\" );\n    qUrl = XMLHandler.getTagValue( entrynode, \"q_url\" );\n    accessKey =\n      Encr.decryptPasswordOptionallyEncrypted( 
XMLHandler.getTagValue( entrynode, \"access_key\" ) );\n    secretKey =\n      Encr.decryptPasswordOptionallyEncrypted( XMLHandler.getTagValue( entrynode, \"secret_key\" ) );\n    sessionToken =\n            Encr.decryptPasswordOptionallyEncrypted( XMLHandler.getTagValue( entrynode, SESSION_TOKEN_TAG ) );\n    region = XMLHandler.getTagValue( entrynode, \"region\" );\n    ec2Role = XMLHandler.getTagValue( entrynode, \"ec2_role\" );\n    emrRole = XMLHandler.getTagValue( entrynode, \"emr_role\" );\n    masterInstanceType = XMLHandler.getTagValue( entrynode, \"master_instance_type\" );\n    slaveInstanceType = XMLHandler.getTagValue( entrynode, \"slave_instance_type\" );\n    numInstances = XMLHandler.getTagValue( entrynode, \"num_instances\" );\n    emrRelease = XMLHandler.getTagValue( entrynode, \"emr_release\" );\n    //selectedInstanceType = XMLHandler.getTagValue( entrynode, \"selected_instance_type\" );\n    bootstrapActions = XMLHandler.getTagValue( entrynode, \"bootstrap_actions\" );\n    stagingDir = XMLHandler.getTagValue( entrynode, \"staging_dir\" );\n    cmdLineArgs = XMLHandler.getTagValue( entrynode, \"command_line_args\" );\n    ec2SubnetId = XMLHandler.getTagValue( entrynode, \"ec2_subnet_id\" );\n    alive = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( entrynode, \"alive\" ) );\n    runOnNewCluster = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( entrynode, \"runOnNewCluster\" ) );\n    blocking = \"Y\".equalsIgnoreCase( XMLHandler.getTagValue( entrynode, \"blocking\" ) );\n    loggingInterval = XMLHandler.getTagValue( entrynode, \"logging_interval\" );\n  }\n\n  /**\n   * Get attributes\n   */\n  @Override\n  public String getXML() {\n    StringBuilder retval = new StringBuilder( 1024 );\n    retval.append( super.getXML() );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"hadoop_job_name\", hadoopJobName ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"hadoop_job_flow_id\", hadoopJobFlowId ) );\n    
retval.append( \"      \" ).append( XMLHandler.addTagValue( \"q_url\", qUrl ) );\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"access_key\", Encr.encryptPasswordIfNotUsingVariables( accessKey ) ) );\n    retval.append( \"      \" )\n      .append( XMLHandler.addTagValue( \"secret_key\", Encr.encryptPasswordIfNotUsingVariables( secretKey ) ) );\n    retval.append( \"      \" )\n            .append( XMLHandler.addTagValue( SESSION_TOKEN_TAG, Encr.encryptPasswordIfNotUsingVariables( sessionToken ) ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"region\", region ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"ec2_role\", ec2Role ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"emr_role\", emrRole ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"master_instance_type\", masterInstanceType ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"slave_instance_type\", slaveInstanceType ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"emr_release\", emrRelease ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"num_instances\", numInstances ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"bootstrap_actions\", bootstrapActions ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"staging_dir\", stagingDir ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"command_line_args\", cmdLineArgs ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"ec2_subnet_id\", ec2SubnetId ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"alive\", alive ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"runOnNewCluster\", runOnNewCluster ) );\n    retval.append( \"      \" ).append( XMLHandler.addTagValue( \"blocking\", blocking ) );\n    retval.append( \"      \" ).append( 
XMLHandler.addTagValue( \"logging_interval\", loggingInterval ) );\n\n    return retval.toString();\n  }\n\n  /**\n   * Load attributes from a repository\n   */\n  @Override\n  public void loadRep( Repository rep, IMetaStore metaStore, ObjectId id_jobentry, List<DatabaseMeta> databases,\n                       List<SlaveServer> slaveServers ) throws KettleException {\n    if ( rep != null ) {\n      super.loadRep( rep, metaStore, id_jobentry, databases, slaveServers );\n\n      setHadoopJobName( rep.getJobEntryAttributeString( id_jobentry, \"hadoop_job_name\" ) );\n      setHadoopJobFlowId( rep.getJobEntryAttributeString( id_jobentry, \"hadoop_job_flow_id\" ) );\n      setQUrl( rep.getJobEntryAttributeString( id_jobentry, \"q_url\" ) );\n      setAccessKey( Encr\n        .decryptPasswordOptionallyEncrypted(\n          rep.getJobEntryAttributeString( id_jobentry, \"access_key\" ) ) );\n      setSecretKey( Encr\n        .decryptPasswordOptionallyEncrypted(\n          rep.getJobEntryAttributeString( id_jobentry, \"secret_key\" ) ) );\n      setSessionToken( Encr\n              .decryptPasswordOptionallyEncrypted(\n                      rep.getJobEntryAttributeString( id_jobentry, SESSION_TOKEN_TAG ) ) );\n      setRegion( rep.getJobEntryAttributeString( id_jobentry, \"region\" ) );\n      setEc2Role( rep.getJobEntryAttributeString( id_jobentry, \"ec2_role\" ) );\n      setEmrRole( rep.getJobEntryAttributeString( id_jobentry, \"emr_role\" ) );\n      setMasterInstanceType( rep.getJobEntryAttributeString( id_jobentry, \"master_instance_type\" ) );\n      setSlaveInstanceType( rep.getJobEntryAttributeString( id_jobentry, \"slave_instance_type\" ) );\n      setEmrRelease( rep.getJobEntryAttributeString( id_jobentry, \"emr_release\" ) );\n      setNumInstances( rep.getJobEntryAttributeString( id_jobentry, \"num_instances\" ) );\n      setBootstrapActions( rep.getJobEntryAttributeString( id_jobentry, \"bootstrap_actions\" ) );\n      setStagingDir( 
rep.getJobEntryAttributeString( id_jobentry, \"staging_dir\" ) );\n      setCmdLineArgs( rep.getJobEntryAttributeString( id_jobentry, \"command_line_args\" ) );\n      setEc2SubnetId( rep.getJobEntryAttributeString( id_jobentry, \"ec2_subnet_id\" ) );\n      setAlive( rep.getJobEntryAttributeBoolean( id_jobentry, \"alive\" ) );\n      setRunOnNewCluster( rep.getJobEntryAttributeBoolean( id_jobentry, \"runOnNewCluster\" ) );\n      setBlocking( rep.getJobEntryAttributeBoolean( id_jobentry, \"blocking\" ) );\n      setLoggingInterval( rep.getJobEntryAttributeString( id_jobentry, \"logging_interval\" ) );\n\n    } else {\n      throw new KettleException( BaseMessages.getString( PKG, \"AmazonHiveJobExecutor.LoadFromRepository.Error\" ) );\n    }\n  }\n\n  /**\n   * Save attributes to a repository\n   */\n  @Override\n  public void saveRep( Repository rep, IMetaStore metaStore, ObjectId id_job ) throws KettleException {\n    if ( rep != null ) {\n      super.saveRep( rep, metaStore, id_job );\n\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"hadoop_job_name\", hadoopJobName );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"hadoop_job_flow_id\", hadoopJobFlowId );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"q_url\", qUrl );\n      rep.saveJobEntryAttribute( id_job, getObjectId(),\n        \"secret_key\", Encr.encryptPasswordIfNotUsingVariables( secretKey ) );\n      rep.saveJobEntryAttribute( id_job, getObjectId(),\n        \"access_key\", Encr.encryptPasswordIfNotUsingVariables( accessKey ) );\n      rep.saveJobEntryAttribute( id_job, getObjectId(),\n              SESSION_TOKEN_TAG, Encr.encryptPasswordIfNotUsingVariables( sessionToken ) );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"region\", region );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"ec2_role\", ec2Role );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"emr_role\", emrRole );\n      rep.saveJobEntryAttribute( id_job, 
getObjectId(), \"master_instance_type\", masterInstanceType );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"slave_instance_type\", slaveInstanceType );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"emr_release\", emrRelease );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"num_instances\", numInstances );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"bootstrap_actions\", bootstrapActions );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"staging_dir\", stagingDir );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"command_line_args\", cmdLineArgs );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"ec2_subnet_id\", ec2SubnetId );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"alive\", alive );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"blocking\", blocking );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"runOnNewCluster\", runOnNewCluster );\n      rep.saveJobEntryAttribute( id_job, getObjectId(), \"logging_interval\", loggingInterval );\n    } else {\n      throw new KettleException( BaseMessages.getString( PKG, \"AmazonHiveJobExecutor.SaveToRepository.Error\" ) );\n    }\n  }\n\n  /**\n   * Build S3 URL. 
Replace \"+\" and \"/\" with percent-encoded equivalents within the access/secret keys, otherwise VFS will have\n   * trouble parsing the filename.\n   *\n   * @param filename - S3 URL of a file with access/secret keys in it\n   * @return S3 URL with \"+\" and \"/\" replaced by percent-encoded equivalents within the access/secret keys\n   */\n  public String buildFilename( String filename ) {\n    filename = environmentSubstitute( filename );\n    if ( filename.startsWith( S3Client.SCHEME ) ) {\n      String authPart =\n        filename\n          .substring( S3Client.SCHEME.length() + 3, filename.indexOf( \"@s3\" ) ).replaceAll( \"\\\\+\", \"%2B\" )\n          .replaceAll( \"/\", \"%2F\" );\n      filename = S3Client.SCHEME + \"://\" + authPart + \"@s3\" + filename\n        .substring( filename.indexOf( \"@s3\" ) + 3 );\n    }\n    return filename;\n  }\n\n  @Override\n  public boolean evaluates() {\n    return true;\n  }\n\n  @Override\n  public boolean isUnconditional() {\n    return true;\n  }\n\n  /**\n   * Get the class name for the dialog box of this plug-in.\n   */\n  @Override\n  public String getDialogClassName() {\n    String className = getClass().getCanonicalName();\n    className = className.replaceFirst( \"\\\\.job\\\\.\", \".ui.\" );\n    className += \"Dialog\";\n    return className;\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/hive/ui/AmazonHiveJobExecutorController.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.hive.ui;\n\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.eclipse.swt.widgets.Text;\nimport org.pentaho.amazon.AbstractAmazonJobEntry;\nimport org.pentaho.amazon.AbstractAmazonJobExecutorController;\nimport org.pentaho.amazon.hive.job.AmazonHiveJobExecutor;\nimport org.pentaho.di.core.bowl.Bowl;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.util.StringUtil;\nimport org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.di.ui.core.database.dialog.tags.ExtTextbox;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.binding.Binding;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\nimport java.lang.reflect.InvocationTargetException;\n\n/**\n * AmazonHiveJobExecutorController: Handles the attribute dialog box UI for AmazonHiveJobExecutor class.\n */\npublic class AmazonHiveJobExecutorController extends AbstractAmazonJobExecutorController {\n\n  private static final Class<?> PKG = AmazonHiveJobExecutor.class;\n\n  // Define string names for the attributes.\n  public static final String Q_URL = \"qUrl\";\n  public static final String BOOTSTRAP_ACTIONS = \"bootstrapActions\";\n\n  /* XUL Element id's */\n  public static final String XUL_QURL = \"q-url\";\n  public static final String XUL_BOOTSTRAP_ACTIONS = \"bootstrap-actions\";\n\n  // Attributes\n  private String qUrl = 
\"\";\n  private String bootstrapActions = \"\";\n\n  private AmazonHiveJobExecutor jobEntry;\n\n\n  public AmazonHiveJobExecutorController( XulDomContainer container, AbstractAmazonJobEntry jobEntry,\n                                          BindingFactory bindingFactory ) {\n\n    super( container, jobEntry, bindingFactory );\n    this.jobEntry = (AmazonHiveJobExecutor) jobEntry;\n    initializeEmrSettingsGroupMenuFields();\n  }\n\n  @Override\n  protected void initializeTextFields() {\n    super.initializeTextFields();\n    ExtTextbox tempBox = (ExtTextbox) container.getDocumentRoot().getElementById( XUL_QURL );\n    tempBox.setVariableSpace( getVariableSpace() );\n    tempBox = (ExtTextbox) container.getDocumentRoot().getElementById( XUL_BOOTSTRAP_ACTIONS );\n    tempBox.setVariableSpace( getVariableSpace() );\n  }\n\n  protected void createBindings() {\n\n    super.createBindings();\n    bindingFactory.setBindingType( Binding.Type.BI_DIRECTIONAL );\n    bindingFactory.createBinding( XUL_BOOTSTRAP_ACTIONS, \"value\", this, BOOTSTRAP_ACTIONS );\n    bindingFactory.createBinding( XUL_QURL, \"value\", this, Q_URL );\n\n    initializeTextFields();\n  }\n\n  @Override\n  public String getDialogElementId() {\n    return \"amazon-emr-job-entry-dialog\";\n  }\n\n  @Override\n  protected void syncModel( Bowl bowl ) {\n    super.syncModel( bowl );\n    ExtTextbox tempBox =\n      (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( XUL_BOOTSTRAP_ACTIONS ); //$NON-NLS-1$\n    this.bootstrapActions = ( (Text) tempBox.getTextControl() ).getText();\n    tempBox = (ExtTextbox) getXulDomContainer().getDocumentRoot().getElementById( XUL_QURL ); //$NON-NLS-1$\n    this.qUrl = ( (Text) tempBox.getTextControl() ).getText();\n  }\n\n  @Override\n  protected String buildValidationErrorMessages() {\n    String validationErrors = super.buildValidationErrorMessages();\n    if ( StringUtil.isEmpty( qUrl ) ) {\n      validationErrors += BaseMessages.getString( PKG, 
\"AmazonHiveJobExecutor.QURL.Error\" ) + \"\\n\";\n    }\n    return validationErrors;\n  }\n\n  @Override\n  protected void configureJobEntry() {\n    super.configureJobEntry();\n    jobEntry.setQUrl( qUrl );\n    jobEntry.setBootstrapActions( bootstrapActions );\n  }\n\n  /**\n   * Initialize the dialog by loading model data, creating bindings and firing initial sync\n   *\n   * @throws XulException\n   * @throws InvocationTargetException\n   */\n  public void init() throws XulException, InvocationTargetException {\n\n    createBindings();\n\n    super.init();\n    if ( jobEntry != null ) {\n      setQUrl( jobEntry.getQUrl() );\n      setBootstrapActions( jobEntry.getBootstrapActions() );\n    }\n  }\n\n  /*\n   * Open VFS Browser when the \"Browse...\" button next to the \"Hive Script\" text box is pressed in the attribute dialog\n   * box.\n   */\n  public void browseQ() throws KettleException, FileSystemException {\n    String[] fileFilters = new String[] { \"*.*\" }; //$NON-NLS-1$\n    String[] fileFilterNames = new String[] { \"All\" }; //$NON-NLS-1$\n\n    FileSystemOptions opts = getFileSystemOptions();\n    FileObject selectedFile =\n      browse( fileFilters, fileFilterNames, getVariableSpace().environmentSubstitute( qUrl ), opts,\n        VfsFileChooserDialog.VFS_DIALOG_OPEN_FILE, true );\n\n    if ( selectedFile != null ) {\n      setQUrl( selectedFile.getName().getURI() );\n    }\n  }\n\n  public String getQUrl() {\n    return qUrl;\n  }\n\n  public void setQUrl( String qUrl ) {\n    String previousVal = this.qUrl;\n    String newVal = qUrl;\n\n    this.qUrl = qUrl;\n    firePropertyChange( AmazonHiveJobExecutorController.Q_URL, previousVal, newVal );\n  }\n\n  public String getBootstrapActions() {\n    return bootstrapActions;\n  }\n\n  public void setBootstrapActions( String bootstrapActions ) {\n    String previousVal = this.bootstrapActions;\n    String newVal = bootstrapActions;\n\n    this.bootstrapActions = bootstrapActions;\n    
firePropertyChange( BOOTSTRAP_ACTIONS, previousVal, newVal );\n  }\n\n  public void invertAlive() {\n    setAlive( !isAlive() );\n  }\n\n  public void invertBlocking() {\n    setBlocking( !getBlocking() );\n  }\n\n  @Override\n  public AbstractAmazonJobEntry getJobEntry() {\n    return this.jobEntry;\n  }\n\n  @Override\n  public void setJobEntry( AbstractAmazonJobEntry jobEntry ) {\n    this.jobEntry = (AmazonHiveJobExecutor) jobEntry;\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/hive/ui/AmazonHiveJobExecutorDialog.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.hive.ui;\n\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.annotations.PluginDialog;\nimport org.pentaho.amazon.AbstractAmazonJobEntry;\nimport org.pentaho.amazon.AbstractAmazonJobEntryDialog;\nimport org.pentaho.amazon.AbstractAmazonJobExecutorController;\nimport org.pentaho.amazon.hive.job.AmazonHiveJobExecutor;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.job.entry.JobEntryInterface;\nimport org.pentaho.di.repository.Repository;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulException;\nimport org.pentaho.ui.xul.binding.BindingFactory;\n\nimport java.lang.reflect.InvocationTargetException;\n\n/**\n * Created by Aliaksandr_Zhuk on 1/24/2018.\n */\n@PluginDialog( id = \"HiveJobExecutorPlugin\", image = \"AWS-HIVE.svg\", pluginType = PluginDialog.PluginType.JOBENTRY,\n        documentationUrl = \"pdi-job-entries-reference-overview/amazon-hive-job-executor\" )\npublic class AmazonHiveJobExecutorDialog extends AbstractAmazonJobEntryDialog {\n\n\n  public AmazonHiveJobExecutorDialog( Shell parent, JobEntryInterface jobEntry, Repository rep, JobMeta jobMeta )\n    throws XulException, InvocationTargetException {\n    super( parent, jobEntry, rep, jobMeta );\n  }\n\n  @Override\n  protected String getXulFile() {\n    return \"org/pentaho/amazon/hive/ui/AmazonHiveJobExecutorDialog.xul\";\n  }\n\n  @Override\n  protected Class<?> getMessagesClass() {\n    return AmazonHiveJobExecutor.class;\n  }\n\n  @Override\n  protected AbstractAmazonJobExecutorController 
createController( XulDomContainer container, AbstractAmazonJobEntry jobEntry,\n                                                                  BindingFactory bindingFactory ) {\n    return new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/s3/S3VfsFileChooserHelper.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.s3;\n\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.amazon.client.api.S3Client;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\n\n/**\n * created by: rfellows date: 5/24/12\n */\npublic class S3VfsFileChooserHelper extends VfsFileChooserHelper {\n\n  public S3VfsFileChooserHelper( Shell shell, VfsFileChooserDialog fileChooserDialog, VariableSpace variableSpace ) {\n    super( shell, fileChooserDialog, variableSpace );\n    setDefaultScheme( S3Client.SCHEME );\n    setSchemeRestriction( S3Client.SCHEME );\n  }\n\n  public S3VfsFileChooserHelper( Shell shell, VfsFileChooserDialog fileChooserDialog, VariableSpace variableSpace,\n      FileSystemOptions fileSystemOptions ) {\n    super( shell, fileChooserDialog, variableSpace, fileSystemOptions );\n    setDefaultScheme( S3Client.SCHEME );\n    setSchemeRestriction( S3Client.SCHEME );\n  }\n\n  @Override\n  protected boolean returnsUserAuthenticatedFileObjects() {\n    return true;\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/java/org/pentaho/amazon/s3/VfsFileChooserHelper.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.s3;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.commons.vfs2.FileSystemOptions;\nimport org.eclipse.swt.widgets.Shell;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.variables.VariableSpace;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.ui.spoon.Spoon;\nimport org.pentaho.hadoop.shim.api.cluster.NamedCluster;\nimport org.pentaho.vfs.ui.CustomVfsUiPanel;\nimport org.pentaho.vfs.ui.VfsFileChooserDialog;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport java.lang.reflect.InvocationTargetException;\nimport java.lang.reflect.Method;\n\n/**\n * User: RFellows Date: 6/8/12\n */\npublic class VfsFileChooserHelper {\n  private static final Logger logger = LogManager.getLogger( VfsFileChooserHelper.class );\n  private VfsFileChooserDialog fileChooserDialog = null;\n  private Shell shell = null;\n  private VariableSpace variableSpace = null;\n  private FileSystemOptions fileSystemOptions = null;\n  private String defaultScheme = \"file\";\n  private String[] schemeRestrictions = null;\n  private boolean showFileScheme = true;\n\n  public VfsFileChooserHelper(Shell shell, VfsFileChooserDialog fileChooserDialog, VariableSpace variableSpace ) {\n    this( shell, fileChooserDialog, variableSpace, new FileSystemOptions() );\n  }\n\n  public VfsFileChooserHelper(Shell shell, VfsFileChooserDialog fileChooserDialog, 
VariableSpace variableSpace,\n                              FileSystemOptions fileSystemOptions ) {\n    this.fileChooserDialog = fileChooserDialog;\n    this.shell = shell;\n    this.variableSpace = variableSpace;\n    this.fileSystemOptions = fileSystemOptions;\n    this.schemeRestrictions = new String[0];\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri ) throws KettleException,\n    FileSystemException {\n    return browse( fileFilters, fileFilterNames, fileUri, VfsFileChooserDialog.VFS_DIALOG_OPEN_DIRECTORY );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, int fileDialogMode )\n    throws KettleException, FileSystemException {\n    return browse( fileFilters, fileFilterNames, fileUri, fileSystemOptions, fileDialogMode );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, int fileDialogMode,\n      boolean showLocation ) throws KettleException, FileSystemException {\n    return browse( fileFilters, fileFilterNames, fileUri, fileSystemOptions, fileDialogMode, showLocation, true );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, int fileDialogMode,\n      boolean showLocation, boolean showCustomUI ) throws KettleException, FileSystemException {\n    return browse( fileFilters, fileFilterNames, fileUri, fileSystemOptions, fileDialogMode, showLocation, showCustomUI );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, FileSystemOptions opts )\n    throws KettleException, FileSystemException {\n    return browse( fileFilters, fileFilterNames, fileUri, opts, VfsFileChooserDialog.VFS_DIALOG_OPEN_DIRECTORY );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, FileSystemOptions opts,\n      int fileDialogMode ) throws KettleException, FileSystemException {\n    return 
browse( fileFilters, fileFilterNames, fileUri, opts, fileDialogMode, true, true );\n  }\n\n  public FileObject browse( String[] fileFilters, String[] fileFilterNames, String fileUri, FileSystemOptions opts,\n      int fileDialogMode, boolean showLocation, boolean showCustomUI ) throws KettleException, FileSystemException {\n    // Get current file\n    FileObject rootFile = null;\n    FileObject initialFile = null;\n\n    Spoon spoon = Spoon.getInstance();\n\n    if ( fileUri != null ) {\n      initialFile = KettleVFS.getInstance( spoon.getExecutionBowl() ).getFileObject( fileUri, variableSpace, opts );\n    } else {\n      initialFile = KettleVFS.getInstance( spoon.getExecutionBowl() ).getFileObject( spoon.getLastFileOpened() );\n    }\n    rootFile = initialFile.getFileSystem().getRoot();\n    fileChooserDialog.setRootFile( rootFile );\n    fileChooserDialog.setInitialFile( initialFile );\n    fileChooserDialog.defaultInitialFile = rootFile;\n\n    FileObject selectedFile = null;\n    selectedFile = fileChooserDialog.open(\n        shell, this.schemeRestrictions, getDefaultScheme(), showFileScheme(), initialFile.getName().getPath(),\n        fileFilters, fileFilterNames, returnsUserAuthenticatedFileObjects(), fileDialogMode, showLocation, showCustomUI );\n\n    return selectedFile;\n  }\n\n  public VariableSpace getVariableSpace() {\n    return variableSpace;\n  }\n\n  public void setVariableSpace( VariableSpace variableSpace ) {\n    this.variableSpace = variableSpace;\n  }\n\n  public FileSystemOptions getFileSystemOptions() {\n    return fileSystemOptions;\n  }\n\n  public void setFileSystemOptions( FileSystemOptions fileSystemOptions ) {\n    this.fileSystemOptions = fileSystemOptions;\n  }\n\n  public String getDefaultScheme() {\n    return defaultScheme;\n  }\n\n  public void setDefaultScheme( String defaultScheme ) {\n    this.defaultScheme = defaultScheme;\n  }\n\n  public String getSchemeRestriction() {\n    String schemaRestriction = null;\n    if ( 
this.schemeRestrictions != null && this.schemeRestrictions.length > 0 ) {\n      schemaRestriction = this.schemeRestrictions[0];\n    }\n    return schemaRestriction;\n  }\n\n  public void setSchemeRestriction( String schemeRestriction ) {\n    this.schemeRestrictions = new String[1];\n    this.schemeRestrictions[0] = schemeRestriction;\n  }\n\n  public void setSchemeRestrictions( String[] schemeRestrictions ) {\n    this.schemeRestrictions = schemeRestrictions;\n  }\n\n  public boolean showFileScheme() {\n    return this.showFileScheme;\n  }\n\n  public void setShowFileScheme( boolean showFileScheme ) {\n    this.showFileScheme = showFileScheme;\n  }\n\n  protected boolean returnsUserAuthenticatedFileObjects() {\n    return false;\n  }\n\n  public void setNamedCluster( NamedCluster namedCluster ) {\n    VfsFileChooserDialog dialog = Spoon.getInstance().getVfsFileChooserDialog( null, null );\n    for ( CustomVfsUiPanel currentPanel : dialog.getCustomVfsUiPanels() ) {\n      if ( currentPanel != null ) {\n        try {\n          Method setNamedCluster = currentPanel.getClass().getMethod( \"setNamedCluster\", new Class[] { String.class } );\n          setNamedCluster.invoke( currentPanel, namedCluster.getName() );\n        } catch ( NoSuchMethodException e ) {\n          if ( logger.isDebugEnabled() ) {\n            logger.debug( \"Couldn't set named cluster \" + namedCluster.getName() + \" on \" + currentPanel + \" because it doesn't have setNamedCluster method.\", e );\n          }\n        } catch ( InvocationTargetException e ) {\n          if ( logger.isDebugEnabled() ) {\n            logger.debug( \"Couldn't set named cluster \" + namedCluster.getName() + \" on \" + currentPanel + \" because of exception.\", e.getCause() );\n          }\n        } catch ( IllegalAccessException e ) {\n          if ( logger.isDebugEnabled() ) {\n            logger.debug( \"Couldn't set named cluster \" + namedCluster.getName() + \" on \" + currentPanel + \" because 
setNamedCluster method isn't accessible.\", e );\n          }\n        }\n      }\n    }\n  }\n\n  @VisibleForTesting\n    VfsFileChooserDialog getFileChooserDialog() {\n    return fileChooserDialog;\n  }\n\n  @VisibleForTesting\n    Shell getShell() {\n    return shell;\n  }\n\n  @VisibleForTesting\n    String[] getSchemeRestrictions() {\n    return schemeRestrictions;\n  }\n\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/main/resources/META-INF/version.properties",
    "content": "version=${project.version}"
  },
  {
    "path": "legacy-amazon/core/src/main/resources/org/pentaho/amazon/emr/job/messages/messages_en_US.properties",
    "content": "EMRJobExecutorPlugin.Name=Amazon EMR job executor\nEMRJobExecutorPlugin.Description=Execute MapReduce jobs in Amazon EMR \n\n\nJobEntryDialog.Title=Amazon EMR job executor\nJobEntry.Name.Label=Entry name:\n\nAmazonElasticMapReduceJobExecutor.Name.Label=EMR job flow name:\nAmazonElasticMapReduceJobExecutor.AWSAccessKey.Label=Access key:\nAmazonElasticMapReduceJobExecutor.AWSSecretKey.Label=Secret key:\nAmazonElasticMapReduceJobExecutor.AWSSessionToken.Label=Session token (optional):\nAmazonElasticMapReduceJobExecutor.AwsConnection.Group.Label=AWS connection\nAmazonElasticMapReduceJobExecutor.Emr.Tab.Label=EMR settings\nAmazonElasticMapReduceJobExecutor.Job.Tab.Label=Job settings\nAmazonElasticMapReduceJobExecutor.Cluster.Group.Label=Cluster\nAmazonElasticMapReduceJobExecutor.New.Radio.Label=New\nAmazonElasticMapReduceJobExecutor.Existing.Radio.Label=Existing\nAmazonElasticMapReduceJobExecutor.Region.Label=Region:\nAmazonElasticMapReduceJobExecutor.Ec2SubnetId.Label=EC2 subnet:\nAmazonElasticMapReduceJobExecutor.LoadSubnets.Button=Load\nAmazonElasticMapReduceJobExecutor.EmrSettings.Connect=Connect\nAmazonElasticMapReduceJobExecutor.Ec2Role.Label=EC2 role:\nAmazonElasticMapReduceJobExecutor.EmrRole.Label=EMR role:\nAmazonElasticMapReduceJobExecutor.MasterInstanceType.Label=Master instance type:\nAmazonElasticMapReduceJobExecutor.SlaveInstanceType.Label=Slave instance type:\nAmazonElasticMapReduceJobExecutor.EmrRelease.Label=EMR release:\nAmazonElasticMapReduceJobExecutor.NumInstances.Label=Number of instances:\n\nAmazonElasticMapReduceJobExecutor.JobFlowId.Label=Existing JobFlow ID:\nAmazonElasticMapReduceJobExecutor.JarUrl.Label=MapReduce Jar:\nAmazonElasticMapReduceJobExecutor.JarUrl.Browse=Browse...\n\nAmazonElasticMapReduceJobExecutor.S3StagingDir.Label=S3 staging directory:\nAmazonElasticMapReduceJobExecutor.S3StagingDir.Browse=Browse...\nAmazonElasticMapReduceJobExecutor.CommandLineArguments.Label=Command line 
arguments:\n\nAmazonElasticMapReduceJobExecutor.Alive.Label=Keep job flow alive\nAmazonElasticMapReduceJobExecutor.Blocking.Label=Enable blocking\nAmazonElasticMapReduceJobExecutor.Logging.Interval.Label=Logging interval (in seconds):\n\nAmazonElasticMapReduceJobExecutor.ResolvedJar=Using jar path: {0}\nAmazonElasticMapReduceJobExecutor.RunningPercent=Setup Complete: {0} Mapper Completion: {1} Reducer Completion: {2}\nAmazonElasticMapReduceJobExecutor.TaskDetails=[{0}] -- Task: {1}  Attempt: {2}  Event: {3} {4}\nAmazonElasticMapReduceJobExecutor.FailedToOpenLogFile=Unable to open file appender for file [{0}],{1}\nAmazonElasticMapReduceJobExecutor.SimpleMode=Running Hadoop Job in Simple Mode\nAmazonElasticMapReduceJobExecutor.AdvancedMode=Running Hadoop Job in Advanced Mode\n\nAmazonElasticMapReduceJobExecutor.ModeAdvanced.NumMapTasks.Label=Number of Mapper Tasks:\nAmazonElasticMapReduceJobExecutor.ModeAdvanced.NumReduceTasks.Label=Number of Reducer Tasks:\n\nAmazonElasticMapReduceJobExecutor.Error.JarDoesNotExist=Specified jar [{0}] does not exist.\n\nAmazonElasticMapReduceJobExecutor.LoadFromRepository.Error=Unable to load from a repository. The repository is null.\nAmazonElasticMapReduceJobExecutor.SaveToRepository.Error=Unable to save to a repository. The repository is null.\nAmazonElasticMapReduceJobExecutor.JarURL.Error=You must enter a JAR location.\nAmazonElasticMapReduceJobExecutor.Shim.Error=You must have at least one EMR driver installed.\n\n\nDialog.OK=OK\nDialog.Accept=OK\nDialog.Cancel=Cancel\nDialog.Error=Error\nDialog.Help=Help\n"
  },
  {
    "path": "legacy-amazon/core/src/main/resources/org/pentaho/amazon/emr/ui/AmazonElasticMapReduceJobExecutorDialog.xul",
    "content": "<?xml version=\"1.0\"?>\n<?xml-stylesheet href=\"chrome://global/skin/\" type=\"text/css\"?>\n\n<window id=\"amazon-emr-window-wrapper\" onload=\"jobEntryController.init()\">\n\n  <dialog id=\"amazon-emr-job-entry-dialog\"\n          xmlns=\"http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul\"\n          xmlns:pen=\"http://www.pentaho.org/2008/xul\"\n          title=\"${JobEntryDialog.Title}\"\n          resizable=\"true\"\n          height=\"617\"\n          width=\"452\"\n          linuxHeight=\"674\"\n          linuxWidth=\"517\"\n          appicon=\"EMR.png\"\n          buttons=\"\">\n\n    <vbox>\n      <grid>\n        <columns>\n          <column/>\n          <column/>\n          <column/>\n        </columns>\n        <rows>\n          <row>\n            <hbox width=\"5\"></hbox>\n            <vbox flex=\"1\">\n              <label value=\"${JobEntry.Name.Label}\"/>\n              <textbox id=\"jobentry-name\" flex=\"1\" multiline=\"false\" width=\"200\"/>\n            </vbox>\n            <hbox flex=\"1\">\n              <spacer flex=\"1\"/>\n              <image src=\"EMR.png\"/>\n            </hbox>\n          </row>\n        </rows>\n      </grid>\n    </vbox>\n    <vbox height=\"3\"></vbox>\n    <hbox>\n      <hbox width=\"9\"></hbox>\n      <separator padding=\"0\" flex=\"1\" orient=\"HORIZONTAL\"/>\n      <hbox width=\"9\"></hbox>\n    </hbox>\n    <vbox height=\"4\"></vbox>\n    <hbox flex=\"1\">\n      <hbox width=\"9\"></hbox>\n      <tabbox selectedIndex=\"0\" flex=\"1\">\n        <tabs>\n          <tab label=\"${AmazonElasticMapReduceJobExecutor.Emr.Tab.Label}\"/>\n          <tab label=\"${AmazonElasticMapReduceJobExecutor.Job.Tab.Label}\"/>\n        </tabs>\n        <tabpanels flex=\"1\">\n          <tabpanel flex=\"1\">\n            <hbox flex=\"1\">\n              <hbox width=\"1\"></hbox>\n              <vbox padding=\"8\" flex=\"1\">\n                <groupbox>\n                  <caption 
label=\"${AmazonElasticMapReduceJobExecutor.AwsConnection.Group.Label}\"/>\n                  <hbox padding=\"0\" flex=\"1\">\n                    <hbox width=\"5\" padding=\"0\"></hbox>\n                    <grid>\n                      <columns>\n                        <column flex=\"1\"/>\n                      </columns>\n                      <rows>\n                        <row>\n                          <vbox>\n                            <label value=\"${AmazonElasticMapReduceJobExecutor.AWSAccessKey.Label}\"/>\n                            <textbox pen:customclass=\"variabletextbox\" type=\"password\" id=\"access-key\" flex=\"1\"\n                                     multiline=\"false\" width=\"263\"/>\n                          </vbox>\n                        </row>\n                        <row>\n                          <hbox padding=\"0\">\n                            <hbox padding=\"0\"></hbox>\n                            <vbox padding=\"0\">\n                              <label value=\"${AmazonElasticMapReduceJobExecutor.AWSSecretKey.Label}\"/>\n                              <textbox pen:customclass=\"variabletextbox\" type=\"password\" id=\"secret-key\" flex=\"1\"\n                                       multiline=\"false\" width=\"263\"/>\n                              <hbox padding=\"0\"></hbox>\n                            </vbox>\n                          </hbox>\n                        </row>\n                        <row>\n                          <hbox padding=\"0\">\n                            <hbox padding=\"0\"></hbox>\n                            <vbox padding=\"0\">\n                              <label value=\"${AmazonElasticMapReduceJobExecutor.AWSSessionToken.Label}\"/>\n                              <textbox pen:customclass=\"variabletextbox\" type=\"password\" id=\"session-token\" flex=\"1\"\n                                       multiline=\"false\" width=\"263\"/>\n                              <hbox 
padding=\"0\"></hbox>\n                            </vbox>\n                          </hbox>\n                        </row>\n                        <row>\n                          <hbox padding=\"0\">\n                            <hbox padding=\"0\"></hbox>\n                            <vbox padding=\"0\">\n                              <label value=\"${AmazonElasticMapReduceJobExecutor.Region.Label}\"/>\n                              <hbox padding=\"0\">\n                                <menulist id=\"region\" width=\"160\">\n                                  <menupopup>\n                                  </menupopup>\n                                </menulist>\n                                <hbox padding=\"0\" spacing=\"0\" width=\"1\"></hbox>\n                                <button id=\"emr-settings\"\n                                        label=\"${AmazonElasticMapReduceJobExecutor.EmrSettings.Connect}\"\n                                        onclick=\"jobEntryController.getEmrSettings()\" disabled=\"true\"/>\n                              </hbox>\n                            </vbox>\n                          </hbox>\n                        </row>\n                      </rows>\n                    </grid>\n                    <hbox width=\"5\" padding=\"0\"></hbox>\n                  </hbox>\n                  <hbox height=\"8\"></hbox>\n                </groupbox>\n                <hbox height=\"7\"></hbox>\n                <groupbox>\n                  <caption label=\"${AmazonElasticMapReduceJobExecutor.Cluster.Group.Label}\"/>\n                  <hbox padding=\"0\">\n                    <grid>\n                      <columns>\n                        <column flex=\"1\"/>\n                      </columns>\n                      <rows>\n                        <row>\n                          <hbox>\n                            <hbox width=\"3\"></hbox>\n                            <vbox padding=\"0\">\n                              <hbox 
padding=\"0\"></hbox>\n                              <radiogroup id=\"cluster-mode\">\n                                <radio command=\"jobEntryController.updateClusterState()\" id=\"new-cluster\"\n                                       label=\"${AmazonElasticMapReduceJobExecutor.New.Radio.Label}\"\n                                       selected=\"true\"/>\n                                <hbox padding=\"0\" height=\"2\"></hbox>\n                                <radio command=\"jobEntryController.updateClusterState()\" id=\"existing-cluster\"\n                                       label=\"${AmazonElasticMapReduceJobExecutor.Existing.Radio.Label}\"/>\n                              </radiogroup>\n                            </vbox>\n                            <hbox padding=\"0\" spacing=\"0\"></hbox>\n                          </hbox>\n                        </row>\n                      </rows>\n                    </grid>\n                    <vbox padding=\"0\">\n                      <hbox padding=\"0\" height=\"8\"></hbox>\n                      <separator padding=\"0\" flex=\"1\" spacing=\"0\" width=\"2\" orient=\"VERTICAL\"/>\n                      <hbox padding=\"0\" height=\"10\"></hbox>\n                    </vbox>\n                    <deck padding=\"0\" spacing=\"0\" id=\"cluster-tab\" selectedIndex=\"0\">\n                      <grid padding=\"0\" spacing=\"0\">\n                        <columns>\n                          <column/>\n                        </columns>\n                        <rows>\n                          <row>\n                            <vbox flex=\"1\" spacing=\"0\">\n                              <hbox>\n                                <hbox padding=\"0\" width=\"7\"></hbox>\n                                <vbox padding=\"0\">\n                                  <label value=\"${AmazonElasticMapReduceJobExecutor.Ec2Role.Label}\"/>\n                                  <menulist width=\"90\" id=\"ec2-role\" disabled=\"true\">\n  
                                  <menupopup>\n                                    </menupopup>\n                                  </menulist>\n                                  <hbox padding=\"0\"></hbox>\n                                  <label value=\"${AmazonElasticMapReduceJobExecutor.MasterInstanceType.Label}\"/>\n                                  <menulist width=\"90\" id=\"master-instance-type\" disabled=\"true\">\n                                    <menupopup>\n                                    </menupopup>\n                                  </menulist>\n                                  <hbox padding=\"0\"></hbox>\n                                  <label value=\"${AmazonElasticMapReduceJobExecutor.EmrRelease.Label}\"/>\n                                  <menulist width=\"90\" id=\"emr-release\" editable=\"true\" disabled=\"true\">\n                                    <menupopup>\n                                    </menupopup>\n                                  </menulist>\n                                  <hbox padding=\"0\"></hbox>\n                                  <label value=\"${AmazonElasticMapReduceJobExecutor.Ec2SubnetId.Label}\"/>\n                                  <menulist width=\"90\" id=\"ec2-subnet-id\" editable=\"true\" disabled=\"true\">\n                                    <menupopup>\n                                    </menupopup>\n                                  </menulist>\n                                  <vbox padding=\"0\" spacing=\"0\"></vbox>\n                                </vbox>\n                                <hbox padding=\"0\" width=\"11\"></hbox>\n                                <vbox padding=\"0\">\n                                  <label value=\"${AmazonElasticMapReduceJobExecutor.EmrRole.Label}\"/>\n                                  <menulist id=\"emr-role\" disabled=\"true\">\n                                    <menupopup>\n                                    </menupopup>\n                               
   </menulist>\n                                  <hbox padding=\"0\"></hbox>\n                                  <label value=\"${AmazonElasticMapReduceJobExecutor.SlaveInstanceType.Label}\"/>\n                                  <menulist id=\"slave-instance-type\" disabled=\"true\">\n                                    <menupopup>\n                                    </menupopup>\n                                  </menulist>\n                                  <hbox padding=\"0\"></hbox>\n                                  <hbox padding=\"0\">\n                                    <label value=\"${AmazonElasticMapReduceJobExecutor.NumInstances.Label}\"/>\n                                  </hbox>\n                                  <hbox padding=\"0\">\n                                    <textbox pen:customclass=\"variabletextbox\" id=\"num-instances\" multiline=\"false\"\n                                             width=\"48\"\n                                             disabled=\"true\"/>\n                                  </hbox>\n                                </vbox>\n                                <hbox padding=\"0\" width=\"5\"></hbox>\n                              </hbox>\n                              <vbox padding=\"0\" height=\"6\"></vbox>\n                            </vbox>\n                          </row>\n                        </rows>\n                      </grid>\n                      <grid padding=\"0\" spacing=\"0\">\n                        <columns>\n                          <column/>\n                        </columns>\n                        <rows>\n                          <row>\n                            <hbox>\n                              <hbox width=\"7\"></hbox>\n                              <vbox>\n                                <label value=\"${AmazonElasticMapReduceJobExecutor.JobFlowId.Label}\"/>\n                                <textbox pen:customclass=\"variabletextbox\" id=\"jobentry-hadoopjob-flow-id\"\n         
                                multiline=\"false\" width=\"263\"/>\n                              </vbox>\n                            </hbox>\n                          </row>\n                        </rows>\n                      </grid>\n                    </deck>\n                  </hbox>\n                </groupbox>\n              </vbox>\n              <hbox width=\"1\"></hbox>\n            </hbox>\n          </tabpanel>\n          <tabpanel flex=\"1\">\n            <vbox padding=\"5\" flex=\"1\">\n              <grid>\n                <columns>\n                  <column flex=\"1\"/>\n                </columns>\n                <rows>\n                  <row>\n                    <hbox padding=\"0\">\n                      <hbox padding=\"0\" width=\"4\"></hbox>\n                      <vbox padding=\"0\">\n                        <label value=\"${AmazonElasticMapReduceJobExecutor.Name.Label}\"/>\n                        <textbox pen:customclass=\"variabletextbox\" id=\"jobentry-hadoopjob-name\" flex=\"1\"\n                                 multiline=\"false\"\n                                 width=\"263\"/>\n                      </vbox>\n                    </hbox>\n                  </row>\n                  <row>\n                    <hbox>\n                      <hbox padding=\"0\" width=\"2\"></hbox>\n                      <vbox padding=\"0\">\n                        <label value=\"${AmazonElasticMapReduceJobExecutor.S3StagingDir.Label}\"/>\n                        <hbox padding=\"0\">\n                          <textbox pen:customclass=\"variabletextbox\" id=\"s3-staging-directory\" flex=\"1\" width=\"263\"\n                                   multiline=\"false\"/>\n                          <spacer/>\n                          <button id=\"browseS3Stage\" label=\"${AmazonElasticMapReduceJobExecutor.S3StagingDir.Browse}\"\n                                  onclick=\"jobEntryController.browseS3StagingDir()\"/>\n                        </hbox>\n      
                </vbox>\n                    </hbox>\n                  </row>\n                  <row>\n                    <hbox padding=\"0\">\n                      <hbox padding=\"0\" width=\"4\"></hbox>\n                      <vbox padding=\"0\">\n                        <label value=\"${AmazonElasticMapReduceJobExecutor.JarUrl.Label}\"/>\n                        <hbox padding=\"0\">\n                          <textbox pen:customclass=\"variabletextbox\" id=\"jar-url\" flex=\"1\" width=\"263\"\n                                   multiline=\"false\"/>\n                          <spacer/>\n                          <button id=\"browseJarUrl\" label=\"${AmazonElasticMapReduceJobExecutor.JarUrl.Browse}\"\n                                  onclick=\"jobEntryController.browseJar()\"/>\n                        </hbox>\n                      </vbox>\n                    </hbox>\n                  </row>\n                  <row>\n                    <hbox>\n                      <hbox padding=\"0\" width=\"2\"></hbox>\n                      <vbox padding=\"0\" spacing=\"0\">\n                        <label value=\"${AmazonElasticMapReduceJobExecutor.CommandLineArguments.Label}\"/>\n                        <hbox padding=\"0\" height=\"2\"></hbox>\n                        <textbox pen:customclass=\"variabletextbox\" id=\"command-line-arguments\" height=\"39\"\n                                 multiline=\"true\" width=\"339\"/>\n                        <hbox padding=\"0\"></hbox>\n                      </vbox>\n                    </hbox>\n                  </row>\n                  <row>\n                    <grid spacing=\"0\">\n                      <columns>\n                        <column flex=\"1\"/>\n                      </columns>\n                      <rows>\n                        <row>\n                          <vbox padding=\"0\">\n                            <hbox padding=\"0\">\n                              <hbox padding=\"0\" width=\"2\"></hbox>\n    
                          <hbox padding=\"0\">\n                                <checkbox id=\"alive\" flex=\"1\" command=\"jobEntryController.invertAlive()\"/>\n                                <label value=\"${AmazonElasticMapReduceJobExecutor.Alive.Label}\"/>\n                              </hbox>\n                            </hbox>\n                            <hbox padding=\"0\"></hbox>\n                          </vbox>\n                        </row>\n                        <row>\n                          <hbox padding=\"0\">\n                            <hbox padding=\"0\" width=\"2\"></hbox>\n                            <hbox padding=\"0\">\n                              <checkbox id=\"blocking\" flex=\"1\" command=\"jobEntryController.invertBlocking()\"/>\n                              <label value=\"${AmazonElasticMapReduceJobExecutor.Blocking.Label}\"/>\n                            </hbox>\n                          </hbox>\n                        </row>\n                        <row>\n                          <vbox padding=\"0\">\n                            <hbox padding=\"0\">\n                              <hbox padding=\"0\" width=\"2\"></hbox>\n                              <label value=\"${AmazonElasticMapReduceJobExecutor.Logging.Interval.Label}\"/>\n                            </hbox>\n                            <hbox padding=\"0\"></hbox>\n                          </vbox>\n                        </row>\n                        <row>\n                          <hbox padding=\"0\">\n                            <hbox padding=\"0\" width=\"2\"></hbox>\n                            <textbox pen:customclass=\"variabletextbox\" type=\"numeric\" id=\"logging-interval\" width=\"48\"\n                                     multiline=\"false\"/>\n                          </hbox>\n                        </row>\n                      </rows>\n                    </grid>\n                  </row>\n                </rows>\n              </grid>\n       
     </vbox>\n          </tabpanel>\n        </tabpanels>\n      </tabbox>\n      <hbox width=\"9\"></hbox>\n    </hbox>\n    <vbox height=\"7\"></vbox>\n    <hbox>\n      <hbox width=\"9\"></hbox>\n      <separator padding=\"0\" flex=\"1\" orient=\"HORIZONTAL\"/>\n      <hbox width=\"9\"></hbox>\n    </hbox>\n    <vbox height=\"6\"></vbox>\n    <hbox padding=\"0\">\n      <hbox width=\"11\"></hbox>\n      <button label=\"${Dialog.Help}\" image=\"help_web.png\" onclick=\"jobEntryController.help()\"/>\n      <spacer flex=\"1\"/>\n      <button label=\"${Dialog.Accept}\" width=\"75\" onclick=\"jobEntryController.accept()\"/>\n      <hbox width=\"1\"></hbox>\n      <button label=\"${Dialog.Cancel}\" width=\"75\" onclick=\"jobEntryController.cancel()\"/>\n      <hbox width=\"11\"></hbox>\n    </hbox>\n    <vbox padding=\"0\" height=\"11\"></vbox>\n  </dialog>\n\n  <!--  ###############################################################################   -->\n  <!--     ERROR DIALOG: Dialog to display error text                                     -->\n  <!--  ###############################################################################   -->\n  <dialog id=\"amazon-emr-error-dialog\" title=\"${Dialog.Error}\" buttonlabelaccept=\"${Dialog.OK}\" buttons=\"accept\"\n          ondialogaccept=\"jobEntryController.closeErrorDialog()\" width=\"600\" height=\"300\" buttonalign=\"center\">\n    <textbox id=\"amazon-emr-error-message\" value=\"${errorDialog.errorOccurred}\" multiline=\"true\" readonly=\"true\"\n             flex=\"1\"/>\n  </dialog>\n</window>"
  },
  {
    "path": "legacy-amazon/core/src/main/resources/org/pentaho/amazon/hive/job/messages/messages_en_US.properties",
    "content": "HiveJobExecutorPlugin.Name=Amazon Hive job executor\nHiveJobExecutorPlugin.Description=Execute Hive jobs in Amazon EMR\n\n\nJobEntryDialog.Title=Amazon Hive job executor\nJobEntry.Name.Label=Entry name:\n\n\nAmazonHiveJobExecutor.Name.Label=Hive job flow name:\nAmazonHiveJobExecutor.JobFlowId.Label=Existing JobFlow ID:\nAmazonHiveJobExecutor.AWSAccessKey.Label=Access key:\nAmazonHiveJobExecutor.AWSSecretKey.Label=Secret key:\nAmazonHiveJobExecutor.AWSSessionToken.Label=Session token (optional):\nAmazonHiveJobExecutor.AwsConnection.Group.Label=AWS connection\nAmazonHiveJobExecutor.Hive.Tab.Label=Hive settings\nAmazonHiveJobExecutor.Job.Tab.Label=Job settings\nAmazonHiveJobExecutor.Cluster.Group.Label=Cluster\nAmazonHiveJobExecutor.New.Radio.Label=New\nAmazonHiveJobExecutor.Existing.Radio.Label=Existing\nAmazonHiveJobExecutor.Region.Label=Region:\nAmazonHiveJobExecutor.Ec2SubnetId.Label=EC2 subnet:\nAmazonHiveJobExecutor.LoadSubnets.Button=Load\nAmazonHiveJobExecutor.EmrSettings.Connect=Connect\nAmazonHiveJobExecutor.Ec2Role.Label=EC2 role:\nAmazonHiveJobExecutor.EmrRole.Label=EMR role:\nAmazonHiveJobExecutor.MasterInstanceType.Label=Master instance type:\nAmazonHiveJobExecutor.SlaveInstanceType.Label=Slave instance type:\nAmazonHiveJobExecutor.EmrRelease.Label=EMR release:\nAmazonHiveJobExecutor.NumInstances.Label=Number of instances:\n\nAmazonHiveJobExecutor.S3StagingDir.Label=S3 staging directory:\nAmazonHiveJobExecutor.S3StagingDir.Browse=Browse...\nAmazonHiveJobExecutor.CommandLineArguments.Label=Command line arguments:\n\nAmazonHiveJobExecutor.Alive.Label=Keep job flow alive\nAmazonHiveJobExecutor.Blocking.Label=Enable blocking\nAmazonHiveJobExecutor.Logging.Interval.Label=Logging interval (in seconds):\n\n\nAmazonHiveJobExecutor.QUrl.Label=Hive script:\nAmazonHiveJobExecutor.QUrl.Browse=Browse...\nAmazonHiveJobExecutor.BootstrapActions.Label=Bootstrap actions:\n\n\nAmazonHiveJobExecutor.QURL.Error=You must enter a Hive script file 
location.\n\n\nAmazonHiveJobExecutor.FailedToOpenLogFile=Unable to open file appender for file [{0}],{1}\n\n\n\nAmazonHiveJobExecutor.JobEntryName.Error=You must enter a name for this job entry.\nAmazonHiveJobExecutor.JobFlowName.Error=You must enter an Amazon Hive job flow name.\nAmazonHiveJobExecutor.JarURL.Error=You must enter a Jar location.\n\nAmazonHiveJobExecutor.AccessKey.Error=You must enter an Access key.\nAmazonHiveJobExecutor.SecretKey.Error=You must enter a Secret key.\nAmazonHiveJobExecutor.StagingDir.Error=You must enter a valid S3 Log directory.\nAmazonHiveJobExecutor.NumInstances.Error=The instance count must be at least 1.\nAmazonHiveJobExecutor.MasterInstanceType.Error=Master instance type is invalid or undefined.\nAmazonHiveJobExecutor.SlaveInstanceType.Error=Slave instance type is invalid or undefined.\nAmazonHiveJobExecutor.HiveScriptFilename.Error=Hive script filename must start with \"file:///\". Please use the browser to select the file:\nAmazonHiveJobExecutor.LoggingInterval.Error=Unable to parse logging interval \"{0}\" - using default of 10...\nAmazonHiveJobExecutor.InstanceNumber.Error=Unable to parse number of instances to use \"{0}\" - using 2 instances...\nAmazonHiveJobExecutor.BootstrapActionArgument.Error=Argument does not end with a double quote: {0} {1}\nAmazonHiveJobExecutor.BootstrapActionPath.Error=s3:// path expected for bootstrap action: {0} {1}\nAmazonHiveJobExecutor.LoadFromRepository.Error=Unable to load from a repository. The repository is null.\nAmazonHiveJobExecutor.SaveToRepository.Error=Unable to save to a repository. The repository is null.\n\nDialog.OK=OK\nDialog.Accept=OK\nDialog.Cancel=Cancel\nDialog.Error=Error\nDialog.Help=Help\n"
  },
  {
    "path": "legacy-amazon/core/src/main/resources/org/pentaho/amazon/hive/ui/AmazonHiveJobExecutorDialog.xul",
    "content": "<?xml version=\"1.0\"?>\n<?xml-stylesheet href=\"chrome://global/skin/\" type=\"text/css\"?>\n\n<window id=\"amazon-emr-window-wrapper\" onload=\"jobEntryController.init()\">\n\n  <dialog id=\"amazon-emr-job-entry-dialog\"\n          xmlns=\"http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul\"\n          xmlns:pen=\"http://www.pentaho.org/2008/xul\"\n          title=\"${JobEntryDialog.Title}\"\n          resizable=\"true\"\n          height=\"682\"\n          width=\"465\"\n          linuxHeight=\"742\"\n          linuxWidth=\"514\"\n          appicon=\"AWS-HIVE.png\"\n          buttons=\"\">\n\n    <vbox>\n      <grid>\n        <columns>\n          <column/>\n          <column/>\n          <column/>\n        </columns>\n        <rows>\n          <row>\n            <hbox width=\"5\"></hbox>\n            <vbox flex=\"1\">\n              <label value=\"${JobEntry.Name.Label}\"/>\n              <textbox id=\"jobentry-name\" flex=\"1\" multiline=\"false\" width=\"200\"/>\n            </vbox>\n            <hbox flex=\"1\">\n              <spacer flex=\"1\"/>\n              <image src=\"AWS-HIVE.png\"/>\n            </hbox>\n          </row>\n        </rows>\n      </grid>\n    </vbox>\n    <vbox height=\"3\"></vbox>\n    <hbox>\n      <hbox width=\"9\"></hbox>\n      <separator padding=\"0\" flex=\"1\" orient=\"HORIZONTAL\"/>\n      <hbox width=\"9\"></hbox>\n    </hbox>\n    <vbox height=\"4\"></vbox>\n    <hbox flex=\"1\">\n      <hbox width=\"9\"></hbox>\n      <tabbox selectedIndex=\"0\" flex=\"1\">\n        <tabs>\n          <tab label=\"${AmazonHiveJobExecutor.Hive.Tab.Label}\"/>\n          <tab label=\"${AmazonHiveJobExecutor.Job.Tab.Label}\"/>\n        </tabs>\n        <tabpanels flex=\"1\">\n          <tabpanel flex=\"1\">\n            <hbox flex=\"1\">\n              <hbox width=\"1\"></hbox>\n              <vbox padding=\"8\" flex=\"1\">\n                <groupbox>\n                  <caption 
label=\"${AmazonHiveJobExecutor.AwsConnection.Group.Label}\"/>\n                  <hbox padding=\"0\" flex=\"1\">\n                    <hbox width=\"5\" padding=\"0\"></hbox>\n                    <grid>\n                      <columns>\n                        <column flex=\"1\"/>\n                      </columns>\n                      <rows>\n                        <row>\n                          <vbox>\n                            <label value=\"${AmazonHiveJobExecutor.AWSAccessKey.Label}\"/>\n                            <textbox pen:customclass=\"variabletextbox\" type=\"password\" id=\"access-key\" flex=\"1\"\n                                     multiline=\"false\" width=\"263\"/>\n                          </vbox>\n                        </row>\n                        <row>\n                          <hbox padding=\"0\">\n                            <hbox padding=\"0\"></hbox>\n                            <vbox padding=\"0\">\n                              <label value=\"${AmazonHiveJobExecutor.AWSSecretKey.Label}\"/>\n                              <textbox pen:customclass=\"variabletextbox\" type=\"password\" id=\"secret-key\" flex=\"1\"\n                                       multiline=\"false\" width=\"263\"/>\n                              <hbox padding=\"0\"></hbox>\n                            </vbox>\n                          </hbox>\n                        </row>\n                        <row>\n                          <hbox padding=\"0\">\n                            <hbox padding=\"0\"></hbox>\n                            <vbox padding=\"0\">\n                              <label value=\"${AmazonHiveJobExecutor.AWSSessionToken.Label}\"/>\n                              <textbox pen:customclass=\"variabletextbox\" type=\"password\" id=\"session-token\" flex=\"1\"\n                                       multiline=\"false\" width=\"263\"/>\n                              <hbox padding=\"0\"></hbox>\n                            </vbox>\n           
               </hbox>\n                        </row>\n                        <row>\n                          <hbox padding=\"0\">\n                            <hbox padding=\"0\"></hbox>\n                            <vbox padding=\"0\">\n                              <label value=\"${AmazonHiveJobExecutor.Region.Label}\"/>\n                              <hbox padding=\"0\">\n                                <menulist id=\"region\" width=\"160\">\n                                  <menupopup>\n                                  </menupopup>\n                                </menulist>\n                                <hbox padding=\"0\" spacing=\"0\" width=\"1\"></hbox>\n                                <button id=\"emr-settings\" label=\"${AmazonHiveJobExecutor.EmrSettings.Connect}\"\n                                        onclick=\"jobEntryController.getEmrSettings()\" disabled=\"true\"/>\n                              </hbox>\n                            </vbox>\n                          </hbox>\n                        </row>\n                      </rows>\n                    </grid>\n                    <hbox width=\"5\" padding=\"0\"></hbox>\n                  </hbox>\n                  <hbox height=\"8\"></hbox>\n                </groupbox>\n                <hbox height=\"7\"></hbox>\n                <groupbox>\n                  <caption label=\"${AmazonHiveJobExecutor.Cluster.Group.Label}\"/>\n                  <hbox padding=\"0\">\n                    <grid>\n                      <columns>\n                        <column flex=\"1\"/>\n                      </columns>\n                      <rows>\n                        <row>\n                          <hbox>\n                            <hbox width=\"3\"></hbox>\n                            <vbox padding=\"0\">\n                              <hbox padding=\"0\"></hbox>\n                              <radiogroup id=\"cluster-mode\">\n                                <radio 
command=\"jobEntryController.updateClusterState()\" id=\"new-cluster\"\n                                       label=\"${AmazonHiveJobExecutor.New.Radio.Label}\"\n                                       selected=\"true\"/>\n                                <hbox padding=\"0\" height=\"2\"></hbox>\n                                <radio command=\"jobEntryController.updateClusterState()\" id=\"existing-cluster\"\n                                       label=\"${AmazonHiveJobExecutor.Existing.Radio.Label}\"/>\n                              </radiogroup>\n                            </vbox>\n                            <hbox padding=\"0\" spacing=\"0\"></hbox>\n                          </hbox>\n                        </row>\n                      </rows>\n                    </grid>\n                    <vbox padding=\"0\">\n                      <hbox padding=\"0\" height=\"8\"></hbox>\n                      <separator padding=\"0\" flex=\"1\" spacing=\"0\" orient=\"VERTICAL\"/>\n                      <hbox padding=\"0\" height=\"10\"></hbox>\n                    </vbox>\n                    <deck padding=\"0\" spacing=\"0\" id=\"cluster-tab\" selectedIndex=\"0\">\n                      <grid padding=\"0\" spacing=\"0\">\n                        <columns>\n                          <column/>\n                        </columns>\n                        <rows>\n                          <row>\n                            <vbox flex=\"1\" spacing=\"0\">\n                              <hbox>\n                                <hbox padding=\"0\" width=\"7\"></hbox>\n                                <vbox padding=\"0\">\n                                  <label value=\"${AmazonHiveJobExecutor.Ec2Role.Label}\"/>\n                                  <menulist width=\"90\" id=\"ec2-role\" disabled=\"true\">\n                                    <menupopup>\n                                    </menupopup>\n                                  </menulist>\n                              
    <hbox padding=\"0\"></hbox>\n                                  <label value=\"${AmazonHiveJobExecutor.MasterInstanceType.Label}\"/>\n                                  <menulist width=\"90\" id=\"master-instance-type\" disabled=\"true\">\n                                    <menupopup>\n                                    </menupopup>\n                                  </menulist>\n                                  <hbox padding=\"0\"></hbox>\n                                  <label value=\"${AmazonHiveJobExecutor.EmrRelease.Label}\"/>\n                                  <menulist width=\"90\" id=\"emr-release\" editable=\"true\" disabled=\"true\">\n                                    <menupopup>\n                                    </menupopup>\n                                  </menulist>\n                                  <hbox padding=\"0\"></hbox>\n                                  <label value=\"${AmazonHiveJobExecutor.Ec2SubnetId.Label}\"/>\n                                  <menulist width=\"90\" id=\"ec2-subnet-id\" editable=\"true\" disabled=\"true\">\n                                    <menupopup>\n                                    </menupopup>\n                                  </menulist>\n                                  <vbox padding=\"0\" spacing=\"0\"></vbox>\n                                </vbox>\n                                <hbox padding=\"0\" width=\"11\"></hbox>\n                                <vbox padding=\"0\">\n                                  <label value=\"${AmazonHiveJobExecutor.EmrRole.Label}\"/>\n                                  <menulist id=\"emr-role\" disabled=\"true\">\n                                    <menupopup>\n                                    </menupopup>\n                                  </menulist>\n                                  <hbox padding=\"0\"></hbox>\n                                  <label value=\"${AmazonHiveJobExecutor.SlaveInstanceType.Label}\"/>\n                                  
<menulist id=\"slave-instance-type\" disabled=\"true\">\n                                    <menupopup>\n                                    </menupopup>\n                                  </menulist>\n                                  <hbox padding=\"0\"></hbox>\n                                  <hbox padding=\"0\">\n                                    <label value=\"${AmazonHiveJobExecutor.NumInstances.Label}\"/>\n                                  </hbox>\n                                  <hbox padding=\"0\">\n                                    <textbox pen:customclass=\"variabletextbox\" id=\"num-instances\" multiline=\"false\"\n                                             width=\"48\"\n                                             disabled=\"true\"/>\n                                  </hbox>\n                                </vbox>\n                                <hbox padding=\"0\" width=\"5\"></hbox>\n                              </hbox>\n                              <hbox flex=\"1\" padding=\"0\" spacing=\"0\">\n                                <hbox padding=\"0\" width=\"11\"></hbox>\n                                <vbox flex=\"1\" padding=\"0\" spacing=\"0\">\n                                  <label value=\"${AmazonHiveJobExecutor.BootstrapActions.Label}\"/>\n                                  <hbox padding=\"0\" height=\"2\"></hbox>\n                                  <textbox pen:customclass=\"variabletextbox\" id=\"bootstrap-actions\" height=\"39\"\n                                           width=\"265\" multiline=\"true\"/>\n                                </vbox>\n                                <hbox padding=\"0\" width=\"8\"></hbox>\n                              </hbox>\n                              <vbox padding=\"0\" height=\"10\"></vbox>\n                            </vbox>\n                          </row>\n                        </rows>\n                      </grid>\n                      <grid padding=\"0\" spacing=\"0\">\n          
              <columns>\n                          <column/>\n                        </columns>\n                        <rows>\n                          <row>\n                            <hbox>\n                              <hbox width=\"7\"></hbox>\n                              <vbox>\n                                <label value=\"${AmazonHiveJobExecutor.JobFlowId.Label}\"/>\n                                <textbox pen:customclass=\"variabletextbox\" id=\"jobentry-hadoopjob-flow-id\"\n                                         multiline=\"false\" width=\"263\"/>\n                              </vbox>\n                            </hbox>\n                          </row>\n                        </rows>\n                      </grid>\n                    </deck>\n                  </hbox>\n                </groupbox>\n              </vbox>\n              <hbox width=\"1\"></hbox>\n            </hbox>\n          </tabpanel>\n          <tabpanel flex=\"1\">\n            <vbox padding=\"5\" flex=\"1\">\n              <grid>\n                <columns>\n                  <column flex=\"1\"/>\n                </columns>\n                <rows>\n                  <row>\n                    <hbox padding=\"0\">\n                      <hbox padding=\"0\" width=\"4\"></hbox>\n                      <vbox padding=\"0\">\n                        <label value=\"${AmazonHiveJobExecutor.Name.Label}\"/>\n                        <textbox pen:customclass=\"variabletextbox\" id=\"jobentry-hadoopjob-name\" flex=\"1\"\n                                 multiline=\"false\"\n                                 width=\"263\"/>\n                      </vbox>\n                    </hbox>\n                  </row>\n                  <row>\n                    <hbox>\n                      <hbox padding=\"0\" width=\"2\"></hbox>\n                      <vbox padding=\"0\">\n                        <label value=\"${AmazonHiveJobExecutor.S3StagingDir.Label}\"/>\n                        <hbox 
padding=\"0\">\n                          <textbox pen:customclass=\"variabletextbox\" id=\"s3-staging-directory\" flex=\"1\" width=\"263\"\n                                   multiline=\"false\"/>\n                          <spacer/>\n                          <button id=\"browseS3Stage\" label=\"${AmazonHiveJobExecutor.S3StagingDir.Browse}\"\n                                  onclick=\"jobEntryController.browseS3StagingDir()\"/>\n                        </hbox>\n                      </vbox>\n                    </hbox>\n                  </row>\n                  <row>\n                    <hbox padding=\"0\">\n                      <hbox padding=\"0\" width=\"4\"></hbox>\n                      <vbox padding=\"0\">\n                        <label value=\"${AmazonHiveJobExecutor.QUrl.Label}\"/>\n                        <hbox padding=\"0\">\n                          <textbox pen:customclass=\"variabletextbox\" id=\"q-url\" flex=\"1\" width=\"263\" multiline=\"false\"/>\n                          <spacer/>\n                          <button id=\"browseQUrl\" label=\"${AmazonHiveJobExecutor.QUrl.Browse}\"\n                                  onclick=\"jobEntryController.browseQ()\"/>\n                        </hbox>\n                      </vbox>\n                    </hbox>\n                  </row>\n                  <row>\n                    <hbox>\n                      <hbox padding=\"0\" width=\"2\"></hbox>\n                      <vbox padding=\"0\" spacing=\"0\">\n                        <label value=\"${AmazonHiveJobExecutor.CommandLineArguments.Label}\"/>\n                        <hbox padding=\"0\" height=\"2\"></hbox>\n                        <textbox pen:customclass=\"variabletextbox\" id=\"command-line-arguments\" height=\"39\"\n                                 multiline=\"true\" width=\"339\"/>\n                        <hbox padding=\"0\"></hbox>\n                      </vbox>\n                    </hbox>\n                  </row>\n                  
<row>\n                    <grid spacing=\"0\">\n                      <columns>\n                        <column flex=\"1\"/>\n                      </columns>\n                      <rows>\n                        <row>\n                          <vbox padding=\"0\">\n                            <hbox padding=\"0\">\n                              <hbox padding=\"0\" width=\"2\"></hbox>\n                              <hbox padding=\"0\">\n                                <checkbox id=\"alive\" flex=\"1\" command=\"jobEntryController.invertAlive()\"/>\n                                <label value=\"${AmazonHiveJobExecutor.Alive.Label}\"/>\n                              </hbox>\n                            </hbox>\n                            <hbox padding=\"0\"></hbox>\n                          </vbox>\n                        </row>\n                        <row>\n                          <hbox padding=\"0\">\n                            <hbox padding=\"0\" width=\"2\"></hbox>\n                            <hbox padding=\"0\">\n                              <checkbox id=\"blocking\" flex=\"1\" command=\"jobEntryController.invertBlocking()\"/>\n                              <label value=\"${AmazonHiveJobExecutor.Blocking.Label}\"/>\n                            </hbox>\n                          </hbox>\n                        </row>\n                        <row>\n                          <vbox padding=\"0\">\n                            <hbox padding=\"0\">\n                              <hbox padding=\"0\" width=\"2\"></hbox>\n                              <label value=\"${AmazonHiveJobExecutor.Logging.Interval.Label}\"/>\n                            </hbox>\n                            <hbox padding=\"0\"></hbox>\n                          </vbox>\n                        </row>\n                        <row>\n                          <hbox padding=\"0\">\n                            <hbox padding=\"0\" width=\"2\"></hbox>\n                            
<textbox pen:customclass=\"variabletextbox\" type=\"numeric\" id=\"logging-interval\" width=\"48\"\n                                     multiline=\"false\"/>\n                          </hbox>\n                        </row>\n                      </rows>\n                    </grid>\n                  </row>\n                </rows>\n              </grid>\n            </vbox>\n          </tabpanel>\n        </tabpanels>\n      </tabbox>\n      <hbox width=\"9\"></hbox>\n    </hbox>\n    <vbox height=\"7\"></vbox>\n    <hbox>\n      <hbox width=\"9\"></hbox>\n      <separator padding=\"0\" flex=\"1\" orient=\"HORIZONTAL\"/>\n      <hbox width=\"9\"></hbox>\n    </hbox>\n    <vbox height=\"6\"></vbox>\n    <hbox padding=\"0\">\n      <hbox width=\"11\"></hbox>\n      <button label=\"${Dialog.Help}\" image=\"help_web.png\" onclick=\"jobEntryController.help()\"/>\n      <spacer flex=\"1\"/>\n      <button label=\"${Dialog.Accept}\" width=\"75\" onclick=\"jobEntryController.accept()\"/>\n      <hbox width=\"1\"></hbox>\n      <button label=\"${Dialog.Cancel}\" width=\"75\" onclick=\"jobEntryController.cancel()\"/>\n      <hbox width=\"11\"></hbox>\n    </hbox>\n    <vbox padding=\"0\" height=\"11\"></vbox>\n  </dialog>\n\n  <!--  ###############################################################################   -->\n  <!--     ERROR DIALOG: Dialog to display error text                                     -->\n  <!--  ###############################################################################   -->\n  <dialog id=\"amazon-emr-error-dialog\" title=\"${Dialog.Error}\" buttonlabelaccept=\"${Dialog.OK}\" buttons=\"accept\"\n          ondialogaccept=\"jobEntryController.closeErrorDialog()\" width=\"600\" height=\"300\" buttonalign=\"center\">\n    <textbox id=\"amazon-emr-error-message\" value=\"${errorDialog.errorOccurred}\" multiline=\"true\" readonly=\"true\"\n             flex=\"1\"/>\n  </dialog>\n</window>"
  },
  {
    "path": "legacy-amazon/core/src/main/resources/org/pentaho/amazon/messages/messages_en_US.properties",
    "content": "S3SpoonPlugin.StartupError.FailedToLoadS3Driver=Failed to load S3 VFS driver\nS3VfsFileChooserDialog.openFile=Open File\nS3VfsFileChooserDialog.SaveAs=Save as\nS3VfsFileChooserDialog.FileSystemChoice.Label=Look in\nS3VfsFileChooserDialog.FileSystemChoice.S3.Label=S3\nS3NVfsFileChooserDialog.FileSystemChoice.S3.Label=S3N\nS3AVfsFileChooserDialog.FileSystemChoice.S3.Label=S3A\nS3VfsFileChooserDialog.FileSystemChoice.Local.Label=Local\nS3VfsFileChooserDialog.ConnectionGroup.Label=Connection\nS3VfsFileChooserDialog.AccessKey.Label=Access Key:\nS3VfsFileChooserDialog.SecretKey.Label=Secret Key:\nS3VfsFileChooserDialog.Bucket.Label=Bucket:\nS3VfsFileChooserDialog.ConnectionButton.Label=Connect\nS3VfsFileChooserDialog.warning=Warning\nS3VfsFileChooserDialog.noWriteSupport=This file system does not support write operations.\nS3VfsFileChooserDialog.error=Error\nS3VfsFileChooserDialog.FileSystem.error=A file system error occurred. See log for details.\nS3VfsFileChooserDialog.Connection.error=Unable to connect to S3 server.\nS3FileOutputDialog.DialogTitle=S3 file output\nS3FileOutputDialog.GetCredentialsAtRuntime.Label=Get credentials at runtime?\nS3FileOutputDialog.GetCredentialsAtRuntime.Tooltip=Use the AWS default provider chain to retrieve credentials at runtime.\nS3FileOutputDialog.FileBrowser.FactoryNotAvailable=S3 VFS Browser cannot be loaded, factory unavailable.\nS3FileOutputDialog.FileBrowser.KettleFileException=Kettle File Exception\nS3FileOutputDialog.FileBrowser.FileSystemException=File System Exception\nS3FileSystemDisplayText=S3\n\nAbstractAmazonJobExecutorController.JobEntryName.Error=You must enter a name for this job entry.\nAbstractAmazonJobExecutorController.JobFlowName.Error=You must enter a Job flow name.\nAbstractAmazonJobExecutorController.JobFlowId.Error=You must enter an existing JobFlow ID.\nAbstractAmazonJobExecutorController.AccessKey.Error=You must enter an Access key.\nAbstractAmazonJobExecutorController.SecretKey.Error=You must 
enter a Secret key.\nAbstractAmazonJobExecutorController.Region.Error=Region is invalid or undefined.\nAbstractAmazonJobExecutorController.Ec2Role.Error=EC2 role is invalid or undefined.\nAbstractAmazonJobExecutorController.EmrRole.Error=EMR role is invalid or undefined.\nAbstractAmazonJobExecutorController.MasterInstanceType.Error=Master instance type is invalid or undefined.\nAbstractAmazonJobExecutorController.SlaveInstanceType.Error=Slave instance type is invalid or undefined.\nAbstractAmazonJobExecutorController.EmrRelease.Error=You must enter an EMR release.\nAbstractAmazonJobExecutorController.StagingDir.Error=You must enter a valid S3 staging directory.\nAbstractAmazonJobExecutorController.NumInstances.Error=The instance count must be at least 1.\n\nAbstractAmazonJobExecutor.FailedToOpenLogFile=Unable to open file appender for file [{0}],{1}\nAbstractAmazonJobExecutor.InstanceNumber.Error=Unable to parse number of instances to use \"{0}\"\nAbstractAmazonJobExecutor.LoggingInterval.Error=Unable to parse logging interval \"{0}\" - using default of 10...\nAbstractAmazonJobExecutor.JobFlowExecutionStatus=(JobFlow ID: {0}) cluster status:\nAbstractAmazonJobExecutor.JobFlowStepStatus=(Step ID: {0}) step status:\n\nAbstractAmazonJobExecutorController.JobEntry.Connection.error.title=Amazon AWS Connection Error\nAbstractAmazonJobExecutorController.JobEntry.Instance.error.title=Amazon AWS Instance Type Error\n\nDialog.OK=OK\nDialog.Accept=OK\nDialog.Cancel=Cancel\nDialog.Error=Error\nDialog.Help=Help\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/AbstractAmazonJobExecutorControllerTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mock;\nimport org.mockito.MockedStatic;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.amazon.client.ClientFactoriesManager;\nimport org.pentaho.amazon.client.ClientType;\nimport org.pentaho.amazon.client.api.PricingClient;\nimport org.pentaho.amazon.client.impl.AimClientImpl;\nimport org.pentaho.amazon.client.impl.PricingClientImpl;\nimport org.pentaho.amazon.hive.job.AmazonHiveJobExecutor;\nimport org.pentaho.amazon.hive.ui.AmazonHiveJobExecutorController;\nimport org.pentaho.di.ui.core.database.dialog.tags.ExtTextbox;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.binding.BindingFactory;\nimport org.pentaho.ui.xul.components.XulButton;\nimport org.pentaho.ui.xul.util.AbstractModelList;\n\nimport java.io.IOException;\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.nullable;\nimport static org.mockito.Mockito.RETURNS_DEEP_STUBS;\nimport static org.mockito.Mockito.anyString;\nimport static org.mockito.Mockito.doCallRealMethod;\nimport static org.mockito.Mockito.doNothing;\nimport static org.mockito.Mockito.doReturn;\nimport static org.mockito.Mockito.eq;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.never;\nimport static org.mockito.Mockito.spy;\nimport static 
org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/8/2018.\n */\n@RunWith( MockitoJUnitRunner.class )\npublic class AbstractAmazonJobExecutorControllerTest {\n\n  private AmazonHiveJobExecutorController jobExecutorController;\n\n  @Mock\n  XulDomContainer container;\n\n  @Mock\n  AmazonHiveJobExecutor jobEntry;\n\n  @Mock\n  BindingFactory bindingFactory;\n\n  @Before\n  public void setUp() throws Exception {\n    jobExecutorController =\n      spy( new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory ) );\n  }\n\n  @Test\n  public void testPopulateRegions_getValidCountOfEnumElements() throws Exception {\n\n    int expectedCountOfRegions = 16;\n\n    AbstractModelList<String> listRegions = jobExecutorController.populateRegions();\n\n    assertEquals( expectedCountOfRegions, listRegions.size() );\n  }\n\n  @Test\n  public void testPopulateReleases_getValidCountOfEnumElements() throws Exception {\n\n    int expectedCountOfReleases = 35;\n\n    AbstractModelList<String> listReleases = jobExecutorController.populateReleases();\n\n    assertEquals( expectedCountOfReleases, listReleases.size() );\n  }\n\n  @Test\n  public void testPopulateReleases_setFirstEmrReleaseInJobEntry() throws Exception {\n\n    String expectedEmrRelease = \"emr-7.7.0\";\n\n    AmazonHiveJobExecutor jobEntry = spy( new AmazonHiveJobExecutor() );\n    AmazonHiveJobExecutorController hiveJobExecutorController =\n      new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory );\n\n    AbstractModelList<String> listReleases = hiveJobExecutorController.populateReleases();\n\n    assertEquals( expectedEmrRelease, jobEntry.getEmrRelease() );\n  }\n\n  @Test\n  public void testPopulateReleases_getValidEmrReleaseFromJobEntry() throws Exception {\n\n    String expectedEmrRelease = \"emr-5.11.0\";\n\n    AmazonHiveJobExecutor jobEntry = spy( new AmazonHiveJobExecutor() 
);\n    AmazonHiveJobExecutorController hiveJobExecutorController =\n      spy( new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory ) );\n\n    when( hiveJobExecutorController.getJobEntry().getEmrRelease() ).thenReturn( \"emr-5.11.0\" );\n\n    hiveJobExecutorController.populateReleases();\n\n    verify( hiveJobExecutorController, times( 2 ) ).getJobEntry();\n\n    assertEquals( expectedEmrRelease, jobEntry.getEmrRelease() );\n  }\n\n  @Test\n  public void testPopulateReleases_addNewEmrReleaseToReleasesList() throws Exception {\n\n    int expectedCountOfReleases = 36;\n\n    when( jobExecutorController.getJobEntry().getEmrRelease() ).thenReturn( \"emr-5.12.0\" );\n    AbstractModelList<String> listReleases = jobExecutorController.populateReleases();\n\n    verify( jobExecutorController, times( 2 ) ).getJobEntry();\n    assertEquals( expectedCountOfReleases, listReleases.size() );\n  }\n\n  @Test\n  public void testPopulateReleases_setEmrReleaseToFirstElementFromEmrReleasesList() throws Exception {\n\n    String expectedEmrRelease = \"emr-7.7.0\";\n\n    AmazonHiveJobExecutor jobEntry = spy( new AmazonHiveJobExecutor() );\n\n    AmazonHiveJobExecutorController hiveJobExecutorController =\n      spy( new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory ) );\n    jobEntry.setEmrRelease( null );\n\n    AbstractModelList<String> listReleases = hiveJobExecutorController.populateReleases();\n\n    verify( hiveJobExecutorController, times( 2 ) ).getJobEntry();\n    verify( jobEntry, times( 2 ) ).setEmrRelease( listReleases.get( 0 ) );\n\n    assertEquals( expectedEmrRelease, jobEntry.getEmrRelease() );\n  }\n\n  @Test\n  public void testPopulateEc2Roles_ec2RoleNull() {\n\n    AbstractModelList<String> roles = jobExecutorController.populateEc2Roles();\n\n    assertEquals( 0, roles.size() );\n  }\n\n  @Test\n  public void testPopulateEc2Roles_ec2RoleNotNull() {\n\n    String expectedEc2Role = \"ec2_role\";\n\n    when( 
jobExecutorController.getJobEntry().getEc2Role() ).thenReturn( expectedEc2Role );\n\n    AbstractModelList<String> roles = jobExecutorController.populateEc2Roles();\n\n    assertEquals( expectedEc2Role, roles.get( 0 ) );\n  }\n\n  @Test\n  public void testPopulateEmrRoles_emrRoleNull() {\n\n    AbstractModelList<String> roles = jobExecutorController.populateEmrRoles();\n\n    assertEquals( 0, roles.size() );\n  }\n\n  @Test\n  public void testPopulateEmrRoles_emrRoleNotNull() {\n\n    String expectedEmrRole = \"emr_role\";\n\n    when( jobExecutorController.getJobEntry().getEmrRole() ).thenReturn( expectedEmrRole );\n\n    AbstractModelList<String> roles = jobExecutorController.populateEmrRoles();\n\n    assertEquals( expectedEmrRole, roles.get( 0 ) );\n  }\n\n  @Test\n  public void testPopulateMasterInstanceTypes_instanceTypeNull() {\n\n    AbstractModelList<String> instanceTypes = jobExecutorController.populateMasterInstanceTypes();\n\n    assertEquals( 0, instanceTypes.size() );\n  }\n\n  @Test\n  public void testPopulateMasterInstanceTypes_instanceTypeNotNull() {\n\n    String expectedInstanceType = \"c1.medium\";\n\n    when( jobExecutorController.getJobEntry().getMasterInstanceType() ).thenReturn( expectedInstanceType );\n\n    AbstractModelList<String> instanceTypes = jobExecutorController.populateMasterInstanceTypes();\n\n    assertEquals( expectedInstanceType, instanceTypes.get( 0 ) );\n  }\n\n  @Test\n  public void testPopulateSlaveInstanceTypes_instanceTypeNull() {\n\n    AbstractModelList<String> instanceTypes = jobExecutorController.populateSlaveInstanceTypes();\n\n    assertEquals( 0, instanceTypes.size() );\n  }\n\n  @Test\n  public void testPopulateSlaveInstanceTypes_instanceTypeNotNull() {\n\n    String expectedInstanceType = \"c1.medium\";\n\n    when( jobExecutorController.getJobEntry().getSlaveInstanceType() ).thenReturn( expectedInstanceType );\n\n    AbstractModelList<String> instanceTypes = 
jobExecutorController.populateSlaveInstanceTypes();\n\n    assertEquals( expectedInstanceType, instanceTypes.get( 0 ) );\n  }\n\n  @Test\n  public void testSetEc2RolesFromAmazonAccount_RolesListNotEmpty() throws Exception {\n\n    AbstractModelList<String> expectedRolesList = new AbstractModelList<>();\n\n    expectedRolesList.add( \"default_role\" );\n    expectedRolesList.add( \"ec2_role\" );\n\n    AmazonHiveJobExecutor jobEntry = spy( new AmazonHiveJobExecutor() );\n\n    AmazonHiveJobExecutorController hiveJobExecutorController =\n      new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory );\n\n    AimClientImpl aimClient = mock( AimClientImpl.class );\n    when( aimClient.getEc2RolesFromAmazonAccount() ).thenReturn( expectedRolesList );\n\n    hiveJobExecutorController.setEc2RolesFromAmazonAccount( aimClient );\n\n    assertEquals( expectedRolesList.size(), hiveJobExecutorController.getEc2Roles().size() );\n  }\n\n  @Test\n  public void testSetEc2RolesFromAmazonAccount_RolesListWithDefaultRole() throws Exception {\n\n    String expectedEc2Role = \"EMR_EC2_DefaultRole\";\n\n    AmazonHiveJobExecutor jobEntry = spy( new AmazonHiveJobExecutor() );\n\n    AmazonHiveJobExecutorController hiveJobExecutorController =\n      new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory );\n\n    AimClientImpl aimClient = mock( AimClientImpl.class );\n    when( aimClient.getEc2RolesFromAmazonAccount() ).thenReturn( new AbstractModelList<>() );\n\n    hiveJobExecutorController.setEc2RolesFromAmazonAccount( aimClient );\n\n    assertEquals( expectedEc2Role, hiveJobExecutorController.getEc2Roles().get( 0 ) );\n  }\n\n  @Test\n  public void testSetEmrRolesFromAmazonAccount_RolesListNotEmpty() throws Exception {\n\n    AbstractModelList<String> expectedRolesList = new AbstractModelList<>();\n\n    expectedRolesList.add( \"default_role\" );\n    expectedRolesList.add( \"emr_role\" );\n\n    AmazonHiveJobExecutor jobEntry = spy( new 
AmazonHiveJobExecutor() );\n\n    AmazonHiveJobExecutorController hiveJobExecutorController =\n      new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory );\n\n    AimClientImpl aimClient = mock( AimClientImpl.class );\n    when( aimClient.getEmrRolesFromAmazonAccount() ).thenReturn( expectedRolesList );\n\n    hiveJobExecutorController.setEmrRolesFromAmazonAccount( aimClient );\n\n    assertEquals( expectedRolesList.size(), hiveJobExecutorController.getEmrRoles().size() );\n  }\n\n  @Test\n  public void testSetEmrRolesFromAmazonAccount_RolesListWithDefaultRole() throws Exception {\n\n    String expectedEmrRole = \"EMR_DefaultRole\";\n\n    AmazonHiveJobExecutor jobEntry = spy( new AmazonHiveJobExecutor() );\n\n    AmazonHiveJobExecutorController hiveJobExecutorController =\n      new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory );\n\n    AimClientImpl aimClient = mock( AimClientImpl.class );\n    when( aimClient.getEmrRolesFromAmazonAccount() ).thenReturn( new AbstractModelList<>() );\n\n    hiveJobExecutorController.setEmrRolesFromAmazonAccount( aimClient );\n\n    assertEquals( expectedEmrRole, hiveJobExecutorController.getEmrRoles().get( 0 ) );\n  }\n\n  @Test\n  public void testPopulateInstanceTypesForSelectedRegion_instanceTypesListNotEmpty() throws Exception {\n\n    List<String> instanceTypes = new ArrayList<>();\n    instanceTypes.add( \"c1.medium\" );\n    instanceTypes.add( \"c1.large\" );\n\n    AmazonHiveJobExecutor jobEntry = spy( new AmazonHiveJobExecutor() );\n\n    AmazonHiveJobExecutorController hiveJobExecutorController =\n      new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory );\n\n    PricingClient pricingClient = mock( PricingClientImpl.class );\n    when( pricingClient.populateInstanceTypesForSelectedRegion() ).thenReturn( instanceTypes );\n\n    hiveJobExecutorController.populateInstanceTypesForSelectedRegion( pricingClient );\n\n    assertEquals( instanceTypes.size(), 
hiveJobExecutorController.getMasterInstanceTypes().size() );\n    assertEquals( instanceTypes.size(), hiveJobExecutorController.getSlaveInstanceTypes().size() );\n  }\n\n  @Test\n  public void testPopulateInstanceTypesForSelectedRegion_instanceTypesListNull() throws Exception {\n\n    AmazonHiveJobExecutor jobEntry = spy( new AmazonHiveJobExecutor() );\n\n    AmazonHiveJobExecutorController hiveJobExecutorController =\n      new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory );\n\n    PricingClient pricingClient = mock( PricingClientImpl.class );\n    when( pricingClient.populateInstanceTypesForSelectedRegion() ).thenReturn( null );\n\n    hiveJobExecutorController.populateInstanceTypesForSelectedRegion( pricingClient );\n\n    assertEquals( 0, hiveJobExecutorController.getMasterInstanceTypes().size() );\n    assertEquals( 0, hiveJobExecutorController.getSlaveInstanceTypes().size() );\n  }\n\n  @Test\n  public void testPopulateInstanceTypesForSelectedRegion_catchException() throws Exception {\n\n    AmazonHiveJobExecutor jobEntry = spy( new AmazonHiveJobExecutor() );\n\n    AmazonHiveJobExecutorController hiveJobExecutorController =\n      spy( new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory ) );\n\n    PricingClient pricingClient = mock( PricingClientImpl.class );\n    when( pricingClient.populateInstanceTypesForSelectedRegion() ).thenThrow( new IOException() );\n    doNothing().when( hiveJobExecutorController ).showErrorDialog( nullable( String.class ), nullable( String.class ) );\n    doCallRealMethod().when( hiveJobExecutorController ).populateInstanceTypesForSelectedRegion( any() );\n\n\n    hiveJobExecutorController.populateInstanceTypesForSelectedRegion( pricingClient );\n\n    verify( hiveJobExecutorController, times( 1 ) ).showErrorDialog( nullable( String.class ), nullable( String.class ) );\n  }\n\n  @Test\n  public void testGetEmrSettings_catchExceptionWhenKeysAreNull() {\n\n    ExtTextbox textBox = mock( 
ExtTextbox.class );\n    XulButton btn = mock( XulButton.class );\n\n    XulDomContainer container = mock( XulDomContainer.class, RETURNS_DEEP_STUBS );\n\n    AmazonHiveJobExecutorController hiveJobExecutorController =\n      spy( new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory ) );\n\n    when( hiveJobExecutorController.getXulDomContainer() ).thenReturn( container );\n    when( container.getDocumentRoot().getElementById( \"access-key\" ) ).thenReturn( textBox );\n    when( container.getDocumentRoot().getElementById( \"secret-key\" ) ).thenReturn( textBox );\n    when( container.getDocumentRoot().getElementById( \"session-token\" ) ).thenReturn( textBox );\n    when( container.getDocumentRoot().getElementById( \"emr-settings\" ) ).thenReturn( btn );\n\n    doNothing().when( btn ).setDisabled( true );\n    doNothing().when( hiveJobExecutorController ).showErrorDialog( anyString(), anyString() );\n\n    hiveJobExecutorController.getEmrSettings();\n\n    verify( hiveJobExecutorController, times( 1 ) ).showErrorDialog( anyString(), anyString() );\n  }\n\n  @Test\n  public void testGetEmrSettings_catchExceptionWhenRegionIsNull() {\n\n    ExtTextbox textBox = mock( ExtTextbox.class );\n    XulButton btn = mock( XulButton.class );\n\n    XulDomContainer container = mock( XulDomContainer.class, RETURNS_DEEP_STUBS );\n\n    AmazonHiveJobExecutorController hiveJobExecutorController =\n      spy( new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory ) );\n\n    when( hiveJobExecutorController.getXulDomContainer() ).thenReturn( container );\n    when( container.getDocumentRoot().getElementById( \"access-key\" ) ).thenReturn( textBox );\n    when( container.getDocumentRoot().getElementById( \"secret-key\" ) ).thenReturn( textBox );\n    when( container.getDocumentRoot().getElementById( \"session-token\" ) ).thenReturn( textBox );\n    when( container.getDocumentRoot().getElementById( \"emr-settings\" ) ).thenReturn( btn );\n    when( 
textBox.getValue() ).thenReturn( \"testing\" );\n\n    doNothing().when( btn ).setDisabled( true );\n    doNothing().when( hiveJobExecutorController ).showErrorDialog( anyString(), anyString() );\n\n    hiveJobExecutorController.getEmrSettings();\n\n    verify( hiveJobExecutorController, times( 1 ) ).showErrorDialog( anyString(), anyString() );\n  }\n\n  @Test\n  public void testGetEmrSettings_setAllValidParameters() throws Exception {\n\n    AbstractModelList<String> rolesList = new AbstractModelList<>();\n    List<String> typesList = new ArrayList<>();\n    typesList.add( \"c1.medium\" );\n\n    ExtTextbox textBox = mock( ExtTextbox.class );\n    XulButton btn = mock( XulButton.class );\n    try ( MockedStatic<ClientFactoriesManager> clientFactoriesManagerMockedStatic =\n            Mockito.mockStatic( ClientFactoriesManager.class ) ) {\n      ClientFactoriesManager manager = mock( ClientFactoriesManager.class );\n      clientFactoriesManagerMockedStatic.when( () -> ClientFactoriesManager.getInstance() ).thenReturn( manager );\n\n      AimClientImpl aimClient = mock( AimClientImpl.class );\n      PricingClient pricingClient = mock( PricingClientImpl.class );\n\n      when( aimClient.getEc2RolesFromAmazonAccount() ).thenReturn( rolesList );\n      when( aimClient.getEmrRolesFromAmazonAccount() ).thenReturn( rolesList );\n      when( pricingClient.populateInstanceTypesForSelectedRegion() ).thenReturn( typesList );\n\n      XulDomContainer container = mock( XulDomContainer.class, RETURNS_DEEP_STUBS );\n      AmazonHiveJobExecutorController hiveJobExecutorController =\n        spy( new AmazonHiveJobExecutorController( container, jobEntry, bindingFactory ) );\n\n      when( hiveJobExecutorController.getXulDomContainer() ).thenReturn( container );\n      when( container.getDocumentRoot().getElementById( \"access-key\" ) ).thenReturn( textBox );\n      when( container.getDocumentRoot().getElementById( \"secret-key\" ) ).thenReturn( textBox );\n      when( 
container.getDocumentRoot().getElementById( \"session-token\" ) ).thenReturn( textBox );\n      when( container.getDocumentRoot().getElementById( \"num-instances\" ) ).thenReturn( textBox );\n      when( container.getDocumentRoot().getElementById( \"emr-settings\" ) ).thenReturn( btn );\n      when( textBox.getValue() ).thenReturn( \"testing\" );\n      when( hiveJobExecutorController.getJobEntry().getRegion() ).thenReturn( \"invalid Region\" );\n\n      doNothing().when( btn ).setDisabled( true );\n      doNothing().when( hiveJobExecutorController ).setXulMenusDisabled( false );\n      doNothing().when( hiveJobExecutorController ).setSelectedItemForEachMenu();\n\n      doReturn( aimClient ).when( manager )\n        .createClient( anyString(), anyString(), anyString(), anyString(), eq( ClientType.AIM ) );\n      doReturn( pricingClient ).when( manager )\n        .createClient( anyString(), anyString(), anyString(), anyString(), eq( ClientType.PRICING ) );\n\n      hiveJobExecutorController.getEmrSettings();\n\n      verify( hiveJobExecutorController, never() ).showErrorDialog( nullable( String.class ), nullable( String.class ) );\n    }\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/AbstractAmazonJobExecutorTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon;\n\nimport com.amazonaws.auth.AWSCredentials;\nimport org.apache.commons.vfs2.FileObject;\nimport org.junit.Before;\nimport org.junit.ClassRule;\nimport org.junit.Rule;\nimport org.junit.Test;\nimport org.junit.rules.TemporaryFolder;\nimport org.junit.runner.RunWith;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.amazon.hive.job.AmazonHiveJobExecutor;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.di.job.JobMeta;\nimport org.pentaho.di.junit.rules.RestorePDIEngineEnvironment;\n\nimport java.io.File;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.ArgumentMatchers.any;\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.Mockito.doCallRealMethod;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.spy;\nimport static org.mockito.Mockito.when;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/7/2018.\n */\n@RunWith( MockitoJUnitRunner.class )\npublic class AbstractAmazonJobExecutorTest {\n  @ClassRule public static RestorePDIEngineEnvironment env = new RestorePDIEngineEnvironment();\n\n  @Rule\n  public TemporaryFolder temporaryFolder = new TemporaryFolder();\n\n  private File stagingFile;\n  private File stagingFolder;\n  private AbstractAmazonJobExecutor jobExecutor;\n\n  @Before\n  public void setUp() throws Exception {\n    jobExecutor = spy( new AmazonHiveJobExecutor() );\n    JobMeta parentJobMeta = mock( JobMeta.class );\n    when( parentJobMeta.getBowl() ).thenReturn( 
DefaultBowl.getInstance() );\n    jobExecutor.setParentJobMeta( parentJobMeta );\n    stagingFolder = temporaryFolder.newFolder( \"emr\" );\n    stagingFile = temporaryFolder.newFile( stagingFolder.getName() + \"/hive.q\" );\n  }\n\n  @Test\n  public void testGetS3FileObjectPath_validPath() throws Exception {\n\n    String stagingDirWithScheme = \"s3://s3/emr/hive\";\n    String expectedStagingDirPath = \"/s3/emr/hive\";\n\n    AWSCredentials credentials = mock( AWSCredentials.class );\n    jobExecutor.stagingDir = stagingDirWithScheme;\n    when( jobExecutor.getS3FileObjectPath() ).thenCallRealMethod();\n\n    String stagingDirPath = jobExecutor.getS3FileObjectPath();\n\n    assertEquals( expectedStagingDirPath, stagingDirPath );\n  }\n\n  @Test\n  public void testGetKeyFromS3StagingDir_getNullKey() throws Exception {\n\n    when( jobExecutor.getS3FileObjectPath() ).thenReturn( \"/test\" );\n    when( jobExecutor.getKeyFromS3StagingDir() ).thenCallRealMethod();\n\n    String bucketKey = jobExecutor.getKeyFromS3StagingDir();\n\n    assertEquals( null, bucketKey );\n  }\n\n  @Test\n  public void testGetKeyFromS3StagingDir_getNotNullKey() throws Exception {\n\n    when( jobExecutor.getS3FileObjectPath() ).thenReturn(  \"/bucket/key\" );\n    when( jobExecutor.getKeyFromS3StagingDir() ).thenCallRealMethod();\n\n    String bucketKey = jobExecutor.getKeyFromS3StagingDir();\n\n    assertEquals( \"key\", bucketKey );\n  }\n\n  @Test\n  public void testSetS3BucketKey_keyNotNull() throws Exception {\n\n    String bucketKey = \"key/subkey\";\n    String expectedKey = bucketKey + \"/\" + stagingFile.getName();\n\n    FileObject stagingFileObject = KettleVFS.getInstance( DefaultBowl.getInstance() )\n      .getFileObject( stagingFile.getPath() );\n\n    when( jobExecutor.getKeyFromS3StagingDir() ).thenReturn( bucketKey );\n    doCallRealMethod().when( jobExecutor ).setS3BucketKey( any() );\n\n    jobExecutor.setS3BucketKey( stagingFileObject );\n\n    String 
bucketKeyWithFileName = jobExecutor.key;\n\n    assertEquals( expectedKey, bucketKeyWithFileName );\n  }\n\n  @Test\n  public void testSetS3BucketKey_keyNull() throws Exception {\n\n    String bucketKey = null;\n    String expectedKey = stagingFile.getName();\n\n    FileObject stagingFileObject = KettleVFS.getInstance( DefaultBowl.getInstance() )\n      .getFileObject( stagingFile.getPath() );\n\n    when( jobExecutor.getKeyFromS3StagingDir() ).thenReturn( bucketKey );\n    doCallRealMethod().when( jobExecutor ).setS3BucketKey( any() );\n\n    jobExecutor.setS3BucketKey( stagingFileObject );\n\n    String bucketKeyWithFileName = jobExecutor.key;\n\n    assertEquals( expectedKey, bucketKeyWithFileName );\n  }\n\n  @Test\n  public void testSetS3BucketKey_keyEmptyString() throws Exception {\n\n    String bucketKey = \"\";\n    String expectedKey = stagingFile.getName();\n\n    FileObject stagingFileObject = KettleVFS.getInstance( DefaultBowl.getInstance() )\n      .getFileObject( stagingFile.getPath() );\n\n    when( jobExecutor.getKeyFromS3StagingDir() ).thenReturn( bucketKey );\n    doCallRealMethod().when( jobExecutor ).setS3BucketKey( any() );\n\n    jobExecutor.setS3BucketKey( stagingFileObject );\n\n    String bucketKeyWithFileName = jobExecutor.key;\n\n    assertEquals( expectedKey, bucketKeyWithFileName );\n  }\n\n  @Test\n  public void testGetStagingBucketName_OneFolder() throws Exception {\n\n    String expectedBucketName = \"test\";\n\n    when( jobExecutor.getS3FileObjectPath() ).thenReturn( \"/test\" );\n\n    String bucketName = jobExecutor.getStagingBucketName();\n\n    assertEquals( expectedBucketName, bucketName );\n  }\n\n  @Test\n  public void testGetStagingBucketName_withSubfolder() throws Exception {\n\n    String expectedBucketName = \"test\";\n\n    when( jobExecutor.getS3FileObjectPath() ).thenReturn( \"/test/hive\" );\n\n    String bucketName = jobExecutor.getStagingBucketName();\n\n    assertEquals( expectedBucketName, bucketName );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/AmazonRegionTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon;\n\nimport org.junit.Test;\n\nimport static org.junit.Assert.assertEquals;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/8/2018.\n */\npublic class AmazonRegionTest {\n\n  @Test\n  public void testGetHumanReadableRegion_getValidReadableRegion() {\n\n    String expectedReadableRegion = \"US East (N. Virginia)\";\n\n    String readableRegion = AmazonRegion.US_EAST_1.getHumanReadableRegion();\n\n    assertEquals( expectedReadableRegion, readableRegion );\n  }\n\n  @Test\n  public void testExtractRegionFromDescription_getValidRegion() {\n\n    String expectedRegion = \"us-east-1\";\n\n    String region = AmazonRegion.extractRegionFromDescription( \"US East (N. Virginia)\" );\n\n    assertEquals( expectedRegion, region );\n  }\n\n  @Test\n  public void testExtractRegionFromDescription_getDefaultRegion() {\n\n    String expectedRegion = \"us-east-1\";\n\n    String region = AmazonRegion.extractRegionFromDescription( null );\n\n    assertEquals( expectedRegion, region );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/InstanceTypeTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.junit.MockitoJUnitRunner;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/8/2018.\n */\n@RunWith( MockitoJUnitRunner.class )\npublic class InstanceTypeTest {\n\n  private List<String> expectedSortedInstanceTypes;\n  private List<InstanceType> instanceTypes;\n\n  @Before\n  public void setUp() {\n    expectedSortedInstanceTypes = new ArrayList<>();\n\n    InstanceType cMedium = new InstanceType( \"c1.medium\", \"optimized\" );\n    InstanceType c2xLarge = new InstanceType( \"c1.2xlarge\", \"optimized\" );\n    InstanceType c4xLarge = new InstanceType( \"c1.4xlarge\", \"optimized\" );\n    InstanceType mMedium = new InstanceType( \"m3.medium\", \"memory\" );\n    InstanceType mxLarge = new InstanceType( \"m3.xlarge\", \"memory\" );\n    InstanceType m2xLarge = new InstanceType( \"m3.2xlarge\", \"memory\" );\n    InstanceType m4xLarge = new InstanceType( \"m3.4xlarge\", \"memory\" );\n\n    expectedSortedInstanceTypes.add( \"c1.medium\" );\n    expectedSortedInstanceTypes.add( \"c1.2xlarge\" );\n    expectedSortedInstanceTypes.add( \"c1.4xlarge\" );\n    expectedSortedInstanceTypes.add( \"m3.medium\" );\n    expectedSortedInstanceTypes.add( \"m3.xlarge\" );\n    expectedSortedInstanceTypes.add( \"m3.2xlarge\" );\n    expectedSortedInstanceTypes.add( \"m3.4xlarge\" );\n\n    instanceTypes = new ArrayList<>();\n\n    
instanceTypes.add( m4xLarge );\n    instanceTypes.add( cMedium );\n    instanceTypes.add( c2xLarge );\n    instanceTypes.add( mMedium );\n    instanceTypes.add( m2xLarge );\n    instanceTypes.add( c4xLarge );\n    instanceTypes.add( mxLarge );\n  }\n\n  @Test\n  public void testSortInstanceTypes_getValidOrderOfSortedElements() {\n\n    List<String> sortedInstanceTypes = InstanceType.sortInstanceTypes( instanceTypes );\n\n    assertEquals( expectedSortedInstanceTypes, sortedInstanceTypes );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/PersistentPropertyChangeListener.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon;\n\nimport java.beans.PropertyChangeEvent;\nimport java.beans.PropertyChangeListener;\nimport java.util.ArrayList;\nimport java.util.List;\n\n/**\n * Property Change Listener that records all received events, useful for test purposes.\n */\npublic class PersistentPropertyChangeListener implements PropertyChangeListener {\n  private List<PropertyChangeEvent> receivedEvents;\n\n  public PersistentPropertyChangeListener() {\n    receivedEvents = new ArrayList<>();\n  }\n\n  @Override\n  public void propertyChange( PropertyChangeEvent evt ) {\n    receivedEvents.add( evt );\n  }\n\n  /**\n   * @return every event received by this listener\n   */\n  public List<PropertyChangeEvent> getReceivedEvents() {\n    return receivedEvents;\n  }\n\n  /**\n   * @return only the events that resulted in changed values\n   */\n  public List<PropertyChangeEvent> getReceivedEventsWithChanges() {\n    List<PropertyChangeEvent> events = new ArrayList<PropertyChangeEvent>();\n    for ( PropertyChangeEvent evt : receivedEvents ) {\n      if ( !( evt.getOldValue() == null ? evt.getNewValue() == null :\n        evt.getOldValue().equals( evt.getNewValue() ) ) ) {\n        events.add( evt );\n      }\n    }\n    return events;\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/PropertyFiringObjectTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon;\n\nimport org.junit.Test;\nimport org.pentaho.amazon.emr.job.AmazonElasticMapReduceJobExecutor;\nimport org.pentaho.amazon.emr.ui.AmazonElasticMapReduceJobExecutorController;\nimport org.pentaho.amazon.hive.job.AmazonHiveJobExecutor;\nimport org.pentaho.amazon.hive.ui.AmazonHiveJobExecutorController;\nimport org.pentaho.ui.xul.XulDomContainer;\nimport org.pentaho.ui.xul.XulEventSource;\nimport org.pentaho.ui.xul.binding.BindingFactory;\n\nimport java.beans.PropertyChangeEvent;\nimport java.lang.reflect.Field;\nimport java.lang.reflect.Method;\nimport java.lang.reflect.Modifier;\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.junit.Assert.assertFalse;\nimport static org.mockito.Mockito.mock;\n\n/**\n * This is a helper class to dynamically test a class' ability to get, set, and fire property change events for all\n * private fields.\n */\npublic class PropertyFiringObjectTest {\n\n  @Test\n  public void testEMRFromFieldsEvent() throws Exception {\n    List<String> getterFields = new ArrayList<>();\n    getterFields.add( \"jarUrl\" );\n\n    AmazonElasticMapReduceJobExecutorController controller =\n      new AmazonElasticMapReduceJobExecutorController( mock( XulDomContainer.class ),\n        mock( AmazonElasticMapReduceJobExecutor.class ), mock( BindingFactory.class ) );\n\n    List<Field> emrFormFields = collectChildFields( controller, getterFields );\n    Class<?> oClass = controller.getClass();\n    testPropertyFiringForAllPrivateFieldsOf( controller, 
emrFormFields, oClass );\n  }\n\n  @Test\n  public void testHiveFormFieldsEvent() throws Exception {\n    List<String> getterFields = new ArrayList<>();\n    getterFields.add( \"qUrl\" );\n    getterFields.add( \"bootstrapActions\" );\n\n    AmazonHiveJobExecutorController controller =\n      new AmazonHiveJobExecutorController( mock( XulDomContainer.class ),\n        mock( AmazonHiveJobExecutor.class ), mock( BindingFactory.class ) );\n\n    Class<?> oClass = controller.getClass();\n    List<Field> hiveFormFields = collectChildFields( controller, getterFields );\n    testPropertyFiringForAllPrivateFieldsOf( controller, hiveFormFields, oClass );\n  }\n\n  @Test\n  public void testCommonFieldsEvent() throws Exception {\n    AmazonHiveJobExecutorController controller =\n      new AmazonHiveJobExecutorController( mock( XulDomContainer.class ),\n        mock( AmazonHiveJobExecutor.class ), mock( BindingFactory.class ) );\n\n    Class<?> oClass = controller.getClass().getSuperclass();\n    List<Field> parentFields = collectParentFields( controller );\n    testPropertyFiringForAllPrivateFieldsOf( controller, parentFields, oClass );\n  }\n\n  private List<String> getFieldNamesFromJobEntryClass() {\n    List<String> fields = new ArrayList<>();\n    Class<?> oClass = AbstractAmazonJobEntry.class;\n    for ( Field f : oClass.getDeclaredFields() ) {\n      if ( !Modifier.isProtected( f.getModifiers() ) ) {\n        // Skip non-protected fields\n        continue;\n      }\n      fields.add( f.getName() );\n    }\n    return fields;\n  }\n\n  private List<Field> collectChildFields( XulEventSource object, List<String> getterFields ) {\n    List<Field> formFields = new ArrayList<>();\n    Class<?> oClass = object.getClass();\n    for ( Field f : oClass.getDeclaredFields() ) {\n      if ( !Modifier.isPrivate( f.getModifiers() ) || Modifier.isTransient( f.getModifiers() ) ||\n
        Modifier.isFinal( f.getModifiers() ) || Modifier.isStatic( f.getModifiers() ) ) {\n        // Skip non-private, transient, final, or static fields\n        continue;\n      }\n      if ( !getterFields.contains( f.getName() ) ) {\n        continue;\n      }\n      formFields.add( f );\n    }\n    return formFields;\n  }\n\n  private List<Field> collectParentFields( XulEventSource object ) {\n    List<String> jobEntryFields = getFieldNamesFromJobEntryClass();\n    List<Field> formFields = new ArrayList<>();\n    Class<?> oClass = object.getClass().getSuperclass();\n    for ( Field f : oClass.getDeclaredFields() ) {\n      if ( !Modifier.isProtected( f.getModifiers() ) || Modifier.isTransient( f.getModifiers() ) ||\n        Modifier.isFinal( f.getModifiers() ) || Modifier.isStatic( f.getModifiers() ) ) {\n        // Skip non-protected, transient, final, or static fields\n        continue;\n      }\n      if ( !jobEntryFields.contains( f.getName() ) ) {\n        continue;\n      }\n      formFields.add( f );\n    }\n    return formFields;\n  }\n\n  /**\n   * Test that all private fields have getter and setter methods, and they work as they should: the getter returns the\n   * value and the setter generates a {@link PropertyChangeEvent} for that property.\n   *\n   * @param object instance of event source to test\n   * @param fields list of form fields\n   * @param oClass JobEntry class\n   * @throws Exception\n   */\n  private void testPropertyFiringForAllPrivateFieldsOf( XulEventSource object, List<Field> fields, Class<?> oClass )\n    throws Exception {\n    // Attach our property change listener to the object\n    PersistentPropertyChangeListener listener = new PersistentPropertyChangeListener();\n    object.addPropertyChangeListener( listener );\n\n    try {\n      for ( Field f : fields ) {\n        String fullFieldName = oClass.getSimpleName() + \".\" + f.getName();\n        try {\n          // Clear out the previous run's events if there were any\n          listener.getReceivedEvents().clear();\n
          String camelcaseFieldName = f.getName().substring( 0, 1 ).toUpperCase() + f.getName().substring( 1 );\n\n          // Grab the getter and setter methods for the field\n          Method getter = findMethod( oClass, camelcaseFieldName, null, \"get\", \"is\" );\n          Method setter = findMethod( oClass, camelcaseFieldName, new Class<?>[] { f.getType() }, \"set\" );\n\n          // Grab the original value so we can make sure we're changing it, to guarantee a PropertyChangeEvent\n          Object originalValue = getter.invoke( object );\n          // Generate a test value to set this property to\n          Object value = getTestValue( f.getType(), originalValue );\n\n          assertFalse( fullFieldName + \": generated value does not differ from original value. Please update \"\n            + getClass() + \".getTestValue() to return a different value for \" + f.getType() + \".\", value.equals(\n            originalValue ) );\n          setter.invoke( object, value );\n          assertEquals( fullFieldName + \": value not get/set properly\", value, getter.invoke( object ) );\n          assertEquals( fullFieldName + \": PropertyChangeEvent not received when changing value\", 1, listener\n            .getReceivedEvents().size() );\n          PropertyChangeEvent evt = listener.getReceivedEvents().get( 0 );\n          assertEquals( fullFieldName + \": fired event with wrong property name\", f.getName(), evt.getPropertyName() );\n          assertEquals( fullFieldName + \": fired event with incorrect old value\", originalValue, evt.getOldValue() );\n          assertEquals( fullFieldName + \": fired event with incorrect new value\", value, evt.getNewValue() );\n        } catch ( Exception ex ) {\n          throw new Exception( \"Error testing getter/setter for \" + fullFieldName, ex );\n        }\n      }\n    } finally {\n      // Remove our listener when we're done\n      object.removePropertyChangeListener( listener );\n    }\n  }\n\n\n  /**\n   * Finds a method in the given class
or any super class with the name {@code prefix + methodName} that accepts 0\n   * parameters.\n   *\n   * @param aClass         Class to search for method in\n   * @param methodName     Camelcase'd method name to search for with any of the provided prefixes\n   * @param parameterTypes The parameter types the method signature must match.\n   * @param prefixes       Prefixes to prepend to {@code methodName} when searching for method names, e.g. \"get\", \"is\"\n   * @return The first method found to match the format {@code prefix + methodName}\n   */\n  private static Method findMethod( Class<?> aClass, String methodName, Class<?>[] parameterTypes,\n                                    String... prefixes ) {\n    for ( String prefix : prefixes ) {\n      try {\n        return aClass.getDeclaredMethod( prefix + methodName, parameterTypes );\n      } catch ( NoSuchMethodException ex ) {\n        // ignore, continue searching prefixes\n      }\n    }\n    // If no method found with any prefixes search the super class\n    aClass = aClass.getSuperclass();\n    return aClass == null ? null : findMethod( aClass, methodName, parameterTypes, prefixes );\n  }\n\n\n  /**\n   * Get a value to test with that matches the type provided.\n   *\n   * @param type          Type of test value\n   * @param originalValue The test value returned by this method must differ from this object\n   * @return An object of type {@code Type}\n   */\n\n  private Object getTestValue( Class<?> type, Object originalValue ) {\n    if ( String.class.equals( type ) ) {\n      return String.valueOf( System.currentTimeMillis() );\n    }\n    if ( Boolean.class.equals( type ) || boolean.class.equals( type ) ) {\n      // Return the opposite\n      return originalValue == null ? Boolean.TRUE : !(Boolean) originalValue;\n    }\n    //not primitive\n    return mock( type );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/client/AmazonClientCredentialsTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/8/2018.\n */\n\nimport com.amazonaws.auth.AWSCredentials;\nimport com.amazonaws.auth.BasicAWSCredentials;\nimport org.junit.Assert;\nimport org.junit.Test;\n\npublic class AmazonClientCredentialsTest {\n\n  @Test\n  public void testGetAWSCredentials_getValidCredentials() {\n\n    String expectedAccesskey = \"accessKey\";\n    String expectedSecretKey = \"secretKey\";\n\n    AmazonClientCredentials clientInitializer =\n      new AmazonClientCredentials( \"accessKey\", \"secretKey\", \"\", \"US East (N. Virginia)\" );\n\n    AWSCredentials awsCredentials = clientInitializer.getAWSCredentials();\n\n    Assert.assertEquals( expectedAccesskey, awsCredentials.getAWSAccessKeyId() );\n    Assert.assertEquals( expectedSecretKey, awsCredentials.getAWSSecretKey() );\n  }\n\n  @Test\n  public void testGetRegion_getValidRegion() {\n\n    String expectedRegion = \"us-east-1\";\n\n    AmazonClientCredentials clientInitializer =\n      new AmazonClientCredentials( \"accessKey\", \"secretKey\", \"\", \"US East (N. Virginia)\" );\n\n    Assert.assertEquals( expectedRegion, clientInitializer.getRegion() );\n  }\n\n  @Test\n  public void testGetRegion_getDefaultRegion() {\n\n    String expectedRegion = \"us-east-1\";\n\n    AmazonClientCredentials clientInitializer =\n      new AmazonClientCredentials( \"accessKey\", \"secretKey\", \"\", null );\n\n    Assert.assertEquals( expectedRegion, clientInitializer.getRegion() );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/client/ClientFactoriesManagerTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client;\n\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.amazon.client.api.AimClient;\nimport org.pentaho.amazon.client.api.EmrClient;\nimport org.pentaho.amazon.client.api.PricingClient;\nimport org.pentaho.amazon.client.api.S3Client;\nimport org.pentaho.di.core.util.Assert;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/8/2018.\n */\n@RunWith( MockitoJUnitRunner.class )\npublic class ClientFactoriesManagerTest {\n\n  private ClientFactoriesManager factoriesManager;\n\n  @Before\n  public void setUp() {\n    factoriesManager = Mockito.spy( ClientFactoriesManager.getInstance() );\n  }\n\n  @Test\n  public void createClient_whenClientTypeIsS3() {\n\n    S3Client s3Client;\n\n    s3Client = factoriesManager.createClient( \"accessKey\", \"secretKey\", \"\", \"US East (N. Virginia)\", ClientType.S3 );\n\n    Assert.assertNotNull( s3Client );\n  }\n\n  @Test\n  public void createClient_whenClientTypeIsEmr() {\n\n    EmrClient emrClient;\n\n    emrClient = factoriesManager.createClient( \"accessKey\", \"secretKey\", \"\", \"US East (N. Virginia)\", ClientType.EMR );\n\n    Assert.assertNotNull( emrClient );\n  }\n\n  @Test\n  public void createClient_whenClientTypeIsPricing() {\n\n    PricingClient pricingClient;\n\n    pricingClient =\n
      factoriesManager.createClient( \"accessKey\", \"secretKey\", \"\", \"US East (N. Virginia)\", ClientType.PRICING );\n\n    Assert.assertNotNull( pricingClient );\n  }\n\n  @Test\n  public void createClient_whenClientTypeIsAim() {\n\n    AimClient aimClient;\n\n    aimClient = factoriesManager.createClient( \"accessKey\", \"secretKey\", \"\", \"US East (N. Virginia)\", ClientType.AIM );\n\n    Assert.assertNotNull( aimClient );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/client/impl/AimClientImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.impl;\n\nimport com.amazonaws.services.identitymanagement.AmazonIdentityManagement;\nimport com.amazonaws.services.identitymanagement.model.InstanceProfile;\nimport com.amazonaws.services.identitymanagement.model.Role;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.amazon.client.api.AimClient;\nimport org.pentaho.ui.xul.util.AbstractModelList;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static org.junit.Assert.assertEquals;\nimport static org.mockito.Mockito.RETURNS_DEEP_STUBS;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/8/2018.\n */\n@RunWith( MockitoJUnitRunner.class )\npublic class AimClientImplTest {\n\n  private AimClient aimClient;\n  private AmazonIdentityManagement amazonIdentityManagement;\n\n  @Before\n  public void setUp() {\n    amazonIdentityManagement = Mockito.mock( AmazonIdentityManagement.class, RETURNS_DEEP_STUBS );\n    aimClient = Mockito.spy( new AimClientImpl( amazonIdentityManagement ) );\n  }\n\n  @Test\n  public void testGetEc2RolesFromAmazonAccount_getValidListOfRoles() {\n\n    List<String> expectedEc2Roles = new ArrayList<>();\n\n    List<InstanceProfile> ec2Profiles = new ArrayList<>();\n\n    InstanceProfile defaultProfile = new InstanceProfile();\n    defaultProfile.setInstanceProfileName( \"default_role\" );\n\n    InstanceProfile testProfile = new InstanceProfile();\n    testProfile.setInstanceProfileName( \"test_role\" );\n\n
    ec2Profiles.add( defaultProfile );\n    ec2Profiles.add( testProfile );\n\n    expectedEc2Roles.add( \"default_role\" );\n    expectedEc2Roles.add( \"test_role\" );\n\n    Mockito.when( amazonIdentityManagement.listInstanceProfiles().getInstanceProfiles() ).thenReturn( ec2Profiles );\n\n    AbstractModelList<String> ec2Roles = aimClient.getEc2RolesFromAmazonAccount();\n\n    assertEquals( expectedEc2Roles, ec2Roles );\n  }\n\n  @Test\n  public void testGetEmrRolesFromAmazonAccount_getValidListOfRoles() {\n\n    List<String> expectedEmrRoles = new ArrayList<>();\n    expectedEmrRoles.add( \"default_role\" );\n    expectedEmrRoles.add( \"test_role\" );\n\n    List<Role> emrRolesList = new ArrayList<>();\n    Role defaultRole = new Role();\n    defaultRole.setRoleName( \"default_role\" );\n    defaultRole.setAssumeRolePolicyDocument( \"elasticmapreduce.amazonaws.com\" );\n    emrRolesList.add( defaultRole );\n\n    Role testRole = new Role();\n    testRole.setRoleName( \"test_role\" );\n    testRole.setAssumeRolePolicyDocument( \"elasticmapreduce.amazonaws.com\" );\n    emrRolesList.add( testRole );\n\n    Role wrongRole = new Role();\n    wrongRole.setRoleName( \"wrong_role\" );\n    wrongRole.setAssumeRolePolicyDocument( \"compute.amazonaws.com\" );\n    emrRolesList.add( wrongRole );\n\n    Mockito.when( amazonIdentityManagement.listRoles().getRoles() ).thenReturn( emrRolesList );\n\n    AbstractModelList<String> emrRoles = aimClient.getEmrRolesFromAmazonAccount();\n\n    assertEquals( expectedEmrRoles, emrRoles );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/client/impl/Ec2ClientFactoryTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.amazon.client.impl;\n\nimport static org.junit.Assert.*;\nimport org.junit.Test;\nimport org.pentaho.amazon.client.api.Ec2Client;\n\n/**\n * Unit tests for Ec2ClientFactory\n */\npublic class Ec2ClientFactoryTest {\n\n  @Test\n  public void testCreateClient_WithBasicCredentials() {\n    // Arrange\n    Ec2ClientFactory factory = new Ec2ClientFactory();\n    String accessKey = \"AKIAIOSFODNN7EXAMPLE\";\n    String secretKey = \"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\";\n    String region = \"US East (N. Virginia)\";\n\n    // Act\n    Ec2Client client = factory.createClient( accessKey, secretKey, null, region );\n\n    // Assert\n    assertNotNull( \"Client should not be null\", client );\n    assertTrue( \"Client should be instance of Ec2ClientImpl\", \n                client instanceof Ec2ClientImpl );\n  }\n\n  @Test\n  public void testCreateClient_WithSessionToken() {\n    // Arrange\n    Ec2ClientFactory factory = new Ec2ClientFactory();\n    String accessKey = \"AKIAIOSFODNN7EXAMPLE\";\n    String secretKey = \"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\";\n    String sessionToken = \"FwoGZXIvYXdzEBYaDPCX3EXAMPLE\";\n    String region = \"US West (Oregon)\";\n\n    // Act\n    Ec2Client client = factory.createClient( accessKey, secretKey, sessionToken, region );\n\n    // Assert\n    assertNotNull( \"Client should not be null\", client );\n    assertTrue( \"Client should be instance of Ec2ClientImpl\", \n                client instanceof Ec2ClientImpl );\n  }\n\n  @Test\n  public void testCreateClient_WithEmptySessionToken() {\n   
 // Arrange\n    Ec2ClientFactory factory = new Ec2ClientFactory();\n    String accessKey = \"AKIAIOSFODNN7EXAMPLE\";\n    String secretKey = \"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\";\n    String sessionToken = \"\";\n    String region = \"US East (Ohio)\";\n\n    // Act\n    Ec2Client client = factory.createClient( accessKey, secretKey, sessionToken, region );\n\n    // Assert\n    assertNotNull( \"Client should not be null\", client );\n    assertTrue( \"Client should be instance of Ec2ClientImpl\", \n                client instanceof Ec2ClientImpl );\n  }\n\n  @Test\n  public void testCreateClient_DifferentRegions() {\n    // Arrange\n    Ec2ClientFactory factory = new Ec2ClientFactory();\n    String accessKey = \"AKIAIOSFODNN7EXAMPLE\";\n    String secretKey = \"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\";\n\n    String[] regions = {\n      \"US East (N. Virginia)\",\n      \"US West (Oregon)\",\n      \"EU (Ireland)\",\n      \"Asia Pacific (Singapore)\"\n    };\n\n    // Act & Assert\n    for ( String region : regions ) {\n      Ec2Client client = factory.createClient( accessKey, secretKey, null, region );\n      assertNotNull( \"Client should not be null for region: \" + region, client );\n    }\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/client/impl/Ec2ClientImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\npackage org.pentaho.amazon.client.impl;\n\nimport com.amazonaws.services.ec2.AmazonEC2;\nimport com.amazonaws.services.ec2.model.DescribeSubnetsRequest;\nimport com.amazonaws.services.ec2.model.DescribeSubnetsResult;\nimport com.amazonaws.services.ec2.model.Subnet;\nimport com.amazonaws.services.ec2.model.Tag;\nimport java.lang.reflect.Field;\nimport java.util.List;\nimport static org.junit.Assert.*;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.mockito.ArgumentCaptor;\nimport static org.mockito.Mockito.*;\nimport org.pentaho.amazon.client.api.Ec2Client;\n\n/**\n * Unit tests for Ec2ClientImpl\n */\npublic class Ec2ClientImplTest {\n\n  private AmazonEC2 mockEc2Client;\n  private Ec2ClientImpl ec2Client;\n\n  @Before\n  public void setUp() throws Exception {\n    mockEc2Client = mock( AmazonEC2.class );\n    ec2Client = new Ec2ClientImpl( \"accessKey\", \"secretKey\", \"\", \"US East (N. 
Virginia)\" );\n    \n    // Inject mock EC2 client using reflection\n    Field ec2ClientField = Ec2ClientImpl.class.getDeclaredField( \"ec2Client\" );\n    ec2ClientField.setAccessible( true );\n    ec2ClientField.set( ec2Client, mockEc2Client );\n  }\n\n  @Test\n  public void testGetAvailableSubnets_Success() {\n    // Arrange\n    Subnet subnet1 = new Subnet()\n      .withSubnetId( \"subnet-12345\" )\n      .withVpcId( \"vpc-abcde\" )\n      .withAvailabilityZone( \"us-east-1a\" )\n      .withCidrBlock( \"10.0.1.0/24\" )\n      .withState( \"available\" )\n      .withTags( new Tag( \"Name\", \"Test Subnet 1\" ) );\n\n    Subnet subnet2 = new Subnet()\n      .withSubnetId( \"subnet-67890\" )\n      .withVpcId( \"vpc-fghij\" )\n      .withAvailabilityZone( \"us-east-1b\" )\n      .withCidrBlock( \"10.0.2.0/24\" )\n      .withState( \"available\" )\n      .withTags( new Tag( \"Name\", \"Test Subnet 2\" ) );\n\n    DescribeSubnetsResult result = new DescribeSubnetsResult()\n      .withSubnets( subnet1, subnet2 );\n\n    when( mockEc2Client.describeSubnets( any( DescribeSubnetsRequest.class ) ) )\n      .thenReturn( result );\n\n    // Act\n    List<Ec2Client.SubnetInfo> subnets = ec2Client.getAvailableSubnets();\n\n    // Assert\n    assertNotNull( \"Subnets list should not be null\", subnets );\n    assertEquals( \"Should return 2 subnets\", 2, subnets.size() );\n\n    Ec2Client.SubnetInfo subnet1Info = subnets.get( 0 );\n    assertEquals( \"subnet-12345\", subnet1Info.getSubnetId() );\n    assertEquals( \"vpc-abcde\", subnet1Info.getVpcId() );\n    assertEquals( \"us-east-1a\", subnet1Info.getAvailabilityZone() );\n    assertEquals( \"10.0.1.0/24\", subnet1Info.getCidrBlock() );\n    assertEquals( \"available\", subnet1Info.getState() );\n    assertEquals( \"Test Subnet 1\", subnet1Info.getSubnetName() );\n\n    // Verify the request has the correct filter\n    ArgumentCaptor<DescribeSubnetsRequest> requestCaptor = \n      ArgumentCaptor.forClass( 
DescribeSubnetsRequest.class );\n    verify( mockEc2Client ).describeSubnets( requestCaptor.capture() );\n    \n    DescribeSubnetsRequest capturedRequest = requestCaptor.getValue();\n    assertNotNull( \"Request should have filters\", capturedRequest.getFilters() );\n    assertEquals( \"Should have one filter\", 1, capturedRequest.getFilters().size() );\n    assertEquals( \"state\", capturedRequest.getFilters().get( 0 ).getName() );\n  }\n\n  @Test\n  public void testGetAvailableSubnets_NoNameTag() throws Exception {\n    // Arrange - subnet without Name tag (empty tag list, no Name tag)\n    Subnet subnet = new Subnet()\n      .withSubnetId( \"subnet-12345\" )\n      .withVpcId( \"vpc-12345\" )\n      .withAvailabilityZone( \"us-east-1a\" )\n      .withCidrBlock( \"10.0.1.0/24\" )\n      .withState( \"available\" );\n    // Don't set tags at all - this simulates a subnet with no tags\n\n    DescribeSubnetsResult describeSubnetsResult = new DescribeSubnetsResult()\n      .withSubnets( subnet );\n\n    when( mockEc2Client.describeSubnets( any( DescribeSubnetsRequest.class ) ) )\n      .thenReturn( describeSubnetsResult );\n\n    // Act\n    List<Ec2Client.SubnetInfo> subnets = ec2Client.getAvailableSubnets();\n\n    // Assert\n    assertEquals( \"Should return 1 subnet\", 1, subnets.size() );\n    Ec2Client.SubnetInfo subnetInfo = subnets.get( 0 );\n    assertEquals( \"subnet-12345\", subnetInfo.getSubnetId() );\n    // When no Name tag exists, getSubnetName() returns the subnet ID as fallback\n    assertEquals( \"Subnet name should fallback to subnet ID when no Name tag\",\n      \"subnet-12345\", subnetInfo.getSubnetName() );\n    assertEquals( \"vpc-12345\", subnetInfo.getVpcId() );\n  }\n\n  @Test\n  public void testGetAvailableSubnets_EmptyResult() {\n    // Arrange\n    DescribeSubnetsResult result = new DescribeSubnetsResult()\n      .withSubnets();\n\n    when( mockEc2Client.describeSubnets( any( DescribeSubnetsRequest.class ) ) )\n      .thenReturn( result 
);\n\n    // Act\n    List<Ec2Client.SubnetInfo> subnets = ec2Client.getAvailableSubnets();\n\n    // Assert\n    assertNotNull( \"Subnets list should not be null\", subnets );\n    assertTrue( \"Subnets list should be empty\", subnets.isEmpty() );\n  }\n\n  @Test\n  public void testGetAvailableSubnets_Exception() {\n    // Arrange\n    when( mockEc2Client.describeSubnets( any( DescribeSubnetsRequest.class ) ) )\n      .thenThrow( new RuntimeException( \"AWS error\" ) );\n\n    // Act\n    List<Ec2Client.SubnetInfo> subnets = ec2Client.getAvailableSubnets();\n\n    // Assert\n    assertNotNull( \"Should return empty list on exception\", subnets );\n    assertTrue( \"Should return empty list on exception\", subnets.isEmpty() );\n  }\n\n  @Test\n  public void testSubnetInfo_GetDisplayString() {\n    // Arrange\n    Ec2Client.SubnetInfo subnetInfo = new Ec2Client.SubnetInfo(\n      \"subnet-12345\",\n      \"Test Subnet\",\n      \"vpc-abcde\",\n      \"us-east-1a\",\n      \"10.0.1.0/24\",\n      \"available\"\n    );\n\n    // Act\n    String displayString = subnetInfo.getDisplayString();\n\n    // Assert\n    assertEquals( \"Test Subnet (subnet-12345) - AZ: us-east-1a - CIDR: 10.0.1.0/24\",\n      displayString );\n  }\n\n  @Test\n  public void testSubnetInfo_GetDisplayString_NoName() {\n    // Arrange\n    Ec2Client.SubnetInfo subnetInfo = new Ec2Client.SubnetInfo(\n      \"subnet-12345\",\n      null,\n      \"vpc-abcde\",\n      \"us-east-1a\",\n      \"10.0.1.0/24\",\n      \"available\"\n    );\n\n    // Act\n    String displayString = subnetInfo.getDisplayString();\n\n    // Assert\n    assertEquals( \"subnet-12345 - AZ: us-east-1a - CIDR: 10.0.1.0/24\",\n      displayString );\n  }\n\n  @Test\n  public void testSubnetInfo_GetDisplayString_EmptyName() {\n    // Arrange\n    Ec2Client.SubnetInfo subnetInfo = new Ec2Client.SubnetInfo(\n      \"subnet-12345\",\n      \"\",\n      \"vpc-abcde\",\n      \"us-east-1a\",\n      \"10.0.1.0/24\",\n      
\"available\"\n    );\n\n    // Act\n    String displayString = subnetInfo.getDisplayString();\n\n    // Assert\n    assertEquals( \"subnet-12345 - AZ: us-east-1a - CIDR: 10.0.1.0/24\",\n      displayString );\n  }\n\n  @Test\n  public void testSubnetInfo_Getters() {\n    // Arrange\n    Ec2Client.SubnetInfo subnetInfo = new Ec2Client.SubnetInfo(\n      \"subnet-12345\",\n      \"Test Subnet\",\n      \"vpc-abcde\",\n      \"us-east-1a\",\n      \"10.0.1.0/24\",\n      \"available\"\n    );\n\n    // Assert\n    assertEquals( \"subnet-12345\", subnetInfo.getSubnetId() );\n    assertEquals( \"Test Subnet\", subnetInfo.getSubnetName() );\n    assertEquals( \"vpc-abcde\", subnetInfo.getVpcId() );\n    assertEquals( \"us-east-1a\", subnetInfo.getAvailabilityZone() );\n    assertEquals( \"10.0.1.0/24\", subnetInfo.getCidrBlock() );\n    assertEquals( \"available\", subnetInfo.getState() );\n  }\n\n  @Test\n  public void testGetAvailableSubnets_MultipleTags() {\n    // Arrange - subnet with multiple tags\n    Subnet subnet = new Subnet()\n      .withSubnetId( \"subnet-12345\" )\n      .withVpcId( \"vpc-abcde\" )\n      .withAvailabilityZone( \"us-east-1a\" )\n      .withCidrBlock( \"10.0.1.0/24\" )\n      .withState( \"available\" )\n      .withTags(\n        new Tag( \"Environment\", \"Production\" ),\n        new Tag( \"Name\", \"Prod Subnet\" ),\n        new Tag( \"Owner\", \"DevOps\" )\n      );\n\n    DescribeSubnetsResult result = new DescribeSubnetsResult()\n      .withSubnets( subnet );\n\n    when( mockEc2Client.describeSubnets( any( DescribeSubnetsRequest.class ) ) )\n      .thenReturn( result );\n\n    // Act\n    List<Ec2Client.SubnetInfo> subnets = ec2Client.getAvailableSubnets();\n\n    // Assert\n    assertNotNull( subnets );\n    assertEquals( 1, subnets.size() );\n    assertEquals( \"Prod Subnet\", subnets.get( 0 ).getSubnetName() );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/client/impl/EmrClientImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.impl;\n\nimport com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;\nimport com.amazonaws.services.elasticmapreduce.model.ActionOnFailure;\nimport com.amazonaws.services.elasticmapreduce.model.AddJobFlowStepsRequest;\nimport com.amazonaws.services.elasticmapreduce.model.HadoopJarStepConfig;\nimport com.amazonaws.services.elasticmapreduce.model.RunJobFlowRequest;\nimport com.amazonaws.services.elasticmapreduce.model.StepConfig;\nimport com.amazonaws.services.elasticmapreduce.model.StepSummary;\nimport org.junit.Assert;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.amazon.hive.job.AmazonHiveJobExecutor;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.ArgumentMatchers.nullable;\nimport static org.mockito.Mockito.doCallRealMethod;\nimport static org.mockito.Mockito.doNothing;\nimport static org.mockito.Mockito.doReturn;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.spy;\nimport static org.mockito.Mockito.times;\nimport static org.mockito.Mockito.verify;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/8/2018.\n */\n@RunWith( MockitoJUnitRunner.class )\npublic class EmrClientImplTest {\n\n  private EmrClientImpl emrClient;\n  private AmazonElasticMapReduce awsEmrClient;\n  private AmazonHiveJobExecutor jobEntry;\n\n  @Before\n  public void setUp() {\n    awsEmrClient = mock( 
AmazonElasticMapReduce.class );\n    emrClient = spy( new EmrClientImpl( awsEmrClient ) );\n    jobEntry = spy( new AmazonHiveJobExecutor() );\n    setJobEntryFields();\n  }\n\n  @Test\n  public void testInitEmrCluster_setRequestParamsForHiveStep() throws Exception {\n    String stagingS3FileUrl = \"s3://bucket/key/test.q\";\n    String stagingS3BucketUrl = \"s3://bucket\";\n    String stepType = \"hive\";\n    String mainClass = \"\";\n    String bootstrapActions = \"\";\n\n    RunJobFlowRequest jobFlowRequest =\n      emrClient.initEmrCluster( stagingS3FileUrl, stagingS3BucketUrl, stepType, mainClass, bootstrapActions, jobEntry );\n\n    Assert.assertEquals( 1, jobFlowRequest.getApplications().size() );\n    Assert.assertEquals( 1, jobFlowRequest.getSteps().size() );\n\n    Assert.assertEquals( stagingS3BucketUrl, jobFlowRequest.getLogUri() );\n    Assert.assertEquals( jobEntry.getHadoopJobName(), jobFlowRequest.getName() );\n    Assert.assertEquals( jobEntry.getEmrRelease(), jobFlowRequest.getReleaseLabel() );\n    Assert.assertEquals( jobEntry.getNumInstances(), jobFlowRequest.getInstances().getInstanceCount().toString() );\n    Assert.assertEquals( jobEntry.getMasterInstanceType(), jobFlowRequest.getInstances().getMasterInstanceType() );\n    Assert.assertEquals( jobEntry.getSlaveInstanceType(), jobFlowRequest.getInstances().getSlaveInstanceType() );\n    Assert.assertEquals( jobEntry.getAlive(), jobFlowRequest.getInstances().getKeepJobFlowAliveWhenNoSteps() );\n    Assert.assertEquals( jobEntry.getEc2Role(), jobFlowRequest.getJobFlowRole() );\n    Assert.assertEquals( jobEntry.getEmrRole(), jobFlowRequest.getServiceRole() );\n  }\n\n  @Test\n  public void testInitEmrCluster_setRequestParamsForEmrStep() throws Exception {\n    String stagingS3FileUrl = \"s3://bucket/key/test.jar\";\n    String stagingS3BucketUrl = \"s3://bucket\";\n    String stepType = \"emr\";\n    String mainClass = \"WordCount\";\n    String bootstrapActions = \"\";\n\n    
RunJobFlowRequest jobFlowRequest =\n      emrClient.initEmrCluster( stagingS3FileUrl, stagingS3BucketUrl, stepType, mainClass, bootstrapActions, jobEntry );\n\n    Assert.assertEquals( 0, jobFlowRequest.getApplications().size() );\n    Assert.assertEquals( 1, jobFlowRequest.getSteps().size() );\n\n    Assert.assertEquals( stagingS3BucketUrl, jobFlowRequest.getLogUri() );\n    Assert.assertEquals( jobEntry.getHadoopJobName(), jobFlowRequest.getName() );\n    Assert.assertEquals( jobEntry.getEmrRelease(), jobFlowRequest.getReleaseLabel() );\n    Assert.assertEquals( jobEntry.getNumInstances(), jobFlowRequest.getInstances().getInstanceCount().toString() );\n    Assert.assertEquals( jobEntry.getMasterInstanceType(), jobFlowRequest.getInstances().getMasterInstanceType() );\n    Assert.assertEquals( jobEntry.getSlaveInstanceType(), jobFlowRequest.getInstances().getSlaveInstanceType() );\n    Assert.assertEquals( jobEntry.getAlive(), jobFlowRequest.getInstances().getKeepJobFlowAliveWhenNoSteps() );\n    Assert.assertEquals( jobEntry.getEc2Role(), jobFlowRequest.getJobFlowRole() );\n    Assert.assertEquals( jobEntry.getEmrRole(), jobFlowRequest.getServiceRole() );\n  }\n\n  @Test\n  public void testInitEmrCluster_checkStepParamsForEmrStep() throws Exception {\n    String stagingS3FileUrl = \"s3://bucket/key/test.jar\";\n    String stagingS3BucketUrl = \"s3://bucket\";\n    String stepType = \"emr\";\n    String mainClass = \"WordCount\";\n    String bootstrapActions = \"\";\n\n    List<String> stepArgs = new ArrayList<>();\n    stepArgs.add( \"--\" );\n    stepArgs.add( \"bucket\" );\n    stepArgs.add( \"s3://test\" );\n\n    RunJobFlowRequest jobFlowRequest =\n      emrClient.initEmrCluster( stagingS3FileUrl, stagingS3BucketUrl, stepType, mainClass, bootstrapActions, jobEntry );\n\n    StepConfig stepConfig = jobFlowRequest.getSteps().get( 0 );\n    String hadoopJobName = stepConfig.getName();\n    String actionOnFailure = stepConfig.getActionOnFailure();\n    
HadoopJarStepConfig hadoopJarStepConfig = stepConfig.getHadoopJarStep();\n\n    Assert.assertEquals( 1, jobFlowRequest.getSteps().size() );\n\n    Assert.assertEquals( \"custom jar: \" + stagingS3FileUrl, hadoopJobName );\n    Assert.assertEquals( ActionOnFailure.TERMINATE_JOB_FLOW.name(), actionOnFailure );\n    Assert.assertEquals( mainClass, hadoopJarStepConfig.getMainClass() );\n    Assert.assertEquals( stepArgs, hadoopJarStepConfig.getArgs() );\n    Assert.assertEquals( stagingS3FileUrl, hadoopJarStepConfig.getJar() );\n  }\n\n  @Test\n  public void testInitEmrCluster_checkStepParamsForHiveStep() throws Exception {\n    String stagingS3FileUrl = \"s3://bucket/key/test.q\";\n    String stagingS3BucketUrl = \"s3://bucket\";\n    String stepType = \"hive\";\n    String mainClass = \"\";\n    String bootstrapActions = \"\";\n\n    RunJobFlowRequest jobFlowRequest =\n      emrClient.initEmrCluster( stagingS3FileUrl, stagingS3BucketUrl, stepType, mainClass, bootstrapActions, jobEntry );\n\n    StepConfig stepConfig = jobFlowRequest.getSteps().get( 0 );\n    String hiveJobName = stepConfig.getName();\n    String actionOnFailure = stepConfig.getActionOnFailure();\n\n    Assert.assertEquals( 1, jobFlowRequest.getSteps().size() );\n\n    Assert.assertEquals( \"Hive\", hiveJobName );\n    Assert.assertEquals( ActionOnFailure.TERMINATE_JOB_FLOW.name(), actionOnFailure );\n  }\n\n  @Test\n  public void testAddStepToExistingJobFlow_getIdOfRunningStep() throws Exception {\n    String stagingS3FileUrl = \"s3://bucket/key/test.q\";\n    String stagingS3BucketUrl = \"s3://bucket\";\n    String stepType = \"hive\";\n    String mainClass = \"\";\n\n    doCallRealMethod().when( jobEntry ).setHadoopJobFlowId( anyString() );\n    jobEntry.setHadoopJobFlowId( \"j-11WRZQW6NIQOA\" );\n    doReturn( true ).when( jobEntry ).getAlive();\n\n    List<StepSummary> existingSteps = new ArrayList<>();\n    List<StepSummary> existingWithNewSteps = new ArrayList<>();\n\n    StepSummary 
stepSummary1 = new StepSummary();\n    stepSummary1.setId( \"s-1\" );\n    StepSummary stepSummary2 = new StepSummary();\n    stepSummary2.setId( \"s-2\" );\n    StepSummary stepSummary3 = new StepSummary();\n    stepSummary3.setId( \"s-3\" );\n\n    existingSteps.add( stepSummary1 );\n    existingSteps.add( stepSummary2 );\n\n    existingWithNewSteps.addAll( existingSteps );\n    existingWithNewSteps.add( stepSummary3 );\n\n    AddJobFlowStepsRequest jobFlowStepsRequest = mock( AddJobFlowStepsRequest.class );\n\n    doReturn( existingSteps, existingWithNewSteps ).when( emrClient ).getSteps();\n    doCallRealMethod().when( emrClient ).addStepToExistingJobFlow( anyString(), anyString(), anyString(), anyString(), nullable( AmazonHiveJobExecutor.class ) );\n\n    emrClient.addStepToExistingJobFlow( stagingS3FileUrl, stagingS3BucketUrl, stepType, mainClass, jobEntry );\n\n    Assert.assertEquals( stepSummary3.getId(), emrClient.getStepId() );\n  }\n\n  @Test\n  public void testStopSteps_whenLeaveClusterAlive() throws Exception {\n\n    doCallRealMethod().when( emrClient ).setAlive( true );\n    doCallRealMethod().when( emrClient ).isAlive();\n    doNothing().when( emrClient ).cancelStepExecution();\n\n    emrClient.setAlive( true );\n\n    boolean stopSteps = emrClient.stopSteps();\n\n    verify( emrClient, times( 0 ) ).terminateJobFlows();\n    verify( emrClient, times( 1 ) ).cancelStepExecution();\n    Assert.assertEquals( true, stopSteps );\n  }\n\n  @Test\n  public void testStopSteps_whenNotLeaveClusterAlive() throws Exception {\n\n    doCallRealMethod().when( emrClient ).setAlive( false );\n    doCallRealMethod().when( emrClient ).isAlive();\n    doNothing().when( emrClient ).terminateJobFlows();\n\n    emrClient.setAlive( false );\n\n    boolean stopSteps = emrClient.stopSteps();\n\n    verify( emrClient, times( 1 ) ).terminateJobFlows();\n    verify( emrClient, times( 0 ) ).cancelStepExecution();\n    Assert.assertEquals( false, stopSteps );\n  }\n\n  @Test\n  
public void testRemoveLineBreaks_whenBootstrapActionStringIsNull(){\n    String resultBootstrapString = EmrClientImpl.removeLineBreaks( null );\n    Assert.assertEquals( null, resultBootstrapString );\n  }\n\n  @Test\n  public void testRemoveLineBreaks_whenBootstrapActionStringIsSpaces(){\n\n    String bootstrapStringWithBreaks = \"  \";\n    String expectedString = \"\";\n\n    String resultBootstrapString = EmrClientImpl.removeLineBreaks( bootstrapStringWithBreaks );\n\n    Assert.assertEquals( expectedString, resultBootstrapString );\n  }\n\n  @Test\n  public void testRemoveLineBreaks_whenBootstrapActionStringIsEmpty(){\n\n    String bootstrapStringWithBreaks = \"\";\n    String expectedString = \"\";\n\n    String resultBootstrapString = EmrClientImpl.removeLineBreaks( bootstrapStringWithBreaks );\n\n    Assert.assertEquals( expectedString, resultBootstrapString );\n  }\n\n  @Test\n  public void testRemoveLineBreaks_whenBootstrapActionStringIsNotNull(){\n\n    String bootstrapStringWithBreaks = \" --bootstrap-action\\n \\\"s3://hive-input/copymyfile.sh\\\" --args\\n\\n   s3://hive-input/input1/weblogs_small.txt   \\n   \";\n    String expectedString = \"--bootstrap-action \\\"s3://hive-input/copymyfile.sh\\\" --args s3://hive-input/input1/weblogs_small.txt\";\n\n    String resultBootstrapString = EmrClientImpl.removeLineBreaks( bootstrapStringWithBreaks );\n\n    Assert.assertEquals( expectedString, resultBootstrapString );\n  }\n\n  @Test\n  public void testRemoveLineBreaks_whenBootstrapActionStringEqualsToExpectedString(){\n\n    String bootstrapStringWithBreaks = \"--bootstrap-action \\\"s3://hive-input/copymyfile.sh\\\" --args s3://hive-input/input1/weblogs_small.txt\";\n    String expectedString = \"--bootstrap-action \\\"s3://hive-input/copymyfile.sh\\\" --args s3://hive-input/input1/weblogs_small.txt\";\n\n    String resultBootstrapString = EmrClientImpl.removeLineBreaks( bootstrapStringWithBreaks );\n\n    Assert.assertEquals( expectedString, 
resultBootstrapString );\n  }\n\n  private void setJobEntryFields() {\n    jobEntry.setAlive( false );\n    jobEntry.setHadoopJobName( \"Test Job Executor\" );\n    jobEntry.setEmrRelease( \"emr-5.11.0\" );\n    jobEntry.setNumInstances( \"2\" );\n    jobEntry.setMasterInstanceType( \"c1.medium\" );\n    jobEntry.setSlaveInstanceType( \"c1.medium\" );\n    jobEntry.setEc2Role( \"default_ec2_role\" );\n    jobEntry.setEmrRole( \"default_emr_role\" );\n    jobEntry.setCmdLineArgs( \"-- bucket s3://test\" );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/client/impl/PricingClientImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.impl;\n\nimport com.amazonaws.services.pricing.AWSPricing;\nimport org.junit.Assert;\nimport org.junit.Before;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.amazon.client.api.PricingClient;\n\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static org.mockito.Mockito.doReturn;\nimport static org.mockito.Mockito.mock;\nimport static org.mockito.Mockito.spy;\n\n/**\n * Created by Aliaksandr_Zhuk on 2/8/2018.\n */\n@RunWith( MockitoJUnitRunner.class )\npublic class PricingClientImplTest {\n\n  private PricingClientImpl pricingClient;\n\n  @Before\n  public void setUp() {\n    AWSPricing awsPricing = mock( AWSPricing.class );\n    pricingClient = spy( new PricingClientImpl( awsPricing, \"US East (N. 
Virginia)\" ) );\n  }\n\n  @Test\n  public void testPopulateInstanceTypesForSelectedRegion_withValidValues() throws Exception {\n\n    List<String> expectedInstanceTypes = new ArrayList<>();\n    expectedInstanceTypes.add( \"c4.2xlarge\" );\n    expectedInstanceTypes.add( \"c4.4xlarge\" );\n\n    List<String> productDescriptions = new ArrayList<>();\n\n    String jsonDescriptionC2xlarge = \"{\" +\n      \"\\\"product\\\": {\" +\n      \"\\\"productFamily\\\": \\\"Elastic Map Reduce Instance\\\",\" +\n      \"\\\"attributes\\\": {\" +\n      \"\\\"servicecode\\\": \\\"ElasticMapReduce\\\",\" +\n      \"\\\"softwareType\\\": \\\"Hunk\\\",\" +\n      \"\\\"instanceType\\\": \\\"c4.2xlarge\\\",\" +\n      \"\\\"usagetype\\\": \\\"HunkBoxUsage:c4.2xlarge\\\",\" +\n      \"\\\"locationType\\\": \\\"AWS Region\\\",\" +\n      \"\\\"location\\\": \\\"US East (N. Virginia)\\\",\" +\n      \"\\\"servicename\\\": \\\"Amazon Elastic MapReduce\\\",\" +\n      \"\\\"instanceFamily\\\": \\\"Compute optimized\\\",\" +\n      \"\\\"operation\\\": \\\"\\\"\" +\n      \"},\" +\n      \"\\\"sku\\\": \\\"226GQ2CAZYZ8D7MG\\\"\" +\n      \"}}\";\n\n    String jsonDescriptionC4xlarge = \"{\" +\n      \"\\\"product\\\": {\" +\n      \"\\\"productFamily\\\": \\\"Elastic Map Reduce Instance\\\",\" +\n      \"\\\"attributes\\\": {\" +\n      \"\\\"servicecode\\\": \\\"ElasticMapReduce\\\",\" +\n      \"\\\"softwareType\\\": \\\"Hunk\\\",\" +\n      \"\\\"instanceType\\\": \\\"c4.4xlarge\\\",\" +\n      \"\\\"usagetype\\\": \\\"HunkBoxUsage:c4.4xlarge\\\",\" +\n      \"\\\"locationType\\\": \\\"AWS Region\\\",\" +\n      \"\\\"location\\\": \\\"US East (N. 
Virginia)\\\",\" +\n      \"\\\"servicename\\\": \\\"Amazon Elastic MapReduce\\\",\" +\n      \"\\\"instanceFamily\\\": \\\"Compute optimized\\\",\" +\n      \"\\\"operation\\\": \\\"\\\"\" +\n      \"},\" +\n      \"\\\"sku\\\": \\\"226GQ2CAZYZ8D7MG\\\"\" +\n      \"}}\";\n\n    productDescriptions.add( jsonDescriptionC2xlarge );\n    productDescriptions.add( jsonDescriptionC4xlarge );\n\n    doReturn( productDescriptions ).when( pricingClient ).getProductDescriptions();\n\n    List<String> instanceTypes = pricingClient.populateInstanceTypesForSelectedRegion();\n\n    Assert.assertEquals( expectedInstanceTypes, instanceTypes );\n  }\n\n  @Test\n  public void testPopulateInstanceTypesForSelectedRegion_whenDescriptionIsNull() throws Exception {\n\n    List<String> productDescriptions = null;\n\n    doReturn( productDescriptions ).when( pricingClient ).getProductDescriptions();\n\n    List<String> instanceTypes = pricingClient.populateInstanceTypesForSelectedRegion();\n\n    Assert.assertNull( instanceTypes );\n  }\n\n  @Test\n  public void testPopulateInstanceTypesForSelectedRegion_whenDescriptionIsEmpty() throws Exception {\n\n    List<String> productDescriptions = new ArrayList<>();\n\n    doReturn( productDescriptions ).when( pricingClient ).getProductDescriptions();\n\n    List<String> instanceTypes = pricingClient.populateInstanceTypesForSelectedRegion();\n\n    Assert.assertNull( instanceTypes );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/client/impl/S3ClientImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.client.impl;\n\nimport com.amazonaws.services.s3.AmazonS3Client;\nimport com.amazonaws.services.s3.model.S3Object;\nimport com.amazonaws.services.s3.model.S3ObjectInputStream;\nimport org.apache.commons.io.FileUtils;\nimport org.junit.Assert;\nimport org.junit.Before;\nimport org.junit.Rule;\nimport org.junit.Test;\nimport org.junit.rules.TemporaryFolder;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mock;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\n\nimport java.io.File;\nimport java.io.FileInputStream;\nimport java.io.FileOutputStream;\nimport java.nio.file.Paths;\nimport java.util.zip.GZIPOutputStream;\n\nimport static org.mockito.ArgumentMatchers.anyString;\nimport static org.mockito.ArgumentMatchers.eq;\nimport static org.mockito.Mockito.doCallRealMethod;\nimport static org.mockito.Mockito.doReturn;\nimport static org.mockito.Mockito.spy;\n\n\n/**\n * Created by Aliaksandr_Zhuk on 2/8/2018.\n */\n\n@RunWith( MockitoJUnitRunner.class )\npublic class S3ClientImplTest {\n\n  @Rule\n  public TemporaryFolder temporaryFolder = new TemporaryFolder();\n\n  @Mock\n  private AmazonS3Client awsS3Client;\n  private S3ClientImpl s3Client;\n  private String logFileName;\n  private String gzArchName;\n  private File stagingFolder;\n\n  @Before\n  public void setUp() throws Exception {\n    stagingFolder = temporaryFolder.newFolder( \"emr\" );\n    String path = getClass().getClassLoader().getResource( \"master.log\" ).getPath();\n    File logFile = new File( path );\n    
FileUtils.copyFileToDirectory( logFile, stagingFolder );\n\n    logFileName = Paths.get( stagingFolder.getPath(), \"master.log\" ).toString();\n    gzArchName = Paths.get( stagingFolder.getPath(), \"stderr.gz\" ).toString();\n\n    createGzArchive();\n\n    s3Client = spy( new S3ClientImpl( awsS3Client ) );\n  }\n\n  @Test\n  public void testReadStepLogsFromS3_whenLogFileExistsInS3() throws Exception {\n    FileInputStream inputStream = null;\n    S3ObjectInputStream s3ObjectInputStream = null;\n    String stagingBucketName = \"alzhk\";\n    String hadoopJobFlowId = \"j-11WRZQW6NIQOA\";\n    String stepId = \"s-15PK2NMVIPRPF\";\n\n    doReturn( \"\" )\n      .when( s3Client ).readLogFromS3( stagingBucketName, hadoopJobFlowId + \"/steps/\" + stepId + \"/controller.gz\" );\n    doReturn( \"\" )\n      .when( s3Client ).readLogFromS3(  stagingBucketName, hadoopJobFlowId + \"/steps/\" + stepId + \"/stdout.gz\" );\n    doReturn( \"\" )\n      .when( s3Client ).readLogFromS3(  stagingBucketName, hadoopJobFlowId + \"/steps/\" + stepId + \"/syslog.gz\" );\n\n    try {\n      inputStream = new FileInputStream( gzArchName );\n      s3ObjectInputStream = new S3ObjectInputStream( inputStream, null, false );\n      S3Object s3Object = new S3Object();\n      s3Object.setObjectContent( s3ObjectInputStream );\n\n      Mockito.when( awsS3Client.getObject( anyString(), anyString() ) ).thenReturn( s3Object );\n      Mockito.when( awsS3Client.doesObjectExist( anyString(), anyString() ) ).thenReturn( true );\n\n      String lineFromStepLogFile = s3Client.readStepLogsFromS3( stagingBucketName, hadoopJobFlowId, stepId );\n      lineFromStepLogFile = lineFromStepLogFile.replace( \"\\n\", \"\" ).replace( \"\\r\", \"\" );\n      Assert.assertEquals( \"all bootstrap actions complete and instance ready\", lineFromStepLogFile );\n    } finally {\n      if ( inputStream != null ) {\n        inputStream.close();\n      }\n      if ( s3ObjectInputStream != null ) {\n        
s3ObjectInputStream.close();\n      }\n    }\n  }\n\n  @Test\n  public void testReadStepLogsFromS3_whenLogFileNotExistsInS3() throws Exception {\n    FileInputStream inputStream = null;\n    S3ObjectInputStream s3ObjectInputStream = null;\n    String stagingBucketName = \"alzhk\";\n    String hadoopJobFlowId = \"j-11WRZQW6NIQOA\";\n    String stepId = \"s-15PK2NMVIPRPF\";\n    String expectedLineFromStepLogFile =\n      \"Step \" + stepId + \" failed. See logs here: s3://\" + stagingBucketName + \"/\" + hadoopJobFlowId + \"/steps/\"\n        + stepId;\n\n    doReturn( \"\" )\n      .when( s3Client ).readLogFromS3( eq( stagingBucketName ), anyString() );\n    doCallRealMethod().when( s3Client ).readStepLogsFromS3( anyString(), anyString(), anyString() );\n\n    try {\n      inputStream = new FileInputStream( gzArchName );\n      s3ObjectInputStream = new S3ObjectInputStream( inputStream, null, false );\n      S3Object s3Object = new S3Object();\n      s3Object.setObjectContent( s3ObjectInputStream );\n\n      String lineFromStepLogFile = s3Client.readStepLogsFromS3( stagingBucketName, hadoopJobFlowId, stepId );\n      lineFromStepLogFile = lineFromStepLogFile.replace( \"\\n\", \"\" ).replace( \"\\r\", \"\" );\n\n      Assert.assertEquals( expectedLineFromStepLogFile, lineFromStepLogFile );\n    } finally {\n      if ( inputStream != null ) {\n        inputStream.close();\n      }\n      if ( s3ObjectInputStream != null ) {\n        s3ObjectInputStream.close();\n      }\n    }\n  }\n\n  private void createGzArchive() throws Exception {\n\n    try ( FileInputStream fileInputStream = new FileInputStream( logFileName );\n         FileOutputStream fileOutputStream = new FileOutputStream( gzArchName );\n         GZIPOutputStream gzipOutputStream = new GZIPOutputStream( fileOutputStream ) ) {\n      byte[] buffer = new byte[ 1024 ];\n      int len;\n      while ( ( len = fileInputStream.read( buffer ) ) != -1 ) {\n        gzipOutputStream.write( buffer, 0, len );\n      
}\n    }\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/emr/job/AmazonElasticMapReduceJobExecutorLoadSaveTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.emr.job;\n\nimport java.util.Arrays;\nimport java.util.List;\n\nimport org.pentaho.di.job.entry.loadSave.JobEntryLoadSaveTestSupport;\n\npublic class AmazonElasticMapReduceJobExecutorLoadSaveTest\nextends JobEntryLoadSaveTestSupport<AmazonElasticMapReduceJobExecutor> {\n\n  @Override\n  protected Class<AmazonElasticMapReduceJobExecutor> getJobEntryClass() {\n    return AmazonElasticMapReduceJobExecutor.class;\n  }\n\n  @Override\n  protected List<String> listCommonAttributes() {\n    return Arrays.asList( \"HadoopJobName\", \"HadoopJobFlowId\", \"JarUrl\", \"AccessKey\", \"SecretKey\",\n      \"StagingDir\", \"NumInstances\", \"MasterInstanceType\", \"SlaveInstanceType\", \"CmdLineArgs\",\n      \"Blocking\", \"LoggingInterval\", \"HadoopJobName\" );\n  }\n\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/java/org/pentaho/amazon/hive/job/AmazonHiveJobExecutorLoadSaveTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\n\npackage org.pentaho.amazon.hive.job;\n\nimport java.util.Arrays;\nimport java.util.List;\n\nimport org.pentaho.di.job.entry.loadSave.JobEntryLoadSaveTestSupport;\n\npublic class AmazonHiveJobExecutorLoadSaveTest extends JobEntryLoadSaveTestSupport<AmazonHiveJobExecutor> {\n\n  @Override\n  protected Class<AmazonHiveJobExecutor> getJobEntryClass() {\n    return AmazonHiveJobExecutor.class;\n  }\n\n  @Override\n  protected List<String> listCommonAttributes() {\n    return Arrays.asList( \"HadoopJobName\", \"HadoopJobFlowId\", \"QUrl\", \"AccessKey\", \"SecretKey\",\n      \"BootstrapActions\", \"StagingDir\", \"NumInstances\", \"MasterInstanceType\", \"SlaveInstanceType\",\n      \"CmdLineArgs\", \"Alive\", \"Blocking\", \"LoggingInterval\", \"HadoopJobName\" );\n  }\n}\n"
  },
  {
    "path": "legacy-amazon/core/src/test/resources/master.log",
    "content": "all bootstrap actions complete and instance ready"
  },
  {
    "path": "legacy-amazon/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n\n  <parent>\n    <groupId>pentaho</groupId>\n    <artifactId>pentaho-big-data-parent</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n\n  <artifactId>pentaho-big-data-legacy-amazon</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <name>PDI Legacy Amazon Plugin</name>\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n\n  <modules>\n    <module>assemblies</module>\n    <module>core</module>\n  </modules>\n</project>\n"
  },
  {
    "path": "legacy-core/pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n    <modelVersion>4.0.0</modelVersion>\n    <parent>\n        <groupId>pentaho</groupId>\n        <artifactId>pentaho-big-data-parent</artifactId>\n        <version>11.1.0.0-SNAPSHOT</version>\n    </parent>\n    <artifactId>pentaho-big-data-legacy-core</artifactId>\n    <packaging>jar</packaging>\n    <version>11.1.0.0-SNAPSHOT</version>\n    <licenses>\n        <license>\n            <name>Apache License, Version 2.0</name>\n            <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>\n            <distribution>repo</distribution>\n            <comments>A business-friendly OSS license</comments>\n        </license>\n    </licenses>\n    <dependencies>\n        <dependency>\n            <groupId>pentaho-kettle</groupId>\n            <artifactId>kettle-engine</artifactId>\n            <version>${pdi.version}</version>\n            <scope>provided</scope>\n        </dependency>\n        <dependency>\n            <groupId>org.pentaho</groupId>\n            <artifactId>shim-api</artifactId>\n            <version>${pentaho-hadoop-shims.version}</version>\n        </dependency>\n    </dependencies>\n</project>\n"
  },
  {
    "path": "legacy-core/src/main/java/org/pentaho/big/data/api/services/BigDataServicesHelper.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.api.services;\n\nimport org.pentaho.di.core.exception.KettlePluginException;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.hadoop.shim.api.services.BigDataServicesProxy;\n\nimport java.util.Collection;\n\npublic class BigDataServicesHelper {\n\n\n  private BigDataServicesHelper() {\n  }\n\n  public static NamedClusterServiceLocator getNamedClusterServiceLocator() {\n    try {\n      Collection<BigDataServicesProxy> namedClusterServiceLocatorFactories = PluginServiceLoader.loadServices( BigDataServicesProxy.class );\n      return namedClusterServiceLocatorFactories.stream().findFirst().map( BigDataServicesProxy::getNamedClusterServiceLocator ).orElse( null );\n    } catch ( KettlePluginException e ) {\n      e.printStackTrace();\n      return null;\n    }\n  }\n\n  public static HadoopFileSystemLocator getHadoopFileSystemLocator() {\n    try {\n      Collection<BigDataServicesProxy> namedClusterServiceLocatorFactories = PluginServiceLoader.loadServices( BigDataServicesProxy.class );\n      return namedClusterServiceLocatorFactories.stream().findFirst().map( BigDataServicesProxy::getHadoopFileSystemLocator ).orElse( null );\n    } catch ( KettlePluginException e ) {\n      e.printStackTrace();\n      return null;\n    }\n  }\n\n  public static String getShimIdentifier() {\n    try {\n      
Collection<BigDataServicesProxy> namedClusterServiceLocatorFactories = PluginServiceLoader.loadServices( BigDataServicesProxy.class );\n      return namedClusterServiceLocatorFactories.stream().findFirst().map( BigDataServicesProxy::getShimIdentifier ).orElse( null );\n    } catch ( Exception e ) {\n      return null;\n    }\n  }\n\n  public static NamedClusterService getNamedClusterService() {\n    try {\n      Collection<BigDataServicesProxy> bigDataServicesProxies = PluginServiceLoader.loadServices( BigDataServicesProxy.class );\n      return bigDataServicesProxies.stream().findFirst().map( BigDataServicesProxy::getNamedClusterService ).orElse( null );\n    } catch ( KettlePluginException e ) {\n      e.printStackTrace();\n      return null;\n    }\n  }\n}\n"
  },
  {
    "path": "pom.xml",
    "content": "<?xml version=\"1.0\"?>\n<project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\"\n    xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>org.pentaho</groupId>\n    <artifactId>pentaho-ce-jar-parent-pom</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n  </parent>\n  <groupId>pentaho</groupId>\n  <artifactId>pentaho-big-data-parent</artifactId>\n  <version>11.1.0.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <name>Pentaho Community Edition Project: ${project.artifactId}</name>\n  <description>Parent project for Pentaho Big Data Plugin</description>\n  <url>http://www.pentaho.com</url>\n  <licenses>\n    <license>\n      <name>Apache License, Version 2.0</name>\n      <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>\n      <distribution>repo</distribution>\n      <comments>A business-friendly OSS license</comments>\n    </license>\n  </licenses>\n  <scm>\n    <connection>scm:git:git@github.com:${github.user}/${project.artifactId}.git</connection>\n    <developerConnection>scm:git:git@github.com:${github.user}/${project.artifactId}.git</developerConnection>\n    <url>scm:git:git@github.com:${github.user}/${project.artifactId}.git</url>\n  </scm>\n  <properties>\n    <pentaho-hadoop-shims.version>11.1.0.0-SNAPSHOT</pentaho-hadoop-shims.version>\n    <pentaho-hadoop-shims-cdh514.version>11.1.0.0-SNAPSHOT</pentaho-hadoop-shims-cdh514.version>\n    <pentaho-hadoop-shims-emr770.version>11.1.0.0-SNAPSHOT</pentaho-hadoop-shims-emr770.version>\n    <pentaho-hadoop-shims-dataproc23.version>11.1.0.0-SNAPSHOT</pentaho-hadoop-shims-dataproc23.version>\n    <pentaho-hadoop-shims-cdpdc71.version>11.1.0.0-SNAPSHOT</pentaho-hadoop-shims-cdpdc71.version>\n    <pentaho-hadoop-shims-hdi40.version>11.1.0.0-SNAPSHOT</pentaho-hadoop-shims-hdi40.version>\n    
<dependency.mockito.revision>1.10.19</dependency.mockito.revision>\n    <dependency.joda-time.revision>2.9.4</dependency.joda-time.revision>\n    <pentaho-json.version>11.1.0.0-SNAPSHOT</pentaho-json.version>\n    <plugin.org.codehaus.mojo.build-helper-maven-plugin.version>1.5</plugin.org.codehaus.mojo.build-helper-maven-plugin.version>\n    <pentaho-registry.version>11.1.0.0-SNAPSHOT</pentaho-registry.version>\n    <dependency.maven-resources-plugin.revision>2.7</dependency.maven-resources-plugin.revision>\n    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n    <dependency.common.revision>3.3.0-v20070426</dependency.common.revision>\n    <metastore.version>11.1.0.0-SNAPSHOT</metastore.version>\n    <dependency.commons-configuration.revision>1.9</dependency.commons-configuration.revision>\n    <dependency.high-scale-lib.revision>1.1.2</dependency.high-scale-lib.revision>\n    <mondrian.version>11.1.0.0-SNAPSHOT</mondrian.version>\n    <pdi.version>11.1.0.0-SNAPSHOT</pdi.version>\n    <pentaho-reporting.version>11.1.0.0-SNAPSHOT</pentaho-reporting.version>\n    <dependency.xpp3.revision>1.1.4c</dependency.xpp3.revision>\n    <dependency.jets3t.revision>0.9.4</dependency.jets3t.revision>\n    <dependency.oozie.revision>3.1.3-incubating</dependency.oozie.revision>\n    <pentaho-metadata.version>11.1.0.0-SNAPSHOT</pentaho-metadata.version>\n    <dependency.maven-clean-plugin.revision>2.5</dependency.maven-clean-plugin.revision>\n    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>\n    <commons-database.version>11.1.0.0-SNAPSHOT</commons-database.version>\n    <dependency.jackson.revision>1.5.2</dependency.jackson.revision>\n    <dependency.commands.revision>3.3.0-I20070605-0010</dependency.commands.revision>\n    <publish-sonar-phase></publish-sonar-phase>\n    <plugin.org.apache.maven.plugins.maven-failsafe-plugin.version>2.17</plugin.org.apache.maven.plugins.maven-failsafe-plugin.version>\n    
<dependency.junit.revision>4.13.2</dependency.junit.revision>\n    <dependency.jline.revision>0.9.94</dependency.jline.revision>\n    <dependency.xmlpull.revision>1.1.3.1</dependency.xmlpull.revision>\n    <dependency.bean-matchers.revision>0.9</dependency.bean-matchers.revision>\n    <dependency.cxf.revision>3.0.13</dependency.cxf.revision>\n    <dependency.hamcrest.revision>1.3</dependency.hamcrest.revision>\n    <commons-xul.version>11.1.0.0-SNAPSHOT</commons-xul.version>\n    <platform.version>11.1.0.0-SNAPSHOT</platform.version>\n    <pentaho-karaf.version>11.1.0.0-SNAPSHOT</pentaho-karaf.version>\n    <pentaho-metaverse.version>11.1.0.0-SNAPSHOT</pentaho-metaverse.version>\n    <dependency.pdi-osgi-bridge-core.revision>${project.version}</dependency.pdi-osgi-bridge-core.revision>\n    <thoughtworks.paranamer.version>2.7</thoughtworks.paranamer.version>\n    <dependency.commons-httpclient.revision>3.1</dependency.commons-httpclient.revision>\n    <mockito.version>5.10.0</mockito.version>\n    <hamcrest.version>1.3</hamcrest.version>\n    <mockito-inline.version>5.2.0</mockito-inline.version>\n    <parquet.version>1.15.2</parquet.version>\n    <jackson-asl.version>1.9.13-cloudera.4</jackson-asl.version>\n    <maven-surefire-plugin.argLine> --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED </maven-surefire-plugin.argLine>\n  </properties>\n\n  <dependencyManagement>\n    <dependencies>\n      <dependency>\n        <groupId>org.mockito</groupId>\n        <artifactId>mockito-core</artifactId>\n        <version>${mockito.version}</version>\n        <scope>test</scope>\n      </dependency>\n      <dependency>\n        <groupId>org.codehaus.jackson</groupId>\n        <artifactId>jackson-mapper-asl</artifactId>\n        <version>${jackson-asl.version}</version>\n      </dependency>\n      <dependency>\n        
<groupId>org.codehaus.jackson</groupId>\n        <artifactId>jackson-core-asl</artifactId>\n        <version>${jackson-asl.version}</version>\n      </dependency>\n    </dependencies>\n  </dependencyManagement>\n  <profiles>\n    <profile>\n      <id>lowdeps</id>\n      <activation>\n        <property>\n          <name>!skipDefault</name>\n        </property>\n      </activation>\n      <modules>\n        <module>legacy-core</module>\n        <module>legacy</module>\n        <module>api</module>\n        <module>impl</module>\n        <module>authentication-mapper</module>\n      </modules>\n    </profile>\n    <profile>\n      <id>highdeps</id>\n      <activation>\n        <property>\n          <name>!skipDefault</name>\n        </property>\n      </activation>\n      <modules>\n        <module>legacy-amazon</module>\n        <module>kettle-plugins</module>\n      </modules>\n    </profile>\n    <profile>\n      <id>bootstrap</id>\n      <activation>\n        <property>\n          <name>!skipDefault</name>\n        </property>\n      </activation>\n      <modules>\n        <module>services-bootstrap</module>\n      </modules>\n    </profile>\n    <profile>\n      <id>assembly</id>\n      <activation>\n        <property>\n          <name>!skipDefault</name>\n        </property>\n      </activation>\n      <modules>\n        <module>assemblies</module>\n      </modules>\n    </profile>\n  </profiles>\n\n  <repositories>\n    <repository>\n      <id>pentaho-public</id>\n      <name>Pentaho Public</name>\n      <url>https://repo.orl.eng.hitachivantara.com/artifactory/pnt-mvn/</url>\n      <releases>\n        <enabled>true</enabled>\n        <updatePolicy>daily</updatePolicy>\n      </releases>\n      <snapshots>\n        <enabled>true</enabled>\n        <updatePolicy>interval:15</updatePolicy>\n      </snapshots>\n    </repository>\n  </repositories>\n\n  <pluginRepositories>\n    <pluginRepository>\n      <id>pentaho-public-plugins</id>\n      <name>Pentaho Public 
Plugins</name>\n      <url>https://repo.orl.eng.hitachivantara.com/artifactory/pnt-mvn/</url>\n      <snapshots>\n        <enabled>false</enabled>\n      </snapshots>\n      <releases>\n        <updatePolicy>never</updatePolicy>\n      </releases>\n    </pluginRepository>\n  </pluginRepositories>\n</project>\n\n"
  },
  {
    "path": "services-bootstrap/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n    <modelVersion>4.0.0</modelVersion>\n    <parent>\n        <groupId>pentaho</groupId>\n        <artifactId>pentaho-big-data-parent</artifactId>\n        <version>11.1.0.0-SNAPSHOT</version>\n    </parent>\n\n    <artifactId>services-bootstrap</artifactId>\n    <version>11.1.0.0-SNAPSHOT</version>\n\n    <properties>\n        <junit.version>4.13.2</junit.version>\n        <mockito-inline.version>5.2.0</mockito-inline.version>\n        <mockito-junit-jupitor.version>5.1.1</mockito-junit-jupitor.version>\n        <osgi.core.version>8.0.0</osgi.core.version>\n    </properties>\n    <dependencies>\n        <dependency>\n            <groupId>pentaho-kettle</groupId>\n            <artifactId>kettle-core</artifactId>\n            <version>${pdi.version}</version>\n            <scope>provided</scope>\n        </dependency>\n        <dependency>\n            <groupId>pentaho-kettle</groupId>\n            <artifactId>kettle-engine</artifactId>\n            <version>${pdi.version}</version>\n            <scope>provided</scope>\n        </dependency>\n        <dependency>\n            <groupId>pentaho</groupId>\n            <artifactId>pentaho-big-data-legacy-core</artifactId>\n            <version>${project.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>pentaho</groupId>\n            <artifactId>pentaho-big-data-legacy</artifactId>\n            <version>${project.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>org.pentaho.hadoop.shims</groupId>\n            <artifactId>pentaho-hadoop-shims-common-base</artifactId>\n            <version>${pentaho-hadoop-shims.version}</version>\n        </dependency>\n        
<dependency>\n            <groupId>org.pentaho</groupId>\n            <artifactId>shim-api-core</artifactId>\n            <version>${pentaho-hadoop-shims.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>org.pentaho</groupId>\n            <artifactId>shim-api</artifactId>\n            <version>${pentaho-hadoop-shims.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>org.pentaho</groupId>\n            <artifactId>pentaho-hadoop-shims-common-services-api</artifactId>\n            <version>${pentaho-hadoop-shims.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>pentaho</groupId>\n            <artifactId>pentaho-big-data-impl-vfs-hdfs-core</artifactId>\n            <version>${project.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>pentaho</groupId>\n            <artifactId>pentaho-big-data-api-runtimeTest</artifactId>\n            <version>${project.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>pentaho</groupId>\n            <artifactId>pentaho-big-data-impl-clusterTests</artifactId>\n            <version>${pdi.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>pentaho</groupId>\n            <artifactId>pdi-hive-core</artifactId>\n            <version>${project.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>pentaho</groupId>\n            <artifactId>pentaho-big-data-kettle-plugins-browse</artifactId>\n            <version>${project.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>pentaho</groupId>\n            <artifactId>pentaho-authentication-mapper-impl</artifactId>\n            <version>${project.version}</version>\n        </dependency>\n        <dependency>\n            <groupId>pentaho</groupId>\n            <artifactId>pentaho-platform-core</artifactId>\n            <version>${project.version}</version>\n            <scope>compile</scope>\n        </dependency>\n        <dependency>\n            <groupId>pentaho-kettle</groupId>\n            <artifactId>kettle-core</artifactId>\n            <version>${pdi.version}</version>\n            <scope>compile</scope>\n        </dependency>\n        <!-- Test dependencies -->\n        <dependency>\n          <groupId>junit</groupId>\n          <artifactId>junit</artifactId>\n          <version>${junit.version}</version>\n          <scope>test</scope>\n        </dependency>\n        <dependency>\n          <groupId>org.mockito</groupId>\n          <artifactId>mockito-inline</artifactId>\n          <version>${mockito-inline.version}</version>\n          <scope>test</scope>\n        </dependency>\n        <dependency>\n          <groupId>org.mockito</groupId>\n          <artifactId>mockito-junit-jupiter</artifactId>\n          <version>${mockito-junit-jupitor.version}</version>\n          <scope>test</scope>\n        </dependency>\n        <dependency>\n            <groupId>org.osgi</groupId>\n            <artifactId>osgi.core</artifactId>\n            <scope>test</scope>\n        </dependency>\n    </dependencies>\n</project>\n"
  },
  {
    "path": "services-bootstrap/src/main/java/org/pentaho/big/data/api/services/impl/BigDataServicesProxyImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.api.services.impl;\n\nimport org.pentaho.big.data.api.cluster.service.locator.impl.NamedClusterServiceLocatorImpl;\nimport org.pentaho.big.data.hadoop.bootstrap.HadoopConfigurationBootstrap;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.bigdata.api.hdfs.impl.HadoopFileSystemLocatorImpl;\nimport org.pentaho.di.core.service.ServiceProvider;\nimport org.pentaho.di.core.service.ServiceProviderInterface;\nimport org.pentaho.hadoop.shim.HadoopConfiguration;\nimport org.pentaho.hadoop.shim.HadoopConfigurationLocator;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterService;\nimport org.pentaho.hadoop.shim.api.cluster.NamedClusterServiceLocator;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemLocator;\nimport org.pentaho.hadoop.shim.api.internal.ShimIdentifier;\nimport org.pentaho.hadoop.shim.api.services.BigDataServicesProxy;\n\n@ServiceProvider(\n        id = \"BigDataServicesProxy\",\n        description = \"Provides access to shared big data services\",\n        provides = BigDataServicesProxy.class\n)\npublic class BigDataServicesProxyImpl implements BigDataServicesProxy, ServiceProviderInterface<BigDataServicesProxy> {\n\n    private static NamedClusterServiceLocator namedClusterServiceLocator = null;\n    private static HadoopFileSystemLocator hadoopFileSystemLocator = null;\n    private static NamedClusterService namedClusterService = null;\n\n    @Override\n    public boolean isSingleton() {\n        return true;\n    }\n\n\n    @Override\n    public 
NamedClusterServiceLocator getNamedClusterServiceLocator() {\n        if ( namedClusterServiceLocator == null ) {\n            String shimIdentifier = getShimIdentifier();\n            if ( shimIdentifier == null ) {\n                shimIdentifier = \"NONE\";\n            }\n            namedClusterServiceLocator = NamedClusterServiceLocatorImpl.getInstance( shimIdentifier );\n        }\n        return namedClusterServiceLocator;\n    }\n\n    @Override\n    public HadoopFileSystemLocator getHadoopFileSystemLocator() {\n        if ( hadoopFileSystemLocator == null ) {\n            hadoopFileSystemLocator = HadoopFileSystemLocatorImpl.getInstance();\n        }\n        return hadoopFileSystemLocator;\n    }\n\n    @Override\n    public String getShimIdentifier() {\n        try {\n            HadoopConfigurationBootstrap hadoopConfigurationBootstrap = HadoopConfigurationBootstrap.getInstance();\n            HadoopConfigurationLocator hadoopConfigurationProvider = (HadoopConfigurationLocator) hadoopConfigurationBootstrap.getProvider();\n            HadoopConfiguration hadoopConfiguration = hadoopConfigurationProvider.getActiveConfiguration();\n            ShimIdentifier identifier = hadoopConfiguration.getHadoopShim().getShimIdentifier();\n            return identifier.getId();\n        } catch ( org.pentaho.hadoop.shim.api.ConfigurationException e ) {\n            return null;\n        }\n    }\n\n    @Override\n    public NamedClusterService getNamedClusterService() {\n        if ( namedClusterService == null ) {\n            namedClusterService = NamedClusterManager.getInstance();\n        }\n        return namedClusterService;\n    }\n}\n"
  },
  {
    "path": "services-bootstrap/src/main/java/org/pentaho/big/data/hadoop/bootstrap/HadoopConfigurationBootstrap.java",
    "content": "/*******************************************************************************\n * Pentaho Big Data\n * <p/>\n * Copyright (C) 2002-2017 by Hitachi Vantara : http://www.pentaho.com\n * <p/>\n * ******************************************************************************\n * <p/>\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with\n * the License. You may obtain a copy of the License at\n * <p/>\n * http://www.apache.org/licenses/LICENSE-2.0\n * <p/>\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n ******************************************************************************/\n\npackage org.pentaho.big.data.hadoop.bootstrap;\n\nimport org.apache.commons.lang.math.NumberUtils;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileType;\nimport org.apache.commons.vfs2.impl.DefaultFileSystemManager;\nimport org.pentaho.di.core.annotations.KettleLifecyclePlugin;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.hadoop.HadoopConfigurationInfo;\nimport org.pentaho.di.core.hadoop.HadoopConfigurationPrompter;\nimport org.pentaho.di.core.hadoop.HadoopSpoonPlugin;\nimport org.pentaho.di.core.lifecycle.KettleLifecycleListener;\nimport org.pentaho.di.core.lifecycle.LifecycleException;\nimport org.pentaho.di.core.logging.LogChannel;\nimport org.pentaho.di.core.logging.LogChannelInterface;\nimport org.pentaho.di.core.plugins.LifecyclePluginType;\nimport org.pentaho.di.core.plugins.PluginInterface;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.util.Utils;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport 
org.pentaho.di.i18n.BaseMessages;\nimport org.pentaho.hadoop.PluginPropertiesUtil;\nimport org.pentaho.hadoop.shim.HadoopConfiguration;\nimport org.pentaho.hadoop.shim.HadoopConfigurationLocator;\nimport org.pentaho.hadoop.shim.api.ConfigurationException;\nimport org.pentaho.hadoop.shim.api.internal.ActiveHadoopConfigurationLocator;\nimport org.pentaho.hadoop.shim.api.internal.ShimProperties;\nimport org.pentaho.hadoop.shim.spi.HadoopConfigurationProvider;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.net.URL;\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.List;\nimport java.util.Properties;\nimport java.util.Set;\nimport java.util.concurrent.ConcurrentHashMap;\nimport java.util.concurrent.CountDownLatch;\n\n/**\n * This class serves to initialize the Hadoop Configuration subsystem. This class provides an anchor point for all\n * Hadoop Configuration-related lookups to happen.\n */\n@KettleLifecyclePlugin( id = \"HadoopConfigurationBootstrap\", name = \"Hadoop Configuration Bootstrap\" )\npublic class HadoopConfigurationBootstrap implements KettleLifecycleListener, ActiveHadoopConfigurationLocator {\n  public static final String PLUGIN_ID = \"HadoopConfigurationBootstrap\";\n  public static final String PROPERTY_ACTIVE_HADOOP_CONFIGURATION = \"active.hadoop.configuration\";\n  public static final String PROPERTY_HADOOP_CONFIGURATIONS_PATH = \"hadoop.configurations.path\";\n  public static final String DEFAULT_FOLDER_HADOOP_CONFIGURATIONS = \"hadoop-configurations\";\n  public static final String CONFIG_PROPERTIES = \"config.properties\";\n  private static final Class<?> PKG = HadoopConfigurationBootstrap.class;\n  public static final String PMR_PROPERTIES = \"pmr.properties\";\n  private static final String NOTIFICATIONS_BEFORE_LOADING_SHIM = \"notificationsBeforeLoadingShim\";\n  private static LogChannelInterface log = new LogChannel( BaseMessages.getString( PKG, 
\"HadoopConfigurationBootstrap.LoggingPrefix\" ) );\n  private static HadoopConfigurationBootstrap instance = new HadoopConfigurationBootstrap();\n  private final Set<HadoopConfigurationListener> hadoopConfigurationListeners =\n    Collections.newSetFromMap( new ConcurrentHashMap<HadoopConfigurationListener, Boolean>() );\n  /**\n   * Number of notifications to receive before shim may be loaded.\n   */\n  private final CountDownLatch remainingDependencies = new CountDownLatch(\n    NumberUtils.toInt( getMergedPmrAndPluginProperties().getProperty( NOTIFICATIONS_BEFORE_LOADING_SHIM ), 0 ) );\n\n  private HadoopConfigurationPrompter prompter;\n  private HadoopConfigurationProvider provider;\n  /**\n   * Cached plugin description for locating Plugin\n   */\n  private PluginInterface plugin;\n\n  /**\n   * @return A Hadoop configuration provider capable of finding Hadoop configurations loaded for this Big Data Plugin\n   * instance\n   * @throws ConfigurationException The provider is not initialized (KettleEnvironment.init() has not been called)\n   */\n  public static HadoopConfigurationProvider getHadoopConfigurationProvider() throws ConfigurationException {\n    return instance.getProvider();\n  }\n\n  public static HadoopConfigurationBootstrap getInstance() {\n    return instance;\n  }\n\n  protected static void setInstance( HadoopConfigurationBootstrap instance ) {\n    HadoopConfigurationBootstrap.instance = instance;\n  }\n\n  public HadoopConfigurationProvider getProvider() throws ConfigurationException {\n    initProvider();\n    return provider;\n  }\n\n  public void setPrompter( HadoopConfigurationPrompter prompter ) {\n    this.prompter = prompter;\n  }\n\n  protected synchronized void initProvider() throws ConfigurationException {\n    if ( provider == null ) {\n      HadoopConfigurationPrompter prompter = this.prompter;\n      if ( Utils.isEmpty( getWillBeActiveConfigurationId() ) && prompter != null ) {\n        try {\n          setActiveShim( 
prompter.getConfigurationSelection( getHadoopConfigurationInfos() ) );\n        } catch ( Exception e ) {\n          throw new ConfigurationException( e.getMessage(), e );\n        }\n      }\n\n      if ( Utils.isEmpty( getWillBeActiveConfigurationId() ) ) {\n        log.logBasic( \"WARNING: \" + BaseMessages.getString( PKG, \"HadoopConfigurationBootstrap.HadoopConfiguration.NoShimSet\" ) );\n//        throw new NoShimSpecifiedException(\n//          BaseMessages.getString( PKG, \"HadoopConfigurationBootstrap.HadoopConfiguration.NoShimSet\" ) );\n        return;\n      }\n\n      // Initialize the HadoopConfigurationProvider\n      try {\n        FileObject hadoopConfigurationsDir = resolveHadoopConfigurationsDirectory();\n        HadoopConfigurationProvider p = initializeHadoopConfigurationProvider( hadoopConfigurationsDir );\n\n        // verify the active configuration exists\n        HadoopConfiguration activeConfig = null;\n        try {\n          activeConfig = p.getActiveConfiguration();\n        } catch ( Exception ex ) {\n          throw new ConfigurationException( BaseMessages\n            .getString( PKG, \"HadoopConfigurationBootstrap.HadoopConfiguration.InvalidActiveConfiguration\",\n              getActiveConfigurationId() ), ex );\n        }\n        if ( activeConfig == null ) {\n          throw new ConfigurationException( BaseMessages\n            .getString( PKG, \"HadoopConfigurationBootstrap.HadoopConfiguration.InvalidActiveConfiguration\",\n              getActiveConfigurationId() ) );\n        }\n\n        provider = p;\n\n        for ( HadoopConfigurationListener hadoopConfigurationListener : hadoopConfigurationListeners ) {\n          hadoopConfigurationListener.onConfigurationOpen( activeConfig, true );\n        }\n\n        log.logDetailed( BaseMessages.getString( PKG, \"HadoopConfigurationBootstrap.HadoopConfiguration.Loaded\" ),\n          provider.getConfigurations().size(), hadoopConfigurationsDir );\n      } catch ( Exception ex ) 
{\n        if ( ex instanceof ConfigurationException ) {\n          throw (ConfigurationException) ex;\n        } else {\n          throw new ConfigurationException( BaseMessages.getString( PKG,\n            \"HadoopConfigurationBootstrap.HadoopConfiguration.StartupError\" ), ex );\n        }\n      }\n    }\n  }\n\n  /**\n   * Initialize the Hadoop configuration provider for the plugin. We're currently relying on a file-based configuration\n   * provider: {@link HadoopConfigurationLocator}.\n   *\n   * @param hadoopConfigurationsDir\n   * @return\n   * @throws ConfigurationException\n   */\n  protected HadoopConfigurationProvider initializeHadoopConfigurationProvider( FileObject hadoopConfigurationsDir )\n    throws ConfigurationException {\n    final String activeConfigurationId = getWillBeActiveConfigurationId();\n    HadoopConfigurationLocator locator = new HadoopConfigurationLocator() {\n      @Override\n      protected ClassLoader createConfigurationLoader( FileObject root, ClassLoader parent, List<URL> classpathUrls,\n                                                       ShimProperties configurationProperties, String... 
ignoredClasses ) throws ConfigurationException {\n        ClassLoader classLoader =\n          super.createConfigurationLoader( root, parent, classpathUrls, configurationProperties, ignoredClasses );\n\n        for ( HadoopConfigurationListener listener : hadoopConfigurationListeners ) {\n          listener.onClassLoaderAvailable( classLoader );\n        }\n\n        return classLoader;\n      }\n    };\n    locator.init( hadoopConfigurationsDir, new ActiveHadoopConfigurationLocator() {\n      @Override\n      public String getActiveConfigurationId() throws ConfigurationException {\n        return activeConfigurationId;\n      }\n    }, new DefaultFileSystemManager() );\n    return locator;\n  }\n\n  public synchronized List<HadoopConfigurationInfo> getHadoopConfigurationInfos()\n    throws KettleException, ConfigurationException, IOException {\n    List<HadoopConfigurationInfo> result = new ArrayList<>();\n    FileObject hadoopConfigurationsDir = resolveHadoopConfigurationsDirectory();\n    // If the folder doesn't exist, return an empty list\n    if ( hadoopConfigurationsDir.exists() ) {\n      String activeId = getActiveConfigurationId();\n      String willBeActiveId = getWillBeActiveConfigurationId();\n      for ( FileObject childFolder : hadoopConfigurationsDir.getChildren() ) {\n        if ( childFolder.getType() == FileType.FOLDER ) {\n          String id = childFolder.getName().getBaseName();\n          FileObject configPropertiesFile = childFolder.getChild( CONFIG_PROPERTIES );\n          if ( configPropertiesFile.exists() ) {\n            Properties properties = new Properties();\n            properties.load( configPropertiesFile.getContent().getInputStream() );\n            result.add( new HadoopConfigurationInfo( id, properties.getProperty( \"name\", id ),\n              id.equals( activeId ), willBeActiveId.equals( id ) ) );\n          }\n        }\n      }\n    }\n    return result;\n  }\n\n  /**\n   * Retrieves the plugin properties from disk every 
call. This allows the plugin properties to change at runtime.\n   *\n   * @return Properties loaded from \"$PLUGIN_DIR/plugin.properties\".\n   * @throws ConfigurationException Error loading properties file\n   */\n  public Properties getPluginProperties() throws ConfigurationException {\n    try {\n      return new PluginPropertiesUtil().loadPluginProperties( getPluginInterface() );\n    } catch ( Exception ex ) {\n      throw new ConfigurationException( BaseMessages.getString( PKG,\n        \"HadoopConfigurationBootstrap.UnableToLoadPluginProperties\" ), ex );\n    }\n  }\n\n  /**\n   * @return the {@link PluginInterface} for the HadoopSpoonPlugin. Will be used to resolve the plugin directory\n   * @throws KettleException Unable to locate ourselves in the Plugin Registry\n   */\n  protected PluginInterface getPluginInterface() throws KettleException {\n    if ( plugin == null ) {\n      PluginInterface pi =\n        PluginRegistry.getInstance().findPluginWithId( LifecyclePluginType.class, HadoopSpoonPlugin.PLUGIN_ID );\n      if ( pi == null ) {\n        throw new KettleException( BaseMessages.getString( PKG, \"HadoopConfigurationBootstrap.CannotLocatePlugin\" ) );\n      }\n      plugin = pi;\n    }\n    return plugin;\n  }\n\n  /**\n   * Find the location of the big data plugin. 
This relies on the Hadoop Job Executor job entry existing within the big\n   * data plugin.\n   *\n   * @return The VFS location of the big data plugin\n   * @throws ConfigurationException Error locating or accessing the plugin directory\n   */\n  public FileObject locatePluginDirectory() throws ConfigurationException {\n    FileObject dir = null;\n    boolean exists = false;\n    try {\n      dir = KettleVFS.getFileObject( getPluginInterface().getPluginDirectory().toExternalForm() );\n      exists = dir.exists();\n    } catch ( Exception e ) {\n      throw new ConfigurationException( BaseMessages.getString( PKG,\n        \"HadoopConfigurationBootstrap.PluginDirectoryNotFound\" ), e );\n    }\n    if ( !exists ) {\n      throw new ConfigurationException( BaseMessages.getString( PKG,\n        \"HadoopConfigurationBootstrap.PluginDirectoryNotFound\" ) );\n    }\n    return dir;\n  }\n\n  /**\n   * Resolve the directory to look for Hadoop configurations in. This is based on the plugin property {@link\n   * #PROPERTY_HADOOP_CONFIGURATIONS_PATH} in the plugin's properties file.\n   *\n   * @return Folder to look for Hadoop configurations within\n   * @throws ConfigurationException Error locating plugin directory\n   * @throws KettleException        Error resolving the Hadoop configurations path\n   * @throws IOException            Error loading plugin properties\n   */\n  public FileObject resolveHadoopConfigurationsDirectory() throws ConfigurationException, IOException, KettleException {\n    String hadoopConfigurationPath =\n      getPluginProperties().getProperty( PROPERTY_HADOOP_CONFIGURATIONS_PATH, DEFAULT_FOLDER_HADOOP_CONFIGURATIONS );\n    return locatePluginDirectory().resolveFile( hadoopConfigurationPath );\n  }\n\n  @Override\n  public synchronized String getActiveConfigurationId() throws ConfigurationException {\n    if ( provider != null ) {\n      return provider.getActiveConfiguration().getIdentifier();\n    }\n    return getWillBeActiveConfigurationId();\n  }\n\n  public synchronized void setActiveShim( String 
shimId ) throws ConfigurationException {\n    if ( provider != null && !shimId.equals( provider.getActiveConfiguration().getIdentifier() ) && prompter != null ) {\n      prompter.promptForRestart();\n    }\n    getPluginProperties().setProperty( PROPERTY_ACTIVE_HADOOP_CONFIGURATION, shimId );\n  }\n\n  public String getWillBeActiveConfigurationId() throws ConfigurationException {\n    Properties p;\n    try {\n      p = getPluginProperties();\n    } catch ( Exception ex ) {\n      throw new ConfigurationException( BaseMessages.getString( PKG,\n        \"HadoopConfigurationBootstrap.UnableToDetermineActiveConfiguration\" ), ex );\n    }\n    if ( !p.containsKey( PROPERTY_ACTIVE_HADOOP_CONFIGURATION ) ) {\n      throw new ConfigurationException( BaseMessages.getString( PKG,\n        \"HadoopConfigurationBootstrap.MissingActiveConfigurationProperty\", PROPERTY_ACTIVE_HADOOP_CONFIGURATION ) );\n    }\n    return p.getProperty( PROPERTY_ACTIVE_HADOOP_CONFIGURATION );\n  }\n\n  @Override\n  public void onEnvironmentInit() throws LifecycleException {\n    Properties pmrProperties = getPmrProperties();\n    String isPmr = pmrProperties.getProperty( \"isPmr\", \"false\" );\n    if ( \"true\".equals( isPmr ) ) {\n      try {\n        log.logDebug(\n          BaseMessages.getString( PKG, \"HadoopConfigurationBootstrap.HadoopConfiguration.InitializingShimPmr\" ) );\n        getInstance().getProvider();\n        log.logBasic(\n          BaseMessages.getString( PKG, \"HadoopConfigurationBootstrap.HadoopConfiguration.InitializedShimPmr\" ) );\n      } catch ( Exception e ) {\n        throw new LifecycleException( BaseMessages.getString( PKG,\n          \"HadoopConfigurationBootstrap.HadoopConfiguration.StartupError\" ), e, true );\n      }\n    }\n  }\n\n  @Override\n  public void onEnvironmentShutdown() {\n    // noop\n  }\n\n  public synchronized void registerHadoopConfigurationListener(\n    HadoopConfigurationListener hadoopConfigurationListener )\n    throws 
ConfigurationException {\n    if ( hadoopConfigurationListeners.add( hadoopConfigurationListener ) && provider != null ) {\n      hadoopConfigurationListener.onConfigurationOpen( getProvider().getActiveConfiguration(), true );\n    }\n  }\n\n  public void unregisterHadoopConfigurationListener( HadoopConfigurationListener hadoopConfigurationListener ) {\n    hadoopConfigurationListeners.remove( hadoopConfigurationListener );\n  }\n\n  public void notifyDependencyLoaded() {\n    getRemainingDependencies().countDown();\n  }\n\n  protected Properties getMergedPmrAndPluginProperties() {\n    Properties properties = new Properties();\n    try {\n      properties.putAll( getPluginProperties() );\n    } catch ( Exception ce ) {\n      // Ignore, will use defaults\n    }\n\n    properties.putAll( getPmrProperties() );\n\n    return properties;\n  }\n\n  protected CountDownLatch getRemainingDependencies() {\n    return remainingDependencies;\n  }\n\n  private Properties getPmrProperties() {\n    InputStream pmrProperties = HadoopConfigurationBootstrap.class.getClassLoader().getResourceAsStream(\n        PMR_PROPERTIES );\n    Properties properties = new Properties();\n    if ( pmrProperties != null ) {\n      try {\n        properties.load( pmrProperties );\n      } catch ( IOException ioe ) {\n        // pmr.properties not available\n      }\n    }\n    return properties;\n  }\n}\n"
  },
  {
    "path": "services-bootstrap/src/main/java/org/pentaho/big/data/hadoop/bootstrap/HadoopConfigurationListener.java",
    "content": "/*******************************************************************************\n *\n * Pentaho Big Data\n *\n * Copyright (C) 2002-2017 by Hitachi Vantara : http://www.pentaho.com\n *\n *******************************************************************************\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with\n * the License. You may obtain a copy of the License at\n *\n *    http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n *\n ******************************************************************************/\n\npackage org.pentaho.big.data.hadoop.bootstrap;\n\nimport org.pentaho.hadoop.shim.HadoopConfiguration;\n\n/**\n * Created by bryan on 6/8/15.\n */\npublic interface HadoopConfigurationListener {\n  void onClassLoaderAvailable( ClassLoader classLoader );\n\n  void onConfigurationOpen( HadoopConfiguration hadoopConfiguration, boolean defaultConfiguration );\n\n  void onConfigurationClose( HadoopConfiguration hadoopConfiguration );\n}\n"
  },
  {
    "path": "services-bootstrap/src/main/java/org/pentaho/big/data/services/bootstrap/BigDataCEServiceInitializerImpl.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.services.bootstrap;\n\nimport org.pentaho.di.core.database.DatabaseInterface;\nimport org.pentaho.database.IDatabaseDialect;\nimport org.pentaho.di.core.exception.KettlePluginException;\nimport org.pentaho.di.core.plugins.DatabasePluginType;\nimport org.pentaho.platform.engine.core.system.PentahoSystem;\nimport com.pentaho.big.data.bundles.impl.shim.hbase.HBaseServiceFactory;\nimport com.pentaho.big.data.bundles.impl.shim.hdfs.HadoopFileSystemFactoryImpl;\nimport com.pentaho.big.data.bundles.impl.shim.hive.HiveDriver;\nimport com.pentaho.big.data.bundles.impl.shim.hive.ImpalaDriver;\nimport com.pentaho.big.data.bundles.impl.shim.hive.ImpalaSimbaDriver;\nimport com.pentaho.big.data.bundles.impl.shim.hive.SparkSimbaDriver;\nimport org.apache.commons.vfs2.FileObject;\nimport org.apache.commons.vfs2.FileSystemException;\nimport org.apache.logging.log4j.Logger;\nimport org.pentaho.authentication.mapper.api.AuthenticationMappingManager;\nimport org.pentaho.big.data.api.cluster.service.locator.impl.NamedClusterServiceLocatorImpl;\nimport org.pentaho.big.data.api.jdbc.impl.ClusterInitializingDriver;\nimport org.pentaho.big.data.api.jdbc.impl.DriverLocatorImpl;\nimport org.pentaho.big.data.api.jdbc.impl.JdbcUrlParserImpl;\nimport org.pentaho.big.data.api.shims.LegacyShimLocator;\nimport org.pentaho.big.data.hadoop.bootstrap.HadoopConfigurationBootstrap;\nimport org.pentaho.big.data.impl.cluster.tests.hdfs.GatewayListHomeDirectoryTest;\nimport org.pentaho.big.data.impl.cluster.tests.hdfs.GatewayListRootDirectoryTest;\nimport 
org.pentaho.big.data.impl.cluster.tests.hdfs.GatewayPingFileSystemEntryPoint;\nimport org.pentaho.big.data.impl.cluster.tests.hdfs.GatewayWriteToAndDeleteFromUsersHomeFolderTest;\nimport org.pentaho.big.data.impl.cluster.tests.kafka.KafkaConnectTest;\nimport org.pentaho.big.data.impl.cluster.tests.mr.GatewayPingJobTrackerTest;\nimport org.pentaho.big.data.impl.cluster.tests.oozie.GatewayPingOozieHostTest;\nimport org.pentaho.big.data.impl.cluster.tests.zookeeper.GatewayPingZookeeperEnsembleTest;\nimport org.pentaho.big.data.impl.shim.HadoopClientServicesFactory;\nimport org.pentaho.big.data.impl.shim.format.FormatServiceFactory;\nimport org.pentaho.big.data.impl.shim.mapreduce.MapReduceServiceFactoryImpl;\nimport org.pentaho.big.data.impl.shim.mapreduce.TransformationVisitorService;\nimport org.pentaho.big.data.impl.vfs.hdfs.AzureHdInsightsFileNameParser;\nimport org.pentaho.big.data.impl.vfs.hdfs.HDFSFileNameParser;\nimport org.pentaho.big.data.impl.vfs.hdfs.HDFSFileProvider;\nimport org.pentaho.big.data.impl.vfs.hdfs.MapRFileNameParser;\nimport org.pentaho.big.data.impl.vfs.hdfs.nc.NamedClusterProvider;\nimport org.pentaho.bigdata.api.hdfs.impl.HadoopFileSystemLocatorImpl;\nimport org.pentaho.di.core.bowl.DefaultBowl;\nimport org.pentaho.di.core.exception.KettleException;\nimport org.pentaho.di.core.hadoop.HadoopSpoonPlugin;\nimport org.pentaho.di.core.plugins.LifecyclePluginType;\nimport org.pentaho.di.core.plugins.PluginInterface;\nimport org.pentaho.di.core.plugins.PluginRegistry;\nimport org.pentaho.di.core.service.ServiceProvider;\nimport org.pentaho.di.core.service.ServiceProviderInterface;\nimport org.pentaho.di.core.vfs.KettleVFS;\nimport org.pentaho.hadoop.shim.HadoopConfiguration;\nimport org.pentaho.hadoop.shim.HadoopConfigurationLocator;\nimport org.pentaho.hadoop.shim.api.ConfigurationException;\nimport org.pentaho.hadoop.shim.api.core.ShimIdentifierInterface;\nimport org.pentaho.hadoop.shim.api.hdfs.HadoopFileSystemFactory;\nimport 
org.pentaho.hadoop.shim.api.jdbc.JdbcUrlParser;\nimport org.pentaho.hadoop.shim.api.services.BigDataServicesInitializer;\nimport org.pentaho.hadoop.shim.common.CommonFormatShim;\nimport org.pentaho.hadoop.shim.spi.HadoopShim;\nimport org.pentaho.hbase.shim.common.HBaseShimImpl;\nimport org.pentaho.runtime.test.RuntimeTester;\nimport org.pentaho.runtime.test.i18n.impl.BaseMessagesMessageGetterFactoryImpl;\nimport org.pentaho.runtime.test.impl.RuntimeTesterImpl;\nimport org.pentaho.runtime.test.network.impl.ConnectivityTestFactoryImpl;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Level;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.lang.reflect.InvocationTargetException;\nimport java.util.ArrayList;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Properties;\nimport java.util.concurrent.Executors;\n\n@ServiceProvider(id = \"BigDataCEServiceInitializer\", description = \"\", provides = BigDataServicesInitializer.class)\npublic class BigDataCEServiceInitializerImpl implements BigDataServicesInitializer, ServiceProviderInterface<BigDataServicesInitializer> {\n  protected static final Logger logger = LogManager.getLogger( BigDataCEServiceInitializerImpl.class );\n  private static final String LOGGING_PROPERTIES_FILE = \"bigdata-logging.properties\";\n  private static final String LOGGER_PREFIX = \"logger.\";\n\n\n  @Override\n  public void doInitialize() {\n    // Register loggers from properties file first\n    registerLoggers();\n\n    // Initialize Big Data logging configuration\n    BigDataLogConfig.initializeBigDataLogging();\n\n    logger.info( \"Starting Pentaho Big Data Plugin bootstrap process.\" );\n    try {\n      HadoopShim hadoopShim = initializeCommonServices();\n      if ( hadoopShim == null ) {\n        return;\n      }\n\n      List<String> shimAvailableServices = hadoopShim.getAvailableServices();\n      AuthenticationMappingManager 
authenticationMappingManager =\n        initializeAuthenticationManager( hadoopShim, shimAvailableServices );\n      HadoopFileSystemLocatorImpl hadoopFileSystemLocator =\n        initializeHdfsServices( hadoopShim, shimAvailableServices, authenticationMappingManager );\n      NamedClusterServiceLocatorImpl namedClusterServiceLocator = NamedClusterServiceLocatorImpl\n        .getInstance( hadoopShim.getShimIdentifier().getId() );\n      initializeFormatServices( hadoopShim, shimAvailableServices, namedClusterServiceLocator );\n      initializeMapReduceServices( hadoopShim, shimAvailableServices, authenticationMappingManager,\n        namedClusterServiceLocator );\n      initializeSqoopServices( hadoopShim, shimAvailableServices, authenticationMappingManager,\n        namedClusterServiceLocator );\n      initializeHiveServices( hadoopShim, shimAvailableServices, authenticationMappingManager );\n      initializeHBaseServices( hadoopShim, shimAvailableServices, authenticationMappingManager,\n        namedClusterServiceLocator );\n      initializeYarnServices( hadoopShim, shimAvailableServices, authenticationMappingManager,\n        hadoopFileSystemLocator, namedClusterServiceLocator );\n      initializeRuntimeTests( hadoopFileSystemLocator, namedClusterServiceLocator );\n      registerBigDataDatabaseDialects();\n\n    } catch ( ConfigurationException | ClassNotFoundException | IllegalAccessException | InstantiationException |\n              KettlePluginException e ) {\n      logger.error(\n        \"There was an error during the Pentaho Big Data Plugin bootstrap process. 
Some Big Data features may not be \"\n          + \"available after startup.\",\n        e );\n    }\n\n    logger.info( \"Finished Pentaho Big Data Plugin bootstrap process.\" );\n\n  }\n\n  /**\n   * Register all loggers from the bigdata-logging.properties file.\n   * Loggers are registered dynamically based on the file contents.\n   */\n  protected void registerLoggers() {\n    logger.debug( \"Registering Big Data loggers from {}\", LOGGING_PROPERTIES_FILE );\n\n    Properties props = new Properties();\n    InputStream is = null;\n    try {\n      // Get the plugin interface to locate the plugin directory\n      PluginInterface pluginInterface = PluginRegistry.getInstance()\n        .findPluginWithId( LifecyclePluginType.class, HadoopSpoonPlugin.PLUGIN_ID );\n\n      if ( pluginInterface == null ) {\n        logger.warn( \"Could not find Big Data plugin in registry - cannot load {}\", LOGGING_PROPERTIES_FILE );\n        return;\n      }\n\n      // Construct the path to the logging properties file in the plugin root directory\n      FileObject pluginDir = KettleVFS.getInstance( DefaultBowl.getInstance() )\n        .getFileObject( pluginInterface.getPluginDirectory().getPath() );\n      FileObject loggingPropsFile = pluginDir.resolveFile( LOGGING_PROPERTIES_FILE );\n\n      if ( !loggingPropsFile.exists() ) {\n        logger.warn( \"Could not find {} in plugin directory {} - no loggers will be registered\",\n          LOGGING_PROPERTIES_FILE, pluginDir.getName().getPath() );\n        return;\n      }\n\n      is = loggingPropsFile.getContent().getInputStream();\n      props.load( is );\n      logger.debug( \"Loaded logging configuration from {}\", loggingPropsFile.getName().getPath() );\n\n      int registeredCount = 0;\n      for ( String propName : props.stringPropertyNames() ) {\n        if ( propName.startsWith( LOGGER_PREFIX ) ) {\n          String loggerName = propName.substring( LOGGER_PREFIX.length() );\n          String levelStr = props.getProperty( 
propName );\n\n          try {\n            Level level = Level.toLevel( levelStr, Level.INFO );\n            BigDataLogConfig.registerLogger( loggerName, level );\n            logger.debug( \"Registered logger: {} = {}\", loggerName, level );\n            registeredCount++;\n          } catch ( Exception e ) {\n            logger.warn( \"Invalid log level '{}' for logger '{}', defaulting to INFO\", levelStr, loggerName );\n            BigDataLogConfig.registerLogger( loggerName, Level.INFO );\n            registeredCount++;\n          }\n        }\n      }\n      logger.debug( \"Registered {} Big Data loggers\", registeredCount );\n    } catch ( KettleException e ) {\n      logger.error( \"Error accessing plugin directory for {} - no loggers will be registered\",\n        LOGGING_PROPERTIES_FILE, e );\n    } catch ( IOException e ) {\n      logger.error( \"Error loading {} - no loggers will be registered\", LOGGING_PROPERTIES_FILE, e );\n    } finally {\n      if ( is != null ) {\n        try {\n          is.close();\n        } catch ( IOException e ) {\n          logger.warn( \"Error closing input stream for {}\", LOGGING_PROPERTIES_FILE, e );\n        }\n      }\n    }\n  }\n\n  @Override\n  public int getPriority() {\n    return 100;\n  }\n\n  @Override\n  public boolean useProxyWrap() {\n    return true;\n  }\n\n  /**\n   * Initialize common Hadoop services and configuration\n   *\n   * @return HadoopShim instance or null if no configuration found\n   */\n  protected HadoopShim initializeCommonServices() throws ConfigurationException {\n    logger.debug( \"Bootstrapping the Common Services.\" );\n    HadoopConfigurationBootstrap hadoopConfigurationBootstrap = HadoopConfigurationBootstrap.getInstance();\n    HadoopConfigurationLocator hadoopConfigurationProvider =\n      ( HadoopConfigurationLocator ) hadoopConfigurationBootstrap.getProvider();\n    if ( hadoopConfigurationProvider == null ) {\n      logger.info( \"No Hadoop active configuration found.\" );\n   
   return null;\n    }\n    HadoopConfiguration hadoopConfiguration = hadoopConfigurationProvider.getActiveConfiguration();\n    HadoopShim hadoopShim = hadoopConfiguration.getHadoopShim();\n\n    // Add active shim to the Legacy Shim Locator\n    List<ShimIdentifierInterface> registeredShims = new ArrayList<>();\n    registeredShims.add( hadoopShim.getShimIdentifier() );\n    LegacyShimLocator.getInstance().setRegisteredShims( registeredShims );\n\n    return hadoopShim;\n  }\n\n  /**\n   * Initialize authentication manager service\n   *\n   * @param hadoopShim            the Hadoop shim\n   * @param shimAvailableServices list of available services\n   * @return AuthenticationMappingManager instance or null\n   */\n  protected AuthenticationMappingManager initializeAuthenticationManager( HadoopShim hadoopShim,\n                                                                          List<String> shimAvailableServices ) {\n    logger.debug( \"Bootstrapping the authentication manager service.\" );\n    logger.debug( \"No authentication manager service available in CE - continuing without authentication\" );\n    return null;\n  }\n\n  /**\n   * Initialize HDFS services and file providers\n   *\n   * @param hadoopShim                   the Hadoop shim\n   * @param shimAvailableServices        list of available services\n   * @param authenticationMappingManager authentication manager\n   * @return HadoopFileSystemLocatorImpl instance or null\n   */\n  protected HadoopFileSystemLocatorImpl initializeHdfsServices( HadoopShim hadoopShim,\n                                                                List<String> shimAvailableServices,\n                                                                AuthenticationMappingManager authenticationMappingManager ) {\n    logger.debug( \"Bootstrapping the HDFS Services.\" );\n    HadoopFileSystemLocatorImpl hadoopFileSystemLocator = null;\n    if ( shimAvailableServices.contains( \"hdfs\" ) ) {\n      List<String> 
availableHdfsOptions = hadoopShim.getServiceOptions( \"hdfs\" );\n      List<String> availableHdfsSchemas = hadoopShim.getAvailableHdfsSchemas();\n      List<HadoopFileSystemFactory> hadoopFileSystemFactoryList = new ArrayList<>();\n\n      if ( availableHdfsOptions.contains( \"hdfs\" ) ) {\n        logger.debug( \"Adding 'hdfs' factory.\" );\n        HadoopFileSystemFactory hadoopFileSystemFactory =\n          new HadoopFileSystemFactoryImpl( hadoopShim, hadoopShim.getShimIdentifier() );\n        hadoopFileSystemFactoryList.add( hadoopFileSystemFactory );\n      }\n\n      hadoopFileSystemLocator = HadoopFileSystemLocatorImpl.getInstance();\n      hadoopFileSystemLocator.setHadoopFileSystemFactories( hadoopFileSystemFactoryList );\n\n      initializeHdfsSchemas( hadoopFileSystemLocator, availableHdfsSchemas );\n    } else {\n      logger.debug( \"No HDFS Services defined.\" );\n    }\n    return hadoopFileSystemLocator;\n  }\n\n  /**\n   * Initialize HDFS schemas (hdfs, maprfs, wasb, etc.)\n   *\n   * @param hadoopFileSystemLocator the file system locator\n   * @param availableHdfsSchemas    list of available HDFS schemas\n   */\n  protected void initializeHdfsSchemas( HadoopFileSystemLocatorImpl hadoopFileSystemLocator,\n                                        List<String> availableHdfsSchemas ) {\n    if ( availableHdfsSchemas.contains( \"hdfs\" ) ) {\n      logger.debug( \"Adding 'hdfs' schema.\" );\n      try {\n        HDFSFileProvider hdfsHDFSFileProvider =\n          new HDFSFileProvider( hadoopFileSystemLocator, \"hdfs\", HDFSFileNameParser.getInstance() );\n      } catch ( FileSystemException e ) {\n        throw new RuntimeException( e );\n      }\n    }\n    if ( availableHdfsSchemas.contains( \"maprfs\" ) ) {\n      logger.debug( \"Adding 'maprfs' schema.\" );\n      try {\n        HDFSFileProvider maprfsHDFSFileProvider =\n          new HDFSFileProvider( hadoopFileSystemLocator, \"maprfs\", MapRFileNameParser.getInstance() );\n      } catch ( 
FileSystemException e ) {\n        throw new RuntimeException( e );\n      }\n    }\n    if ( availableHdfsSchemas.contains( \"escalefs\" ) ) {\n      logger.debug( \"Adding 'escalefs' schema.\" );\n      try {\n        HDFSFileProvider escalefsHDFSFileProvider =\n          new HDFSFileProvider( hadoopFileSystemLocator, \"escalefs\", MapRFileNameParser.getInstance() );\n      } catch ( FileSystemException e ) {\n        throw new RuntimeException( e );\n      }\n    }\n    if ( availableHdfsSchemas.contains( \"wasb\" ) ) {\n      logger.debug( \"Adding 'wasb' schema.\" );\n      try {\n        HDFSFileProvider wasbHDFSFileProvider =\n          new HDFSFileProvider( hadoopFileSystemLocator, \"wasb\", AzureHdInsightsFileNameParser.getInstance() );\n      } catch ( FileSystemException e ) {\n        throw new RuntimeException( e );\n      }\n    }\n    if ( availableHdfsSchemas.contains( \"wasbs\" ) ) {\n      logger.debug( \"Adding 'wasbs' schema.\" );\n      try {\n        HDFSFileProvider wasbsHDFSFileProvider =\n          new HDFSFileProvider( hadoopFileSystemLocator, \"wasbs\", AzureHdInsightsFileNameParser.getInstance() );\n      } catch ( FileSystemException e ) {\n        throw new RuntimeException( e );\n      }\n    }\n    if ( availableHdfsSchemas.contains( \"abfs\" ) ) {\n      logger.debug( \"Adding 'abfs' schema.\" );\n      try {\n        HDFSFileProvider abfsHDFSFileProvider =\n          new HDFSFileProvider( hadoopFileSystemLocator, \"abfs\", AzureHdInsightsFileNameParser.getInstance() );\n      } catch ( FileSystemException e ) {\n        throw new RuntimeException( e );\n      }\n    }\n    if ( availableHdfsSchemas.contains( \"hc\" ) ) {\n      logger.debug( \"Adding 'hc' schema.\" );\n      try {\n        NamedClusterProvider namedClusterProvider =\n          new NamedClusterProvider( hadoopFileSystemLocator, \"hc\", HDFSFileNameParser.getInstance() );\n      } catch ( FileSystemException e ) {\n        throw new RuntimeException( e );\n      
}\n\n      // Initialize UI NamedClusterProvider if available (Spoon environment)\n      initializeUINamedClusterProvider();\n    }\n  }\n\n  /**\n   * Initialize format service factories (Parquet, ORC, Avro, etc.)\n   *\n   * @param hadoopShim                 the Hadoop shim\n   * @param shimAvailableServices      list of available services\n   * @param namedClusterServiceLocator the service locator\n   */\n  protected void initializeFormatServices( HadoopShim hadoopShim,\n                                           List<String> shimAvailableServices,\n                                           NamedClusterServiceLocatorImpl namedClusterServiceLocator ) {\n    logger.debug( \"Bootstrap the common format service factories.\" );\n    if ( shimAvailableServices.contains( \"common_formats\" ) ) {\n      CommonFormatShim commonFormatShim = new CommonFormatShim();\n      FormatServiceFactory formatServiceFactory = new FormatServiceFactory( commonFormatShim );\n      Map formatFactoryMap = new HashMap<String, String>();\n      formatFactoryMap.put( \"shim\", hadoopShim.getShimIdentifier().getId() );\n      formatFactoryMap.put( \"service\", \"format\" );\n      namedClusterServiceLocator.factoryAdded( formatServiceFactory, formatFactoryMap );\n    } else {\n      logger.debug( \"No common format service factories defined.\" );\n    }\n  }\n\n  /**\n   * Initialize MapReduce service factories\n   *\n   * @param hadoopShim                   the Hadoop shim\n   * @param shimAvailableServices        list of available services\n   * @param authenticationMappingManager authentication manager\n   * @param namedClusterServiceLocator   the service locator\n   */\n  protected void initializeMapReduceServices( HadoopShim hadoopShim,\n                                              List<String> shimAvailableServices,\n                                              AuthenticationMappingManager authenticationMappingManager,\n                                              
NamedClusterServiceLocatorImpl namedClusterServiceLocator ) {\n    logger.debug( \"Bootstrap the mapreduce service factories.\" );\n    if ( shimAvailableServices.contains( \"mapreduce\" ) ) {\n      List<String> availableMapreduceOptions = hadoopShim.getServiceOptions( \"mapreduce\" );\n      List<TransformationVisitorService> visitorServices = new ArrayList<>();\n      Map mapReducefactoryMap = new HashMap<String, String>();\n      mapReducefactoryMap.put( \"shim\", hadoopShim.getShimIdentifier().getId() );\n      mapReducefactoryMap.put( \"service\", \"mapreduce\" );\n\n      if ( availableMapreduceOptions.contains( \"mapreduce\" ) ) {\n        logger.debug( \"Adding 'mapreduce' factory.\" );\n        MapReduceServiceFactoryImpl mapReduceServiceFactory = new MapReduceServiceFactoryImpl(\n          hadoopShim,\n          Executors.newCachedThreadPool(),\n          visitorServices\n        );\n        namedClusterServiceLocator.factoryAdded( mapReduceServiceFactory, mapReducefactoryMap );\n      }\n    } else {\n      logger.debug( \"No mapreduce service factories defined.\" );\n    }\n  }\n\n  /**\n   * Initialize Sqoop (Hadoop client) service factories\n   *\n   * @param hadoopShim                   the Hadoop shim\n   * @param shimAvailableServices        list of available services\n   * @param authenticationMappingManager authentication manager\n   * @param namedClusterServiceLocator   the service locator\n   */\n  protected void initializeSqoopServices( HadoopShim hadoopShim,\n                                          List<String> shimAvailableServices,\n                                          AuthenticationMappingManager authenticationMappingManager,\n                                          NamedClusterServiceLocatorImpl namedClusterServiceLocator ) {\n    logger.debug( \"Bootstrap the hadoop client (Sqoop) service factories.\" );\n    if ( shimAvailableServices.contains( \"sqoop\" ) ) {\n      List<String> availableSqoopOptions = 
hadoopShim.getServiceOptions( \"sqoop\" );\n      Map hadoopClientFactoryMap = new HashMap<String, String>();\n      hadoopClientFactoryMap.put( \"shim\", hadoopShim.getShimIdentifier().getId() );\n      hadoopClientFactoryMap.put( \"service\", \"shimservices\" );\n\n      if ( availableSqoopOptions.contains( \"sqoop\" ) ) {\n        logger.debug( \"Adding 'sqoop' factory.\" );\n        HadoopClientServicesFactory hadoopClientServicesFactory = new HadoopClientServicesFactory( hadoopShim );\n        namedClusterServiceLocator.factoryAdded( hadoopClientServicesFactory, hadoopClientFactoryMap );\n      }\n    } else {\n      logger.debug( \"No hadoop client (Sqoop) service factories defined.\" );\n    }\n  }\n\n  /**\n   * Initialize Hive service drivers\n   *\n   * @param hadoopShim                   the Hadoop shim\n   * @param shimAvailableServices        list of available services\n   * @param authenticationMappingManager authentication manager\n   */\n  protected void initializeHiveServices( HadoopShim hadoopShim,\n                                         List<String> shimAvailableServices,\n                                         AuthenticationMappingManager authenticationMappingManager ) throws ClassNotFoundException, IllegalAccessException, InstantiationException {\n    logger.debug( \"Bootstrap the Hive services.\" );\n    if ( shimAvailableServices.contains( \"hive\" ) ) {\n      JdbcUrlParser jdbcUrlParser = JdbcUrlParserImpl.getInstance();\n      DriverLocatorImpl driverLocator = DriverLocatorImpl.getInstance();\n      int pentaho_jdbc_lazydrivers_num = 5;\n      ClusterInitializingDriver clusterInitializingDriver = new ClusterInitializingDriver(\n        jdbcUrlParser,\n        driverLocator,\n        pentaho_jdbc_lazydrivers_num );\n      List<String> availableHiveDrivers = hadoopShim.getAvailableHiveDrivers();\n\n      // Register CE Hive drivers\n      registerHiveDrivers( hadoopShim, availableHiveDrivers, jdbcUrlParser, driverLocator );\n    } else 
{\n      logger.debug( \"No Hive services defined.\" );\n    }\n  }\n\n  /**\n   * We used to register these via the Activation.java class when running in OSGi. We now have to register all\n   * big-data dialects one by one using PentahoSystem.registerObject. Reflection is needed because the bootstrap jar\n   * in the big-data plugin has its own classloader, and pdi-hive-core.jar, which contains all these dialects, is\n   * not loaded from that classloader.\n   */\n  protected void registerBigDataDatabaseDialects() throws KettlePluginException {\n    if ( PentahoSystem.getInitializedOK() ) {\n      try {\n        PluginRegistry registry = PluginRegistry.getInstance();\n        PluginInterface plugin = registry.getPlugin( DatabasePluginType.class, \"HIVE2\" );\n        //Register a DatabaseMeta in the Pentaho System\n        Object hive2Meta = registry.loadClass( plugin );\n        PentahoSystem.registerObject( hive2Meta, DatabaseInterface.class );\n        //Register a dialect in the Pentaho System\n        Class<IDatabaseDialect> hive2Dialect = registry.getClass( plugin, \"org.pentaho.big.data.kettle.plugins.hive.Hive2DatabaseDialect\" );\n        PentahoSystem.registerObject( hive2Dialect.getDeclaredConstructor().newInstance(), IDatabaseDialect.class );\n\n        plugin = registry.getPlugin( DatabasePluginType.class, \"IMPALA\" );\n        //Register a DatabaseMeta in the Pentaho System\n        Object impalaMeta = registry.loadClass( plugin );\n        PentahoSystem.registerObject( impalaMeta, DatabaseInterface.class );\n        //Register a dialect in the Pentaho System\n        Class<IDatabaseDialect> impalaDialect = registry.getClass( plugin, \"org.pentaho.big.data.kettle.plugins.hive.ImpalaDatabaseDialect\" );\n        PentahoSystem.registerObject( impalaDialect.getDeclaredConstructor().newInstance(), IDatabaseDialect.class );\n\n        plugin = registry.getPlugin( DatabasePluginType.class, \"IMPALASIMBA\" );\n        //Register a DatabaseMeta in the Pentaho System\n    
    Object impalaSimbaMeta = registry.loadClass( plugin );\n        PentahoSystem.registerObject( impalaSimbaMeta, DatabaseInterface.class );\n        //Register a dialect in the Pentaho System\n        Class<IDatabaseDialect> impalaSimbaDialect = registry.getClass( plugin, \"org.pentaho.big.data.kettle.plugins.hive.ImpalaSimbaDatabaseDialect\" );\n        PentahoSystem.registerObject( impalaSimbaDialect.getDeclaredConstructor().newInstance(), IDatabaseDialect.class );\n\n        plugin = registry.getPlugin( DatabasePluginType.class, \"SPARKSIMBA\" );\n        //Register a DatabaseMeta in the Pentaho System\n        Object sparkSimbaMeta = registry.loadClass( plugin );\n        PentahoSystem.registerObject( sparkSimbaMeta, DatabaseInterface.class );\n        //Register a dialect in the Pentaho System\n        Class<IDatabaseDialect> sparkSimbaDialect = registry.getClass( plugin, \"org.pentaho.big.data.kettle.plugins.hive.SparkSimbaDatabaseDialect\" );\n        PentahoSystem.registerObject( sparkSimbaDialect.getDeclaredConstructor().newInstance(), IDatabaseDialect.class );\n      } catch ( NoSuchMethodException | InvocationTargetException | InstantiationException | IllegalAccessException e ) {\n        throw new RuntimeException( e );\n      }\n    }\n  }\n\n  /**\n   * Register Community Edition Hive drivers\n   *\n   * @param hadoopShim           the Hadoop shim\n   * @param availableHiveDrivers list of available Hive drivers\n   * @param jdbcUrlParser        JDBC URL parser\n   * @param driverLocator        driver locator\n   */\n  protected void registerHiveDrivers( HadoopShim hadoopShim,\n                                      List<String> availableHiveDrivers,\n                                      JdbcUrlParser jdbcUrlParser,\n                                      
DriverLocatorImpl driverLocator ) throws ClassNotFoundException, IllegalAccessException, InstantiationException {\n    if ( availableHiveDrivers.contains( \"hive\" ) ) {\n      logger.debug( \"Adding 'hive' driver.\" );\n      HiveDriver hiveDriver = new HiveDriver(\n        jdbcUrlParser,\n        \"org.apache.hive.jdbc.HiveDriver\",\n        hadoopShim.getShimIdentifier().getId() );\n      driverLocator.registerDriver( hiveDriver );\n    }\n    if ( availableHiveDrivers.contains( \"impala\" ) ) {\n      logger.debug( \"Adding 'impala' driver.\" );\n      ImpalaDriver impalaDriver = new ImpalaDriver(\n        jdbcUrlParser,\n        \"com.cloudera.impala.jdbc41.Driver\",\n        hadoopShim.getShimIdentifier().getId() );\n      driverLocator.registerDriver( impalaDriver );\n    }\n    if ( availableHiveDrivers.contains( \"impala_simba\" ) ) {\n      logger.debug( \"Adding 'impala_simba' driver.\" );\n      ImpalaSimbaDriver impalaSimbaDriver = new ImpalaSimbaDriver(\n        jdbcUrlParser,\n        \"com.cloudera.impala.jdbc41.Driver\",\n        hadoopShim.getShimIdentifier().getId() );\n      driverLocator.registerDriver( impalaSimbaDriver );\n    }\n    if ( availableHiveDrivers.contains( \"spark_simba\" ) ) {\n      logger.debug( \"Adding 'spark_simba' driver.\" );\n      SparkSimbaDriver sparkSimbaDriver = new SparkSimbaDriver(\n        jdbcUrlParser,\n        \"org.apache.hive.jdbc.HiveDriver\",\n        hadoopShim.getShimIdentifier().getId() );\n      driverLocator.registerDriver( sparkSimbaDriver );\n    }\n  }\n\n  /**\n   * Initialize HBase service factories\n   *\n   * @param hadoopShim                   the Hadoop shim\n   * @param shimAvailableServices        list of available services\n   * @param authenticationMappingManager authentication manager\n   * @param namedClusterServiceLocator   the service locator\n   */\n  protected void initializeHBaseServices( HadoopShim hadoopShim,\n                                          List<String> shimAvailableServices,\n                                          AuthenticationMappingManager authenticationMappingManager,\n                                          NamedClusterServiceLocatorImpl namedClusterServiceLocator ) throws ConfigurationException {\n    logger.debug( \"Bootstrap the HBase services.\" );\n    if ( shimAvailableServices.contains( \"hbase\" ) ) {\n      List<String> availableHbaseOptions = hadoopShim.getServiceOptions( \"hbase\" );\n      HBaseShimImpl hBaseShim = new HBaseShimImpl();\n      Map<String, String> hBaseServiceFactoryMap = new HashMap<>();\n      hBaseServiceFactoryMap.put( \"shim\", hadoopShim.getShimIdentifier().getId() );\n      hBaseServiceFactoryMap.put( \"service\", \"hbase\" );\n\n      if ( availableHbaseOptions.contains( \"hbase\" ) ) {\n        logger.debug( \"Adding 'hbase' factory.\" );\n        HBaseServiceFactory hBaseServiceFactory = new HBaseServiceFactory( hBaseShim );\n        namedClusterServiceLocator.factoryAdded( hBaseServiceFactory, hBaseServiceFactoryMap );\n      }\n    } else {\n      logger.debug( \"No HBase services defined.\" );\n    }\n  }\n\n  /**\n   * Initialize Yarn service factories\n   *\n   * @param hadoopShim                   the Hadoop shim\n   * @param shimAvailableServices        list of available services\n   * @param authenticationMappingManager authentication manager\n   * @param hadoopFileSystemLocator      file system locator\n   * @param namedClusterServiceLocator   the service locator\n   */\n  protected void initializeYarnServices( HadoopShim hadoopShim,\n                                         List<String> shimAvailableServices,\n                                         AuthenticationMappingManager authenticationMappingManager,\n                                         HadoopFileSystemLocatorImpl hadoopFileSystemLocator,\n                                         
NamedClusterServiceLocatorImpl namedClusterServiceLocator ) {\n    logger.debug( \"Bootstrap the Yarn services.\" );\n    // CE version does not support Yarn - EE version will override\n    logger.debug( \"No Yarn services available in CE\" );\n  }\n\n  /**\n   * Initialize the UI NamedClusterProvider for the Spoon environment.\n   * This provider is optional and only available in UI contexts.\n   */\n  protected void initializeUINamedClusterProvider() {\n    try {\n      // Try to instantiate the UI NamedClusterProvider (browse plugin); the instance itself is not needed\n      new org.pentaho.big.data.impl.browse.NamedClusterProvider();\n      logger.debug( \"UI NamedClusterProvider initialized successfully for Spoon environment\" );\n    } catch ( NoClassDefFoundError | Exception e ) {\n      logger.debug(\n        \"The UI NamedClusterProvider could not be instantiated. This is OK for Pentaho Server but it should be \"\n          + \"examined for Spoon.\", e );\n    }\n  }\n\n  /**\n   * Initialize runtime tests for cluster connectivity and health checks\n   *\n   * @param hadoopFileSystemLocator    file system locator\n   * @param namedClusterServiceLocator the service locator\n   */\n  protected void initializeRuntimeTests( HadoopFileSystemLocatorImpl hadoopFileSystemLocator,\n                                         NamedClusterServiceLocatorImpl namedClusterServiceLocator ) {\n    logger.debug( \"Bootstrap the run time tests.\" );\n    RuntimeTester runtimeTester = RuntimeTesterImpl.getInstance();\n    runtimeTester.addRuntimeTest(\n      new GatewayPingFileSystemEntryPoint( BaseMessagesMessageGetterFactoryImpl.getInstance(),\n        new ConnectivityTestFactoryImpl() ) );\n    runtimeTester.addRuntimeTest( new GatewayPingJobTrackerTest( BaseMessagesMessageGetterFactoryImpl.getInstance(),\n      new ConnectivityTestFactoryImpl() ) );\n    runtimeTester.addRuntimeTest( new GatewayPingOozieHostTest( 
BaseMessagesMessageGetterFactoryImpl.getInstance(),\n      new ConnectivityTestFactoryImpl() ) );\n    runtimeTester.addRuntimeTest(\n      new GatewayPingZookeeperEnsembleTest( BaseMessagesMessageGetterFactoryImpl.getInstance(),\n        new ConnectivityTestFactoryImpl() ) );\n    runtimeTester.addRuntimeTest(\n      new GatewayListRootDirectoryTest( BaseMessagesMessageGetterFactoryImpl.getInstance(),\n        new ConnectivityTestFactoryImpl(), hadoopFileSystemLocator ) );\n    runtimeTester.addRuntimeTest(\n      new GatewayListHomeDirectoryTest( BaseMessagesMessageGetterFactoryImpl.getInstance(),\n        new ConnectivityTestFactoryImpl(), hadoopFileSystemLocator ) );\n    runtimeTester.addRuntimeTest(\n      new GatewayWriteToAndDeleteFromUsersHomeFolderTest( BaseMessagesMessageGetterFactoryImpl.getInstance(),\n        hadoopFileSystemLocator ) );\n    runtimeTester.addRuntimeTest(\n      new KafkaConnectTest( BaseMessagesMessageGetterFactoryImpl.getInstance(), namedClusterServiceLocator ) );\n  }\n\n\n}\n"
  },
  {
    "path": "services-bootstrap/src/main/java/org/pentaho/big/data/services/bootstrap/BigDataLogConfig.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.services.bootstrap;\n\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport org.apache.logging.log4j.core.LoggerContext;\n\n/**\n * Configuration class for Big Data plugin logging.\n * The actual logging configuration is defined in log4j2.xml in the plugin resources,\n * which configures Big Data loggers to write to the existing Pentaho logs (pdi.log and console).\n * This class ensures the Log4j2 configuration is properly initialized and provides utility methods.\n */\npublic class BigDataLogConfig {\n\n  private static boolean initialized = false;\n  private static final Logger logger = LogManager.getLogger( BigDataLogConfig.class );\n\n  /**\n   * Initializes the Big Data logging configuration.\n   * This method looks up the active Log4j2 configuration and verifies that it is loaded.\n   * The actual logger configuration is defined in the log4j2.xml file in the plugin resources,\n   * which ensures Big Data logs are written to existing Pentaho log files (pdi.log).\n   */\n  public static synchronized void initializeBigDataLogging() {\n    if ( initialized ) {\n      logger.debug( \"Big Data logging already initialized\" );\n      return;\n    }\n\n    try {\n      // Get the Log4j2 context so the active configuration can be reported\n      LoggerContext context = ( LoggerContext ) LogManager.getContext( false );\n\n      // Log the configuration source\n      logger.debug( \"Big Data Plugin logging initialized. 
Configuration source: \" +\n        context.getConfiguration().getName() );\n      logger.debug( \"Big Data logs will be written to logs/pdi.log and console output\" );\n\n      initialized = true;\n\n    } catch ( Exception e ) {\n      logger.error( \"Failed to initialize Big Data logging configuration\", e );\n    }\n  }\n\n  /**\n   * Gets a logger for the Big Data plugin with the configured appender.\n   *\n   * @param clazz the class requesting the logger\n   * @return configured Logger instance\n   */\n  public static Logger getBigDataLogger( Class<?> clazz ) {\n    // Ensure initialization before returning logger\n    if ( !initialized ) {\n      initializeBigDataLogging();\n    }\n    return LogManager.getLogger( clazz );\n  }\n\n  /**\n   * Gets a logger for the Big Data plugin with the configured appender.\n   *\n   * @param name the name for the logger\n   * @return configured Logger instance\n   */\n  public static Logger getBigDataLogger( String name ) {\n    // Ensure initialization before returning logger\n    if ( !initialized ) {\n      initializeBigDataLogging();\n    }\n    return LogManager.getLogger( name );\n  }\n\n  /**\n   * Checks if the Big Data logging has been initialized.\n   *\n   * @return true if initialized, false otherwise\n   */\n  public static boolean isInitialized() {\n    return initialized;\n  }\n\n  /**\n   * Resets the initialization flag (mainly for testing purposes).\n   */\n  protected static void resetInitialization() {\n    initialized = false;\n  }\n\n  /**\n   * Registers a logger with a specific log level.\n   * This method dynamically configures a logger at runtime.\n   *\n   * @param loggerName the name of the logger to register\n   * @param level      the Log4j2 Level to set for this logger\n   * @return true if the logger was successfully registered, false otherwise\n   */\n  public static boolean registerLogger( String loggerName, org.apache.logging.log4j.Level level ) {\n    try {\n      LoggerContext context 
= ( LoggerContext ) LogManager.getContext( false );\n      org.apache.logging.log4j.core.config.Configuration config = context.getConfiguration();\n      org.apache.logging.log4j.core.config.LoggerConfig loggerConfig = config.getLoggerConfig( loggerName );\n\n      // getLoggerConfig returns an ancestor (ultimately the root) config when no config exists for this exact name\n      if ( !loggerName.equals( loggerConfig.getName() ) ) {\n        logger.debug( \"Logger '\" + loggerName + \"' does not exist; registering it\" );\n        loggerConfig = new org.apache.logging.log4j.core.config.LoggerConfig( loggerName, level, true );\n        config.addLogger( loggerName, loggerConfig );\n        context.updateLoggers();\n      } else {\n        logger.debug( \"Logger '\" + loggerName + \"' already exists; skipping registration\" );\n      }\n      return true;\n    } catch ( Exception e ) {\n      logger.error( \"Failed to register logger: \" + loggerName, e );\n      return false;\n    }\n  }\n}\n"
  },
  {
    "path": "services-bootstrap/src/main/java/org/pentaho/big/data/services/bootstrap/BigDataPluginLifecycleListener.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.services.bootstrap;\n\nimport org.apache.logging.log4j.Logger;\nimport org.pentaho.di.core.exception.KettlePluginException;\nimport org.pentaho.di.core.service.PluginServiceLoader;\nimport org.pentaho.di.core.annotations.KettleLifecyclePlugin;\nimport org.pentaho.di.core.lifecycle.KettleLifecycleListener;\nimport org.pentaho.hadoop.shim.api.services.BigDataServicesInitializer;\n\nimport java.util.Collection;\n\n@KettleLifecyclePlugin( id = \"BigDataPlugin\", name = \"Big Data Plugin\" )\npublic class BigDataPluginLifecycleListener implements KettleLifecycleListener {\n\n  protected static final Logger logger = BigDataLogConfig.getBigDataLogger( BigDataPluginLifecycleListener.class );\n\n  @Override\n  public void onEnvironmentInit() {\n    try {\n      Collection<BigDataServicesInitializer> bigDataServicesInitializerCollection =\n        PluginServiceLoader.loadServices( BigDataServicesInitializer.class );\n      BigDataServicesInitializer bigDataServicesInitializer = bigDataServicesInitializerCollection.stream()\n        .findFirst()\n        .orElseThrow( () -> new IllegalStateException( \"No BigDataServicesInitializer service is registered\" ) );\n      bigDataServicesInitializer.doInitialize();\n    } catch ( KettlePluginException e ) {\n      throw new RuntimeException( e );\n    }\n  }\n\n  @Override\n  public void onEnvironmentShutdown() {\n    // No action needed on exit\n  }\n}\n"
  },
  {
    "path": "services-bootstrap/src/test/java/org/pentaho/big/data/services/bootstrap/BigDataCEServiceInitializerImplTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.services.bootstrap;\n\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Ignore;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.mockito.Mock;\nimport org.mockito.MockedStatic;\nimport org.mockito.Mockito;\nimport org.mockito.junit.MockitoJUnitRunner;\nimport org.pentaho.big.data.hadoop.bootstrap.HadoopConfigurationBootstrap;\nimport org.pentaho.big.data.api.jdbc.impl.JdbcUrlParserImpl;\nimport org.pentaho.big.data.api.jdbc.impl.DriverLocatorImpl;\nimport org.pentaho.big.data.impl.cluster.NamedClusterManager;\nimport org.pentaho.hadoop.shim.HadoopConfiguration;\nimport org.pentaho.hadoop.shim.HadoopConfigurationLocator;\nimport org.pentaho.hadoop.shim.api.ConfigurationException;\nimport org.pentaho.hadoop.shim.api.internal.ShimIdentifier;\nimport org.pentaho.hadoop.shim.spi.HadoopShim;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.List;\n\nimport static org.junit.Assert.*;\nimport static org.mockito.Mockito.*;\n\n/**\n * Unit test for BigDataCEServiceInitializerImpl\n */\n@RunWith(MockitoJUnitRunner.class)\npublic class BigDataCEServiceInitializerImplTest {\n\n  private BigDataCEServiceInitializerImpl initializer;\n\n  @Mock(lenient = true)\n  private HadoopConfigurationBootstrap hadoopConfigurationBootstrap;\n\n  @Mock(lenient = true)\n  private HadoopConfigurationLocator hadoopConfigurationLocator;\n\n  @Mock(lenient = true)\n  private HadoopConfiguration hadoopConfiguration;\n\n  @Mock(lenient = true)\n  private HadoopShim hadoopShim;\n\n  @Mock(lenient = 
true)\n  private ShimIdentifier shimIdentifier;\n\n  @Mock\n  private JdbcUrlParserImpl jdbcUrlParser;\n\n  @Mock\n  private DriverLocatorImpl driverLocator;\n\n  @Mock\n  private NamedClusterManager namedClusterManager;\n\n  private MockedStatic<HadoopConfigurationBootstrap> hadoopConfigurationBootstrapMockedStatic;\n  private MockedStatic<JdbcUrlParserImpl> jdbcUrlParserMockedStatic;\n  private MockedStatic<DriverLocatorImpl> driverLocatorMockedStatic;\n  private MockedStatic<NamedClusterManager> namedClusterManagerMockedStatic;\n\n  @Before\n  public void setUp() throws ConfigurationException {\n    initializer = new BigDataCEServiceInitializerImpl();\n\n    // Set up static mocks\n    hadoopConfigurationBootstrapMockedStatic = Mockito.mockStatic( HadoopConfigurationBootstrap.class );\n    hadoopConfigurationBootstrapMockedStatic.when( HadoopConfigurationBootstrap::getInstance )\n      .thenReturn( hadoopConfigurationBootstrap );\n\n    jdbcUrlParserMockedStatic = Mockito.mockStatic( JdbcUrlParserImpl.class );\n    jdbcUrlParserMockedStatic.when( JdbcUrlParserImpl::getInstance )\n      .thenReturn( jdbcUrlParser );\n\n    driverLocatorMockedStatic = Mockito.mockStatic( DriverLocatorImpl.class );\n    driverLocatorMockedStatic.when( DriverLocatorImpl::getInstance )\n      .thenReturn( driverLocator );\n\n    namedClusterManagerMockedStatic = Mockito.mockStatic( NamedClusterManager.class );\n    namedClusterManagerMockedStatic.when( NamedClusterManager::getInstance )\n      .thenReturn( namedClusterManager );\n\n    // Setup common mock behavior\n    when( hadoopConfigurationBootstrap.getProvider() ).thenReturn( hadoopConfigurationLocator );\n    when( hadoopConfigurationLocator.getActiveConfiguration() ).thenReturn( hadoopConfiguration );\n    when( hadoopConfiguration.getHadoopShim() ).thenReturn( hadoopShim );\n    when( hadoopShim.getShimIdentifier() ).thenReturn( shimIdentifier );\n    when( shimIdentifier.getId() ).thenReturn( \"test-shim\" );\n  }\n\n  @After\n  public void tearDown() {\n    if ( hadoopConfigurationBootstrapMockedStatic != null ) {\n      hadoopConfigurationBootstrapMockedStatic.close();\n    }\n    if ( jdbcUrlParserMockedStatic != null ) {\n      jdbcUrlParserMockedStatic.close();\n    }\n    if ( driverLocatorMockedStatic != null ) {\n      driverLocatorMockedStatic.close();\n    }\n    if ( namedClusterManagerMockedStatic != null ) {\n      namedClusterManagerMockedStatic.close();\n    }\n  }\n\n  @Test\n  public void testGetPriority() {\n    assertEquals( 100, initializer.getPriority() );\n  }\n\n  @Test\n  public void testUseProxyWrap() {\n    assertTrue( initializer.useProxyWrap() );\n  }\n\n  @Test\n  public void testInitializeCommonServices_Success() throws ConfigurationException {\n    // Test successful initialization\n    HadoopShim result = initializer.initializeCommonServices();\n\n    assertNotNull( result );\n    assertEquals( hadoopShim, result );\n    verify( hadoopConfigurationBootstrap ).getProvider();\n    verify( hadoopConfigurationLocator ).getActiveConfiguration();\n  }\n\n  @Test\n  public void testInitializeCommonServices_NoProvider() throws ConfigurationException {\n    // Test when no provider is available\n    when( hadoopConfigurationBootstrap.getProvider() ).thenReturn( null );\n\n    HadoopShim result = initializer.initializeCommonServices();\n\n    assertNull( result );\n  }\n\n  @Test\n  public void testInitializeAuthenticationManager() {\n    List<String> services = new ArrayList<>();\n\n    // CE version should return null\n    assertNull( initializer.initializeAuthenticationManager( hadoopShim, services ) );\n  }\n\n  @Test\n  @Ignore(\"VFS filesystem conflicts - Multiple providers registered for URL scheme 'hdfs'\")\n  public void testInitializeHdfsServices_WithHdfsService() {\n    List<String> shimAvailableServices = Arrays.asList( \"hdfs\" );\n    List<String> hdfsOptions = Arrays.asList( \"hdfs\" );\n    List<String> hdfsSchemas = Arrays.asList( \"hdfs\" );\n\n    when( 
hadoopShim.getAvailableServices() ).thenReturn( shimAvailableServices );\n    when( hadoopShim.getServiceOptions( \"hdfs\" ) ).thenReturn( hdfsOptions );\n    when( hadoopShim.getAvailableHdfsSchemas() ).thenReturn( hdfsSchemas );\n\n    var result = initializer.initializeHdfsServices( hadoopShim, shimAvailableServices, null );\n\n    assertNotNull( result );\n  }\n\n  @Test\n  public void testInitializeHdfsServices_WithoutHdfsService() {\n    List<String> shimAvailableServices = new ArrayList<>();\n\n    var result = initializer.initializeHdfsServices( hadoopShim, shimAvailableServices, null );\n\n    assertNull( result );\n  }\n\n  @Test\n  @Ignore(\"CE version passes null for namedClusterServiceLocator\")\n  public void testInitializeFormatServices_WithCommonFormats() {\n    List<String> shimAvailableServices = Arrays.asList( \"common_formats\" );\n\n    when( hadoopShim.getAvailableServices() ).thenReturn( shimAvailableServices );\n\n    // Should not throw exception\n    initializer.initializeFormatServices( hadoopShim, shimAvailableServices, null );\n  }\n\n  @Test\n  public void testInitializeFormatServices_WithoutCommonFormats() {\n    List<String> shimAvailableServices = new ArrayList<>();\n\n    // Should not throw exception\n    initializer.initializeFormatServices( hadoopShim, shimAvailableServices, null );\n  }\n\n  @Test\n  @Ignore(\"CE version passes null for namedClusterServiceLocator\")\n  public void testInitializeMapReduceServices_WithMapReduce() {\n    List<String> shimAvailableServices = Arrays.asList( \"mapreduce\" );\n    List<String> mapreduceOptions = Arrays.asList( \"mapreduce\" );\n\n    when( hadoopShim.getAvailableServices() ).thenReturn( shimAvailableServices );\n    when( hadoopShim.getServiceOptions( \"mapreduce\" ) ).thenReturn( mapreduceOptions );\n\n    // Should not throw exception\n    initializer.initializeMapReduceServices( hadoopShim, shimAvailableServices, null, null );\n  }\n\n  @Test\n  public void 
testInitializeMapReduceServices_WithoutMapReduce() {\n    List<String> shimAvailableServices = new ArrayList<>();\n\n    // Should not throw exception\n    initializer.initializeMapReduceServices( hadoopShim, shimAvailableServices, null, null );\n  }\n\n  @Test\n  @Ignore(\"CE version passes null for namedClusterServiceLocator\")\n  public void testInitializeSqoopServices_WithSqoop() {\n    List<String> shimAvailableServices = Arrays.asList( \"sqoop\" );\n    List<String> sqoopOptions = Arrays.asList( \"sqoop\" );\n\n    when( hadoopShim.getAvailableServices() ).thenReturn( shimAvailableServices );\n    when( hadoopShim.getServiceOptions( \"sqoop\" ) ).thenReturn( sqoopOptions );\n\n    // Should not throw exception\n    initializer.initializeSqoopServices( hadoopShim, shimAvailableServices, null, null );\n  }\n\n  @Test\n  public void testInitializeSqoopServices_WithoutSqoop() {\n    List<String> shimAvailableServices = new ArrayList<>();\n\n    // Should not throw exception\n    initializer.initializeSqoopServices( hadoopShim, shimAvailableServices, null, null );\n  }\n\n  @Test\n  public void testInitializeHiveServices_WithHive() throws Exception {\n    List<String> shimAvailableServices = Arrays.asList( \"hive\" );\n    List<String> hiveDrivers = Arrays.asList( \"hive\", \"impala\", \"impala_simba\", \"spark_simba\" );\n\n    when( hadoopShim.getAvailableServices() ).thenReturn( shimAvailableServices );\n    when( hadoopShim.getAvailableHiveDrivers() ).thenReturn( hiveDrivers );\n\n    // Should not throw exception\n    initializer.initializeHiveServices( hadoopShim, shimAvailableServices, null );\n\n    // Verify driver registration was attempted\n    verify( driverLocator, atLeastOnce() ).registerDriver( any() );\n  }\n\n  @Test\n  public void testInitializeHiveServices_WithoutHive() throws Exception {\n    List<String> shimAvailableServices = new ArrayList<>();\n\n    // Should not throw exception\n    initializer.initializeHiveServices( hadoopShim, 
shimAvailableServices, null );\n\n    // Verify no driver registration was attempted\n    verify( driverLocator, never() ).registerDriver( any() );\n  }\n\n  @Test\n  @Ignore(\"CE version does not implement HBase service registration\")\n  public void testInitializeHBaseServices_WithHBase() throws Exception {\n    List<String> shimAvailableServices = Arrays.asList( \"hbase\" );\n    List<String> hbaseOptions = Arrays.asList( \"hbase\" );\n\n    when( hadoopShim.getAvailableServices() ).thenReturn( shimAvailableServices );\n    when( hadoopShim.getServiceOptions( \"hbase\" ) ).thenReturn( hbaseOptions );\n\n    // Should not throw exception\n    initializer.initializeHBaseServices( hadoopShim, shimAvailableServices, null, null );\n  }\n\n  @Test\n  public void testInitializeHBaseServices_WithoutHBase() throws Exception {\n    List<String> shimAvailableServices = new ArrayList<>();\n\n    // Should not throw exception\n    initializer.initializeHBaseServices( hadoopShim, shimAvailableServices, null, null );\n  }\n\n  @Test\n  public void testInitializeYarnServices() {\n    List<String> shimAvailableServices = new ArrayList<>();\n\n    // CE version does nothing - should not throw exception\n    initializer.initializeYarnServices( hadoopShim, shimAvailableServices, null, null, null );\n  }\n\n  @Test\n  public void testInitializeRuntimeTests() {\n    // Should not throw exception\n    initializer.initializeRuntimeTests( null, null );\n  }\n\n  @Test\n  public void testDoInitialize_WithNoConfiguration() throws ConfigurationException {\n    when( hadoopConfigurationBootstrap.getProvider() ).thenReturn( null );\n\n    // Should complete without throwing exception\n    initializer.doInitialize();\n  }\n\n  @Test\n  public void testDoInitialize_WithConfiguration() throws ConfigurationException {\n    List<String> shimAvailableServices = new ArrayList<>();\n    when( hadoopShim.getAvailableServices() ).thenReturn( shimAvailableServices );\n\n    // Should complete without 
throwing exception and should call registerLoggers\n    initializer.doInitialize();\n\n    // Verify BigDataLogConfig was initialized (indirectly through successful execution)\n    assertTrue( BigDataLogConfig.isInitialized() );\n  }\n\n  @Test\n  @Ignore(\"VFS filesystem conflicts - Multiple providers registered for URL scheme 'hdfs'\")\n  public void testDoInitialize_WithFullServices() {\n    List<String> shimAvailableServices = Arrays.asList(\n      \"hdfs\", \"common_formats\", \"mapreduce\", \"sqoop\", \"hive\", \"hbase\"\n    );\n    List<String> hdfsOptions = Arrays.asList( \"hdfs\" );\n    List<String> hdfsSchemas = Arrays.asList( \"hdfs\" );\n    List<String> mapreduceOptions = Arrays.asList( \"mapreduce\" );\n    List<String> sqoopOptions = Arrays.asList( \"sqoop\" );\n    List<String> hbaseOptions = Arrays.asList( \"hbase\" );\n    List<String> hiveDrivers = Arrays.asList( \"hive\" );\n\n    when( hadoopShim.getAvailableServices() ).thenReturn( shimAvailableServices );\n    when( hadoopShim.getServiceOptions( \"hdfs\" ) ).thenReturn( hdfsOptions );\n    when( hadoopShim.getAvailableHdfsSchemas() ).thenReturn( hdfsSchemas );\n    when( hadoopShim.getServiceOptions( \"mapreduce\" ) ).thenReturn( mapreduceOptions );\n    when( hadoopShim.getServiceOptions( \"sqoop\" ) ).thenReturn( sqoopOptions );\n    when( hadoopShim.getServiceOptions( \"hbase\" ) ).thenReturn( hbaseOptions );\n    when( hadoopShim.getAvailableHiveDrivers() ).thenReturn( hiveDrivers );\n\n    // Should complete without throwing exception\n    initializer.doInitialize();\n  }\n\n  @Test\n  public void testRegisterHiveDrivers_AllDriverTypes() throws Exception {\n    List<String> hiveDrivers = Arrays.asList( \"hive\", \"impala\", \"impala_simba\", \"spark_simba\" );\n\n    initializer.registerHiveDrivers( hadoopShim, hiveDrivers, jdbcUrlParser, driverLocator );\n\n    // Verify all 4 drivers were registered\n    verify( driverLocator, times( 4 ) ).registerDriver( any() );\n  }\n\n  @Test\n  
public void testRegisterHiveDrivers_OnlyHive() throws Exception {\n    List<String> hiveDrivers = Arrays.asList( \"hive\" );\n\n    initializer.registerHiveDrivers( hadoopShim, hiveDrivers, jdbcUrlParser, driverLocator );\n\n    // Verify only 1 driver was registered\n    verify( driverLocator, times( 1 ) ).registerDriver( any() );\n  }\n\n  @Test\n  public void testRegisterHiveDrivers_NoDrivers() throws Exception {\n    List<String> hiveDrivers = new ArrayList<>();\n\n    initializer.registerHiveDrivers( hadoopShim, hiveDrivers, jdbcUrlParser, driverLocator );\n\n    // Verify no drivers were registered\n    verify( driverLocator, never() ).registerDriver( any() );\n  }\n\n  @Test\n  public void testInitializeUINamedClusterProvider() {\n    // Should not throw exception even if UI provider is not available\n    initializer.initializeUINamedClusterProvider();\n  }\n\n  @Test\n  public void testRegisterLoggers_WithValidPropertiesFile() {\n    // Test that registerLoggers loads and processes properties correctly\n    // This is an integration test that requires the bigdata-logging.properties file\n    // in the classpath\n    try {\n      initializer.registerLoggers();\n      // Completing without an exception means the method worked\n    } catch ( Exception e ) {\n      fail( \"registerLoggers should not throw exception: \" + e.getMessage() );\n    }\n  }\n\n  @Test\n  public void testRegisterLoggers_CallsWithoutException() {\n    // Test that registerLoggers can be called without throwing exception\n    // even if properties file is not found\n    try {\n      initializer.registerLoggers();\n    } catch ( Exception e ) {\n      fail( \"registerLoggers should handle missing file gracefully: \" + e.getMessage() );\n    }\n  }\n}\n"
  },
  {
    "path": "services-bootstrap/src/test/java/org/pentaho/big/data/services/bootstrap/BigDataLogConfigTest.java",
    "content": "/*! ******************************************************************************\n *\n * Pentaho\n *\n * Copyright (C) 2024 by Hitachi Vantara, LLC : http://www.pentaho.com\n *\n * Use of this software is governed by the Business Source License included\n * in the LICENSE.TXT file.\n *\n * Change Date: 2029-07-20\n ******************************************************************************/\n\npackage org.pentaho.big.data.services.bootstrap;\n\nimport org.apache.logging.log4j.Level;\nimport org.apache.logging.log4j.LogManager;\nimport org.apache.logging.log4j.Logger;\nimport org.apache.logging.log4j.core.LoggerContext;\nimport org.apache.logging.log4j.core.config.Configuration;\nimport org.apache.logging.log4j.core.config.LoggerConfig;\nimport org.junit.After;\nimport org.junit.Before;\nimport org.junit.Test;\n\nimport static org.junit.Assert.*;\n\n/**\n * Unit tests for BigDataLogConfig\n */\npublic class BigDataLogConfigTest {\n\n    @Before\n    public void setUp() {\n        // Reset initialization state before each test\n        BigDataLogConfig.resetInitialization();\n    }\n\n    @After\n    public void tearDown() {\n        // Clean up after tests\n        BigDataLogConfig.resetInitialization();\n    }\n\n    @Test\n    public void testInitializeBigDataLogging_FirstTime() {\n        assertFalse(\"Should not be initialized initially\", BigDataLogConfig.isInitialized());\n\n        BigDataLogConfig.initializeBigDataLogging();\n\n        assertTrue(\"Should be initialized after first call\", BigDataLogConfig.isInitialized());\n    }\n\n    @Test\n    public void testInitializeBigDataLogging_MultipleCalls() {\n        BigDataLogConfig.initializeBigDataLogging();\n        assertTrue(\"Should be initialized after first call\", BigDataLogConfig.isInitialized());\n\n        // Call again\n        BigDataLogConfig.initializeBigDataLogging();\n        assertTrue(\"Should remain initialized after second call\", BigDataLogConfig.isInitialized());\n 
   }\n\n    @Test\n    public void testGetBigDataLogger_WithClass() {\n        Logger logger = BigDataLogConfig.getBigDataLogger(BigDataLogConfigTest.class);\n\n        assertNotNull(\"Logger should not be null\", logger);\n        assertEquals(\"Logger name should match class name\", \n                     BigDataLogConfigTest.class.getName(), \n                     logger.getName());\n        assertTrue(\"BigDataLogConfig should be initialized\", BigDataLogConfig.isInitialized());\n    }\n\n    @Test\n    public void testGetBigDataLogger_WithString() {\n        String loggerName = \"org.pentaho.test.logger\";\n        Logger logger = BigDataLogConfig.getBigDataLogger(loggerName);\n\n        assertNotNull(\"Logger should not be null\", logger);\n        assertEquals(\"Logger name should match\", loggerName, logger.getName());\n        assertTrue(\"BigDataLogConfig should be initialized\", BigDataLogConfig.isInitialized());\n    }\n\n    @Test\n    public void testGetBigDataLogger_EnsuresInitialization() {\n        assertFalse(\"Should not be initialized initially\", BigDataLogConfig.isInitialized());\n\n        BigDataLogConfig.getBigDataLogger(\"test.logger\");\n\n        assertTrue(\"Should be initialized after getting logger\", BigDataLogConfig.isInitialized());\n    }\n\n    @Test\n    public void testIsInitialized_InitialState() {\n        assertFalse(\"Should not be initialized initially\", BigDataLogConfig.isInitialized());\n    }\n\n    @Test\n    public void testIsInitialized_AfterInitialization() {\n        BigDataLogConfig.initializeBigDataLogging();\n        assertTrue(\"Should be initialized\", BigDataLogConfig.isInitialized());\n    }\n\n    @Test\n    public void testResetInitialization() {\n        BigDataLogConfig.initializeBigDataLogging();\n        assertTrue(\"Should be initialized\", BigDataLogConfig.isInitialized());\n\n        BigDataLogConfig.resetInitialization();\n        assertFalse(\"Should not be initialized after reset\", 
BigDataLogConfig.isInitialized());\n    }\n\n    @Test\n    public void testRegisterLogger_NewLogger() {\n        String loggerName = \"org.pentaho.big.data.test.newlogger\";\n        Level level = Level.DEBUG;\n\n        boolean result = BigDataLogConfig.registerLogger(loggerName, level);\n\n        assertTrue(\"Should successfully register new logger\", result);\n\n        // Verify the logger was registered with correct level\n        LoggerContext context = (LoggerContext) LogManager.getContext(false);\n        Configuration config = context.getConfiguration();\n        LoggerConfig loggerConfig = config.getLoggerConfig(loggerName);\n\n        assertNotNull(\"Logger config should exist\", loggerConfig);\n        assertEquals(\"Logger should have correct name\", loggerName, loggerConfig.getName());\n        assertEquals(\"Logger should have correct level\", level, loggerConfig.getLevel());\n    }\n\n    @Test\n    public void testRegisterLogger_ExistingLogger() {\n        String loggerName = \"org.pentaho.big.data.test.existing\";\n        Level initialLevel = Level.INFO;\n        Level newLevel = Level.DEBUG;\n\n        // Register logger first time\n        BigDataLogConfig.registerLogger(loggerName, initialLevel);\n\n        // Register same logger again with different level\n        boolean result = BigDataLogConfig.registerLogger(loggerName, newLevel);\n\n        assertTrue(\"Should handle existing logger without error\", result);\n\n        // Verify the logger exists (level may or may not be updated depending on logic)\n        LoggerContext context = (LoggerContext) LogManager.getContext(false);\n        Configuration config = context.getConfiguration();\n        LoggerConfig loggerConfig = config.getLoggerConfig(loggerName);\n\n        assertNotNull(\"Logger config should exist\", loggerConfig);\n        assertEquals(\"Logger should have correct name\", loggerName, loggerConfig.getName());\n    }\n\n    @Test\n    public void 
testRegisterLogger_MultipleLoggers() {\n        String logger1 = \"org.pentaho.big.data.logger1\";\n        String logger2 = \"org.apache.hadoop.logger2\";\n        String logger3 = \"com.pentaho.big.data.logger3\";\n\n        boolean result1 = BigDataLogConfig.registerLogger(logger1, Level.INFO);\n        boolean result2 = BigDataLogConfig.registerLogger(logger2, Level.WARN);\n        boolean result3 = BigDataLogConfig.registerLogger(logger3, Level.DEBUG);\n\n        assertTrue(\"Should register logger1 successfully\", result1);\n        assertTrue(\"Should register logger2 successfully\", result2);\n        assertTrue(\"Should register logger3 successfully\", result3);\n\n        // Verify all loggers exist under their own names\n        // (getLoggerConfig falls back to a parent config for unknown names,\n        // so a null check alone would always pass)\n        LoggerContext context = (LoggerContext) LogManager.getContext(false);\n        Configuration config = context.getConfiguration();\n\n        LoggerConfig config1 = config.getLoggerConfig(logger1);\n        LoggerConfig config2 = config.getLoggerConfig(logger2);\n        LoggerConfig config3 = config.getLoggerConfig(logger3);\n\n        assertEquals(\"Logger1 config should exist\", logger1, config1.getName());\n        assertEquals(\"Logger2 config should exist\", logger2, config2.getName());\n        assertEquals(\"Logger3 config should exist\", logger3, config3.getName());\n    }\n\n    @Test\n    public void testRegisterLogger_DifferentLevels() {\n        Level[] levels = {Level.TRACE, Level.DEBUG, Level.INFO, Level.WARN, Level.ERROR, Level.FATAL};\n\n        for (int i = 0; i < levels.length; i++) {\n            String loggerName = \"org.pentaho.test.level\" + i;\n            Level level = levels[i];\n\n            boolean result = BigDataLogConfig.registerLogger(loggerName, level);\n\n            assertTrue(\"Should register logger with \" + level + \" level\", result);\n\n            // Verify the logger has correct level\n            LoggerContext context = (LoggerContext) LogManager.getContext(false);\n            Configuration config = context.getConfiguration();\n            LoggerConfig loggerConfig = config.getLoggerConfig(loggerName);\n\n            assertNotNull(\"Logger config should exist for \" + loggerName, loggerConfig);\n            assertEquals(\"Logger should have correct level\", level, loggerConfig.getLevel());\n        }\n    }\n\n    @Test\n    public void testRegisterLogger_NullLoggerName() {\n        try {\n            boolean result = BigDataLogConfig.registerLogger(null, Level.INFO);\n            // If no exception is thrown, the call should report failure\n            assertFalse(\"Should handle null logger name gracefully\", result);\n        } catch (Exception e) {\n            // Throwing an exception for a null logger name is also acceptable\n        }\n    }\n\n    @Test\n    public void testRegisterLogger_EmptyLoggerName() {\n        String loggerName = \"\";\n        Level level = Level.INFO;\n\n        // Either outcome (registration or exception) is acceptable; the call must not crash\n        try {\n            BigDataLogConfig.registerLogger(loggerName, level);\n        } catch (Exception e) {\n            // An exception for an empty logger name is acceptable\n        }\n    }\n\n    @Test\n    public void testRegisterLogger_RootLogger() {\n        String rootLoggerName = \"\";\n        Level level = Level.DEBUG;\n\n        // The root logger has the empty name in Log4j 2; registering it should not crash\n        try {\n            BigDataLogConfig.registerLogger(rootLoggerName, level);\n        } catch (Exception e) {\n            // Exception is acceptable\n            assertTrue(\"Exception is acceptable for root logger\", 
true);\n        }\n    }\n\n    @Test\n    public void testRegisterLogger_HierarchicalLoggers() {\n        String parentLogger = \"org.pentaho.big.data\";\n        String childLogger = \"org.pentaho.big.data.services\";\n        String grandchildLogger = \"org.pentaho.big.data.services.bootstrap\";\n\n        boolean result1 = BigDataLogConfig.registerLogger(parentLogger, Level.INFO);\n        boolean result2 = BigDataLogConfig.registerLogger(childLogger, Level.DEBUG);\n        boolean result3 = BigDataLogConfig.registerLogger(grandchildLogger, Level.TRACE);\n\n        assertTrue(\"Should register parent logger\", result1);\n        assertTrue(\"Should register child logger\", result2);\n        assertTrue(\"Should register grandchild logger\", result3);\n\n        // Verify every level of the hierarchy is registered under its own name\n        // (getLoggerConfig falls back to a parent config for unknown names)\n        LoggerContext context = (LoggerContext) LogManager.getContext(false);\n        Configuration config = context.getConfiguration();\n\n        assertEquals(\"Parent logger should exist\", parentLogger, config.getLoggerConfig(parentLogger).getName());\n        assertEquals(\"Child logger should exist\", childLogger, config.getLoggerConfig(childLogger).getName());\n        assertEquals(\"Grandchild logger should exist\", grandchildLogger, config.getLoggerConfig(grandchildLogger).getName());\n    }\n\n    @Test\n    public void testRegisterLogger_ThreadSafety() throws InterruptedException {\n        // Test concurrent registration of loggers\n        final int threadCount = 10;\n        final int loggersPerThread = 10;\n        Thread[] threads = new Thread[threadCount];\n\n        for (int i = 0; i < threadCount; i++) {\n            final int threadId = i;\n            threads[i] = new Thread(() -> {\n                for (int j = 0; j < loggersPerThread; j++) {\n                    String loggerName = \"org.pentaho.test.thread\" + threadId + \".logger\" + j;\n                    BigDataLogConfig.registerLogger(loggerName, Level.INFO);\n                }\n            });\n            threads[i].start();\n        }\n\n        // Wait for 
all threads to complete\n        for (Thread thread : threads) {\n            thread.join();\n        }\n\n        // Verify all loggers were registered\n        LoggerContext context = (LoggerContext) LogManager.getContext(false);\n        Configuration config = context.getConfiguration();\n\n        int registeredCount = 0;\n        for (int i = 0; i < threadCount; i++) {\n            for (int j = 0; j < loggersPerThread; j++) {\n                String loggerName = \"org.pentaho.test.thread\" + i + \".logger\" + j;\n                LoggerConfig loggerConfig = config.getLoggerConfig(loggerName);\n                if (loggerConfig != null && loggerConfig.getName().equals(loggerName)) {\n                    registeredCount++;\n                }\n            }\n        }\n\n        // A lost registration under contention should fail the test, not pass silently\n        assertEquals(\"All concurrently registered loggers should be present\",\n                     threadCount * loggersPerThread, registeredCount);\n    }\n}\n"
  }
]